INTERNATIONAL SEMINAR ON
NUCLEAR WAR AND PLANETARY EMERGENCIES 30th Session: ANNIVERSARY CELEBRATIONS: THE PONTIFICAL ACADEMY OF SCIENCES 400TH - THE 'ETTORE MAJORANA' FOUNDATION AND CENTRE FOR SCIENTIFIC CULTURE 40TH - H.H. JOHN PAUL II APOSTOLATE 25TH - CLIMATE/GLOBAL WARMING: THE COSMIC RAY EFFECT; EFFECTS ON SPECIES AND BIODIVERSITY; HUMAN EFFECTS; PALEOCLIMATE IMPLICATIONS; EVIDENCE FOR GLOBAL WARMING - POLLUTION: ENDOCRINE DISRUPTING CHEMICALS; HAZARDOUS MATERIAL; LEGACY WASTES AND RADIOACTIVE WASTE MANAGEMENT IN USA, EUROPE, SOUTHEAST ASIA AND JAPAN - THE CULTURAL PLANETARY EMERGENCY: ROLE OF THE MEDIA; INTOLERANCE; TERRORISM; IRAQI PERSPECTIVE; OPEN FORUM DEBATE - AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE; AIDS VACCINE STRATEGIES - WATER: WATER CONFLICTS IN THE MIDDLE EAST - ENERGY: DEVELOPING COUNTRIES; MITIGATION OF GREENHOUSE WARMING - PERMANENT MONITORING PANELS REPORTS - WORKSHOPS: LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL; AIDS VACCINE STRATEGIES AND ETHICS
THE SCIENCE AND CULTURE SERIES Nuclear Strategy and Peace Technology
Series Editor: Antonino Zichichi
1981 - International Seminar on Nuclear War - 1st Session: The World-wide Implications of Nuclear War
1982 - International Seminar on Nuclear War - 2nd Session: How to Avoid a Nuclear War
1983 - International Seminar on Nuclear War - 3rd Session: The Technical Basis for Peace
1984 - International Seminar on Nuclear War - 4th Session: The Nuclear Winter and the New Defence Systems: Problems and Perspectives
1985 - International Seminar on Nuclear War - 5th Session: SDI, Computer Simulation, New Proposals to Stop the Arms Race
1986 - International Seminar on Nuclear War - 6th Session: International Cooperation: The Alternatives
1987 - International Seminar on Nuclear War - 7th Session: The Great Projects for Scientific Collaboration East-West-North-South
1988 - International Seminar on Nuclear War - 8th Session: The New Threats: Space and Chemical Weapons - What Can be Done with the Retired I.N.F. Missiles - Laser Technology
1989 - International Seminar on Nuclear War - 9th Session: The New Emergencies
1990 - International Seminar on Nuclear War - 10th Session: The New Role of Science
1991 - International Seminar on Nuclear War - 11th Session: Planetary Emergencies
1991 - International Seminar on Nuclear War - 12th Session: Science Confronted with War (unpublished)
1991 - International Seminar on Nuclear War and Planetary Emergencies - 13th Session: Satellite Monitoring of the Global Environment (unpublished)
1992 - International Seminar on Nuclear War and Planetary Emergencies - 14th Session: Innovative Technologies for Cleaning the Environment
1992 - International Seminar on Nuclear War and Planetary Emergencies - 15th Session (1st Seminar after Rio): Science and Technology to Save the Earth (unpublished)
1992 - International Seminar on Nuclear War and Planetary Emergencies - 16th Session (2nd Seminar after Rio): Proliferation of Weapons for Mass Destruction and Cooperation on Defence Systems
1993 - International Seminar on Planetary Emergencies - 17th Workshop: The Collision of an Asteroid or Comet with the Earth (unpublished)
1993 - International Seminar on Nuclear War and Planetary Emergencies - 18th Session (4th Seminar after Rio): Global Stability Through Disarmament
1994 - International Seminar on Nuclear War and Planetary Emergencies - 19th Session (5th Seminar after Rio): Science after the Cold War
1995 - International Seminar on Nuclear War and Planetary Emergencies - 20th Session (6th Seminar after Rio): The Role of Science in the Third Millennium
1996 - International Seminar on Nuclear War and Planetary Emergencies - 21st Session (7th Seminar after Rio): New Epidemics, Second Cold War, Decommissioning, Terrorism and Proliferation
1997 - International Seminar on Nuclear War and Planetary Emergencies - 22nd Session (8th Seminar after Rio): Nuclear Submarine Decontamination, Chemical Stockpiled Weapons, New Epidemics, Cloning of Genes, New Military Threats, Global Planetary Changes, Cosmic Objects & Energy
1998 - International Seminar on Nuclear War and Planetary Emergencies - 23rd Session (9th Seminar after Rio): Medicine & Biotechnologies, Proliferation & Weapons of Mass Destruction, Climatology & El Niño, Desertification, Defence Against Cosmic Objects, Water & Pollution, Food, Energy, Limits of Development, The Role of Permanent Monitoring Panels
1999 - International Seminar on Nuclear War and Planetary Emergencies - 24th Session: HIV/AIDS Vaccine Needs, Biotechnology, Neuropathologies, Development Sustainability - Focus Africa, Climate and Weather Predictions, Energy, Water, Weapons of Mass Destruction, The Role of Permanent Monitoring Panels, HIV Think Tank Workshop, Fertility Problems Workshop
2000 - International Seminar on Nuclear War and Planetary Emergencies - 25th Session: Water - Pollution, Biotechnology - Transgenic Plant Vaccine, Energy, Black Sea Pollution, AIDS - Mother-Infant HIV Transmission, Transmissible Spongiform Encephalopathy, Limits of Development - Megacities, Missile Proliferation and Defense, Information Security, Cosmic Objects, Desertification, Carbon Sequestration and Sustainability, Climatic Changes, Global Monitoring of Planet, Mathematics and Democracy, Science and Journalism, Permanent Monitoring Panel Reports, Water for Megacities Workshop, Black Sea Workshop, Transgenic Plants Workshop, Research Resources Workshop, Mother-Infant HIV Transmission Workshop, Sequestration and Desertification Workshop, Focus Africa Workshop
2001 - International Seminar on Nuclear War and Planetary Emergencies - 26th Session: AIDS and Infectious Diseases - Medication or Vaccination for Developing Countries; Missile Proliferation and Defense; Tchernobyl; Mathematics and Democracy; Transmissible Spongiform Encephalopathy; Floods and Extreme Weather Events - Coastal Zone Problems; Science and Technology for Developing Countries; Water - Transboundary Water Conflicts; Climatic Changes - Global Monitoring of the Planet; Information Security; Pollution in the Caspian Sea; Permanent Monitoring Panels Reports; Transmissible Spongiform Encephalopathy Workshop; AIDS and Infectious Diseases Workshop; Pollution Workshop
2002 - International Seminar on Nuclear War and Planetary Emergencies - 27th Session: Society and Structures: Historical Perspectives - Culture and Ideology; National and Regional Geopolitical Issues; Globalization - Economy and Culture; Human Rights - Freedom and Democracy Debate; Confrontations and Countermeasures: Present and Future Confrontations; Psychology of Terrorism; Defensive Countermeasures; Preventive Countermeasures; General Debate; Science and Technology: Emergencies; Pollution, Climate - Greenhouse Effect: Desertification, Water Pollution, Algal Bloom; Brain and Behaviour Diseases; The Cultural Emergency: General Debate and Conclusions; Permanent Monitoring Panel Reports; Information Security Workshop; Kangaroo Mother's Care Workshop; Brain and Behaviour Diseases Workshop
2003 - International Seminar on Nuclear War and Planetary Emergencies - 29th Session: Society and Structures: Culture and Ideology - Equity, Territorial and Economics - Psychology - Tools and Countermeasures - Worldwide Stability - Risk Analysis for Terrorism - The Asymmetric Threat - America's New "Exceptionalism" - Militant Islamist Groups: Motives and Mindsets - Analysing the New Approach - The Psychology of Crowds - Cultural Relativism - Economic and Socio-economic Causes and Consequences - The Problems of American Foreign Policy - Understanding Biological Risk - Chemical Threats and Responses - Bioterrorism - Nuclear Survival Criticalities - Responding to the Threats - National Security and Scientific Openness - Working Groups Reports and Recommendations
2004 - International Seminar on Nuclear War and Planetary Emergencies - 30th Session: Anniversary Celebrations: The Pontifical Academy of Sciences 400th - The 'Ettore Majorana' Foundation and Centre for Scientific Culture 40th - H.H. John Paul II Apostolate 25th - Climate/Global Warming: The Cosmic Ray Effect; Effects on Species and Biodiversity; Human Effects; Paleoclimate Implications; Evidence for Global Warming - Pollution: Endocrine Disrupting Chemicals; Hazardous Material; Legacy Wastes and Radioactive Waste Management in USA, Europe, Southeast Asia and Japan - The Cultural Planetary Emergency: Role of the Media; Intolerance; Terrorism; Iraqi Perspective; Open Forum Debate - AIDS and Infectious Diseases: Ethics in Medicine; AIDS Vaccine Strategies - Water: Water Conflicts in the Middle East - Energy: Developing Countries; Mitigation of Greenhouse Warming - Permanent Monitoring Panels Reports - Workshops: Long-Term Stewardship of Hazardous Material; AIDS Vaccine Strategies and Ethics
THE SCIENCE AND CULTURE SERIES
Nuclear Strategy and Peace Technology
"E. Majorana" Centre for Scientific Culture Erice, Italy, 18-26 August 2003
Series editor and Chairman: A. Zichichi
edited by R. Ragaini
World Scientific
New Jersey · London · Singapore · Shanghai · Hong Kong · Taipei · Chennai
Published by World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: Suite 202, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
INTERNATIONAL SEMINAR ON NUCLEAR WAR AND PLANETARY EMERGENCIES 30TH SESSION: ANNIVERSARY CELEBRATIONS: THE PONTIFICAL ACADEMY OF SCIENCES 400TH - THE 'ETTORE MAJORANA' FOUNDATION AND CENTRE FOR SCIENTIFIC CULTURE 40TH - H.H. JOHN PAUL II APOSTOLATE 25TH - CLIMATE/GLOBAL WARMING: THE COSMIC RAY EFFECT; EFFECTS ON SPECIES AND BIODIVERSITY; HUMAN EFFECTS; PALEOCLIMATE IMPLICATIONS; EVIDENCE FOR GLOBAL WARMING - POLLUTION: ENDOCRINE DISRUPTING CHEMICALS; HAZARDOUS MATERIAL; LEGACY WASTES AND RADIOACTIVE WASTE MANAGEMENT IN USA, EUROPE, SOUTHEAST ASIA AND JAPAN - THE CULTURAL PLANETARY EMERGENCY: ROLE OF THE MEDIA; INTOLERANCE; TERRORISM; IRAQI PERSPECTIVE; OPEN FORUM DEBATE - AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE; AIDS VACCINE STRATEGIES - WATER: WATER CONFLICTS IN THE MIDDLE EAST - ENERGY: DEVELOPING COUNTRIES; MITIGATION OF GREENHOUSE WARMING - PERMANENT MONITORING PANELS REPORTS - WORKSHOPS: LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL; AIDS VACCINE STRATEGIES AND ETHICS
Copyright © 2004 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 981-238-820-6
Printed in Singapore.
CONTENTS

1. CELEBRATIONS

Marcelo Sánchez Sorondo
The Pontifical Academy of Sciences 3

Alan Cook
Improving Natural Knowledge: The First Hundred Years of the Royal Society, 1660-1760 15

Guy Ourisson
The Académie des Sciences and French Centralisation 21

Antonino Zichichi
The 40th Anniversary of the 'Ettore Majorana' Foundation and Centre for Scientific Culture 28

Arnold Burgen
The Academia Europaea 41

Rocco Buttiglione
The 25th Anniversary of the Apostolate of H.H. John Paul II

2. CLIMATE: GLOBAL WARMING

Nir J. Shaviv
Climate Change and the Cosmic Ray Connection 47

A. Townsend Peterson
Climate Change Effects on Species and Biodiversity 59

B.D. Santer and T.M.L. Wigley
New Fingerprints of Human Effects on Climate 69

Michael E. Mann
Paleoclimate Implications for Recent Human Influence on Climate 86

David Parker and Chris Folland
Evidence for Global Warming 92

3. ENDOCRINE DISRUPTING CHEMICALS

J.P. Myers, L.J. Guillette, Jr., P. Palanza, S. Parmigiani, S.H. Swan and F.S. vom Saal
The Emerging Science of Endocrine Disruption 105

4. POLLUTION: LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL

Carlo Giovanardi
The Italian Policy for Waste Management and the Co-operation with the International Scientific Community

James H. Clarke, Lorne G. Everett and Stephen J. Kowall
Containment of Legacy Wastes During Stewardship 125

William R. Freudenburg (presented by James H. Clarke)
Public Involvement and Communication in the Long-Term Management of U.S. Nuclear Waste Sites 130

Allan G. Duncan
A European Perspective on Stakeholder Involvement in Nuclear Waste Management 135

Balamurugan Gurusamy
Hazardous Waste Management in Southeast Asia 140

Tomio Kawata
Responding to Fermi's Warning: Japanese Approach to Dealing with Radioactive Waste Problems 150

Stephen J. Kowall
The U.S. Approach to the Science and Technology of Legacy Waste Management 157

5. THE CULTURAL PLANETARY EMERGENCY: ROLE OF THE MEDIA

Michael Stürmer
Spin in War and Peace 163

6. THE CULTURAL PLANETARY EMERGENCY

Ahmad Kamal
Cultural Intolerance 169

Antonio Marzano
The Impact of the Planetary Emergencies on Worldwide Productivity and Cooperation with the International Scientific Community 174

Victor Kremenyuk
War on Terrorism: A Search for Focus 180

Hussain Al-Shahristani
Iraq After Saddam: An Iraqi Perspective 184

7. AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE

Diego Buriot
Health and Security: Severe Acute Respiratory Syndrome (SARS): Taking a New Threat Seriously 191

Udo Schüklenk
Professional Responsibilities of Biomedical Scientists in Public Discourse 196

J.L. Hutton
Ethics, Justice and Statistics 212

Ivan França-Junior
Is Access to HIV/AIDS Treatment a Human Right? Lessons Learned from the Brazilian Experience 218

8. AIDS AND INFECTIOUS DISEASES: AIDS VACCINE STRATEGIES

Jorma Hinkula, Claudia Devito, Bartek Zuber, Franco M. Buonaguro, Reinhold Benthin, Britta Wahren and Ulf Schröder
Systemic and Mucosal Immune Responses Induced by HIV-1 DNA and HIV-Peptide or VLP Booster Immunization 229

Rigmor Thorstensson
Pre-clinical Primate Vaccine Studies 243

Eftyhia Vardas
Preparing for Phase I/II HIV Vaccine Trials in South Africa and Planning for Phase III Trials 247

9. WATER CONFLICTS

Farhang Mehr
The Politics of Water 255

Maher Salman and Wael Mualla
The Utilization of Water Resources for Agriculture in Syria: Analysis of the Current Situation and Future Challenges 263

Uri Shavit, Ran Holtzman, Michal Segal, Ittai Gavrieli, Efrat Farber and Avner Vengosh
The Lower Jordan River 275

Munther J. Haddadin
Challenges to Water Management in the Middle East 289

Shaul Sorek, V. Borisov, A. Yakirevich, A. Melloul and S. Shaath
Seawater Intrusion into the Gaza Coastal Aquifer as an Example for Water and Environment Inter-linked Actions 299

10. THE PLANETARY EMERGENCIES: ITALIAN CIVIL PROTECTION

Guido Bertolaso
The Italian Civil Protection Response to Planetary Emergencies and the Co-operation with the International Scientific Community (Video)

11. THE CULTURAL PLANETARY EMERGENCY - FOCUS ON TERRORISM: MOTIVATIONS

Ahmad Kamal
Report of the Open Forum Debate on Terrorism 313

12. ENERGY

Hisham Khatib
Energy in Developing Countries - Is It a Special Case? 317

Bob van der Zwaan
Some Perspectives on the Prospects of Nuclear Energy in the Developing World and Asia 326

Norman J. Rosenberg, R. Cesar Izaurralde and F. Blaine Metting
Applications of Biotechnology to Mitigation of Greenhouse Warming 335

13. PERMANENT MONITORING PANEL MEETINGS AND REPORTS

Mother and Child Permanent Monitoring Panel
Nathalie Charpak
Panel Report 349

Limits of Development Permanent Monitoring Panel
Hiltmar Schubert
Panel Report 351

Alberto Gonzalez-Pozo
Urban Mobility in the Mexican Metropolis 359

K.C. Sivaramakrishnan
Mobility in Megacities: Indian Scenario 374

World Federation of Scientists Permanent Monitoring Panel on Information Security
Henning Wegener
Panel Report 383

Henning Wegener, William A. Barletta, Olivia Bosch, Dmitry Chereshkin, Ahmad Kamal, Andrey Krutskikh, Axel H.R. Lehmann, Timothy L. Thomas, Vitali Tsygichko and Jody R. Westby
Paper: Toward a Universal Order of Cyberspace: Managing Threats from Cybercrime to Cyberwar 385

Permanent Monitoring Panels on Floods and Unexpected Meteorological Events, Water and Climate
Robert Clark
Panel Report 436

Pollution Permanent Monitoring Panel
Richard C. Ragaini
Panel Report 439

Energy Permanent Monitoring Panel
Richard Wilson
Panel Report 443

Joseph Chahoud
Syria's Renewable Energy Master Plan: A Message from the Government 456

Andrei Gagarinski
Status of Nuclear Energy 471

Mark D. Levine
Energy Demand Growth in China: The Crucial Role of Energy Efficiency Programs 477

Risk Analysis Permanent Monitoring Panel
Terence Taylor
Panel Report 488

Andrey A. Piontkovsky
The Pillars of International Security: Traditions Challenged 490

Vladimir B. Britkov
Safety as a Result of Providing Information 495

Reiner K. Huber
Anticipatory Defense: Its Fundamental Logic and Implications 498

Desertification Permanent Monitoring Panel
Andrew Warren
Panel Report 512

Endocrine Disrupting Chemicals Permanent Monitoring Panel
Stefano Parmigiani
Panel Report 516

14. LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL WORKSHOP

Stephen J. Kowall and Lorne G. Everett
Monitoring and Stewardship of Legacy Nuclear and Hazardous Waste Sites 519

Elizabeth K. Hocking
Achieving Stewardship and Contributing to a Sustainable Society Through Stakeholder Involvement 522

A.I. Rybalchenko
Radioactive Waste of Defense Activities in the 20th Century: Handling and Management 526

David K. Smith, Richard B. Knapp, Nina D. Rosenberg and Andrew F.B. Tompson
International Cooperation to Address the Radioactive Legacy in States of the Former Soviet Union 534

Igor S. Zektser
Contamination and Vulnerability of Groundwater Resources in Russia 545

15. AIDS VACCINE STRATEGIES AND ETHICS IN INFECTIOUS DISEASES WORKSHOP

Joint Working Group
Report of AIDS and Infectious Diseases PMP and Mother and Child Health PMP 551

Seminar Participants 557
1. CELEBRATIONS
THE PONTIFICAL ACADEMY OF SCIENCES
MARCELO SÁNCHEZ SORONDO
Chancellor, Pontificia Academia Scientiarum, Rome, The Vatican
See: Dialogo; Specola Vaticana

THE NATURE AND GOALS OF THE ACADEMY

The Pontifical Academy of Sciences has its origins in the Accademia dei Lincei ('the Academy of Lynxes'), which was established in Rome in 1603, under the patronage of Pope Clement VIII, by the learned Roman Prince, Federico Cesi. The leader of this Academy was the famous scientist, Galileo Galilei. It was dissolved after the death of its founder but then recreated by Pope Pius IX in 1847 and given the name 'Accademia Pontificia dei Nuovi Lincei' ('the Pontifical Academy of the New Lynxes'). Pope Pius XI then re-founded the Academy in 1936 and gave it its present name, bestowing upon it statutes which were subsequently updated by Paul VI in 1976 and by John Paul II in 1986.

Since 1936 the Pontifical Academy of Sciences has been concerned both with investigating specific scientific subjects belonging to individual disciplines and with the promotion of interdisciplinary co-operation. It has progressively increased the number of its Academicians and the international character of its membership. The Academy is an independent body within the Holy See and enjoys freedom of research. Although its rebirth was the result of an initiative promoted by the Roman Pontiff and it is under the direct protection of the ruling Pope, it organises its own activities in an autonomous way in line with the goals which are set out in its statutes: 'The Pontifical Academy of Sciences has as its goal the promotion of the progress of the mathematical, physical and natural sciences, and the study of related epistemological questions and issues' (Statutes of 1976, art. 2, § 1). Its deliberations and the studies it engages in, like the membership of its Academicians, are not influenced by factors of a national, political or religious character. For this reason, the Academy is a valuable source of objective scientific information which is made available to the Holy See and to the international scientific community.

Today, the work of the Academy covers six main areas: a) fundamental science; b) the science and technology of global questions and issues; c) science in favour of the problems of the Third World; d) the ethics and politics of science; e) bioethics; and f) epistemology. The disciplines involved are sub-divided into nine fields: physics and related disciplines; astronomy; chemistry; the earth and environment sciences; the life sciences (botany, agronomy, zoology, genetics, molecular biology, biochemistry, the neurosciences, surgery); mathematics; the applied sciences; and the philosophy and history of sciences.

The new members of the Academy are elected by the body of Academicians and are chosen from men and women of every race and religion on the basis of the high scientific value of their activities and their high moral profile. They are then officially appointed by the Roman Pontiff. The Academy is governed by a President, appointed from its members by the Pope, who is helped by a scientific Council and by the Chancellor. Initially made up of eighty Academicians, of whom seventy were appointed for life, in 1986 John Paul II raised the number of members for life to eighty, side by side with a limited number of Honorary Academicians chosen because they are highly qualified figures, and others who are Academicians because of the
posts they hold, amongst whom: the Chancellor of the Academy, the Director of the Vatican Observatory, the Prefect of the Vatican Apostolic Library, and the Prefect of the Vatican Secret Archive.

In conformity with the goals set out in its statutes, the Pontifical Academy of Sciences 'a) holds plenary sessions of the Academicians; b) organises meetings directed towards the progress of science and the solution of technical-scientific problems which are thought to be especially important for the development of the peoples of the world; c) promotes scientific inquiries and research which can contribute, in the relevant places and organisations, to the investigation of moral, social and spiritual questions; d) organises conferences and celebrations; e) is responsible for the publication of the deliberations of its own meetings, of the results of the scientific research and the studies of Academicians and other scientists' (Statutes of 1976, art. 3, § 1). To this end, traditional 'study-weeks' are organised and specific 'working-groups' are established.

The headquarters of the Academy is the 'Casina Pio IV', a small villa built by the famous architect Pirro Ligorio in 1561 as the summer residence of the Pope of the time. Surrounded by the lawns, shrubbery and trees of the Vatican Gardens, frescoes, stuccoes, mosaics, and fountains from the sixteenth century can be admired within its precincts.

Every two years the Academy awards its 'Pius XI Medal', a prize which was established in 1961 by John XXIII. This medal is given to a young scientist who has distinguished himself or herself at an international level because of his or her scientific achievements.

Amongst the publications of the Academy, reference should be made to three series: Scripta Varia, Documenta, and Commentarii. The most important works, such as for example the papers produced by the study-weeks and the conferences, are published in the Scripta Varia. In a smaller format, the Documenta series publishes the short texts produced by various activities, as well as the speeches by the Popes or the declarations of the Academicians on subjects of special contemporary relevance. The Commentarii series contains articles, observations and comments of a largely monographic character on specific scientific subjects. The expenses incurred by the activities of the Academy are met by the Holy See.

During its various decades of activity, the Academy has had a number of Nobel Prize winners amongst its members, many of whom were appointed Academicians before they received this prestigious international award. Amongst these should be listed Lord Ernest Rutherford (Nobel Prize for Physics, 1908), Guglielmo Marconi (Physics, 1909), Alexis Carrel (Physiology, 1912), Max von Laue (Physics, 1914), Max Planck (Physics, 1918), Niels Bohr (Physics, 1922), Werner Heisenberg (Physics, 1932), Paul Dirac (Physics, 1933), Erwin Schrödinger (Physics, 1933), Sir Alexander Fleming (Physiology, 1945), Chen Ning Yang (Physics, 1957), Rudolf L. Mössbauer (Physics, 1961), Max F. Perutz (Chemistry, 1962), John Eccles (Physiology, 1963), Charles H. Townes (Physics, 1964), Manfred Eigen and George Porter (Chemistry, 1967), Har Gobind Khorana and Marshall W. Nirenberg (Physiology, 1968). Recent Nobel Prize winners who have also been, or are presently, Academicians may also be listed: Christian de Duve (Physiology, 1974), Werner Arber and George E. Palade (Physiology, 1974), David Baltimore (Physiology, 1975), Aage Bohr (Physics, 1975), Abdus Salam (Physics, 1979), Paul Berg (Chemistry, 1980), Kai Siegbahn (Physics, 1981), Sune Bergström (Physiology, 1982), Carlo Rubbia (Physics, 1984), Rita Levi-Montalcini (Physiology, 1986), John C. Polanyi (Chemistry, 1986), Jean-Marie Lehn (Chemistry, 1987), Joseph E. Murray (Physiology, 1990), Gary S. Becker (Economics, 1992), Paul J. Crutzen (Chemistry, 1995), Claude Cohen-Tannoudji (Physics, 1997) and Ahmed H. Zewail (Chemistry, 1999).

Padre Agostino Gemelli (1878-1959), the founder of the Catholic University of the Sacred Heart and President of the Academy after its re-foundation until 1959, and Mons. Georges Lemaître (1894-1966), one of the fathers of contemporary cosmology who held the office of President from 1960 to 1966, were eminent Academicians of the past. Under the Presidency of the Brazilian biophysicist Carlos Chagas and of his successor Giovanni Battista Marini-Bettòlo, the Academy linked its activity of scientific research to the promotion of peace and the progress of the peoples of the world, and dedicated increasing attention to the scientific and health care problems of the Third World. The Presidency of the Academy is presently entrusted to the Italian physicist, Nicola Cabibbo.

The goals and the hopes of the Academy, within the context of the dialogue between science and faith, were expressed by Pius XI (1922-1939) in the following way in the Motu Proprio which brought about its re-foundation: 'Amongst the many consolations with which divine Goodness has wished to make happy the years of our Pontificate, I am happy to place that of our having been able to see not a few of those who dedicate themselves to the studies of the sciences mature their attitude and their intellectual approach towards religion. Science, when it is real cognition, is never in contrast with the truth of the Christian faith. Indeed, as is well known to those who study the history of science, it must be recognised on the one hand that the Roman Pontiffs and the Catholic Church have always fostered the research of the learned in the experimental field as well, and on the other hand that such research has opened up the way to the defence of the deposit of supernatural truths entrusted to the Church... We promise again, and it is our strongly-held intention, that the 'Pontifical Academicians', through their work and our Institution, work ever more and ever more effectively for the progress of the sciences. Of them we do not ask anything else, since in this praiseworthy intent and this noble work is that service in favour of the truth that we expect of them' (AAS 28, 1936, p. 427; Italian translation, OR, 31.10.1936).

After more than forty years, John Paul II once again emphasised the role and the goals of the Academy at the time of his first speech to the Academicians, which was given on 10 November 1979 to commemorate the centenary of the birth of Albert Einstein: 'the existence of this Pontifical Academy of Sciences, of which in its ancient ancestry Galileo was a member and of which today eminent scientists are members, without any form of ethnic or religious discrimination, is a visible sign, raised amongst the peoples of the world, of the profound harmony that can exist between the truths of science and the truths of faith... The Church of Rome together with all the Churches spread throughout the world, attributes a great importance to the function of the Pontifical Academy of Sciences. The title of 'Pontifical' given to the Academy means, as you know, the interest and the commitment of the Church, in different forms from the ancient patronage, but no less profound and effective in character. As the lamented and distinguished President of the Academy, Monsignor Lemaître, observed: 'Does the Church need science? But for the Christian nothing that is human is foreign to him. How could the Church have lacked interest in the most noble of the occupations which are most strictly human - the search for truth? Both believing scientists and non-believing scientists are involved in deciphering the palimpsest of nature which has been built in a rather complex way, where the traces of the different stages of the long evolution of the world have been covered over and mixed up. The believer, perhaps, has the advantage of knowing that the puzzle has a solution, that the underlying writing is in the final analysis the work of an intelligent being, and that thus the problem posed by nature has been posed to be solved and that its difficulty is without doubt proportionate to the present or future capacity of humanity. This, perhaps, will not give him new resources for the investigation engaged in. But it will contribute to maintaining him in that healthy optimism without which a sustained effort cannot be engaged in for long' ('Discorso alla Pontificia Accademia delle Scienze, 10.11.1979', in Insegnamenti, II, 2 (1979), pp. 1119-1120).

It was precisely in that speech that John Paul II formally called on historians, theologians and scientists to examine again in detail the Galileo case. And he asked them to do this 'in the faithful recognition of errors, by whomsoever committed', in order to 'remove the distrust that this case still generates, in the minds of many people, placing obstacles thereby in the way of fruitful concord between science and faith' (ibidem, pp. 1117-1118).

A HISTORICAL SURVEY: FROM THE ACCADEMIA DEI LINCEI TO TODAY'S PONTIFICAL ACADEMY OF SCIENCES

The historical itinerary of the Academy is summarised in the articles written by Marini-Bettòlo (1986) and by Marchesi (1988), and in broader fashion in the monograph by Régis Ladous (1994). As was observed at the beginning of this paper, the roots of the Pontifical Academy of Sciences are to be traced back to the post-Renaissance epoch. Its origins go back to the ancient Accademia dei Lincei, established in 1603 by Prince Federico Cesi (1585-1630) when he had just reached the age of eighteen. Cesi was a botanist and naturalist, the son of the Duke of Acquasparta, and a member of a noble Roman family. Three other young men took part in this initiative: Giovanni Heck, a Dutch physician aged twenty-seven; Francesco Stelluti di Fabriano; and Anastasio de Filiis of Terni. Thus it was that the first Academy dedicated to the sciences came into being, and it took its place at the side of the other Academies - of literature, history, philosophy and art - which had arisen in the humanistic climate of the Renaissance. The example of Cesi, and of the group of scholars led by him, was followed some years later in other countries - the Royal Society was created in London in 1662 and the Académie des Sciences was established in France in 1666.

Although he looked back to the model of the Aristotelian-Platonic Academy, his aim was altogether special and innovative. Cesi wanted with his Academicians to create a method of research based upon observation, experiment, and the inductive method. He thus called this Academy 'dei Lincei' because the scientists who adhered to it had to have eyes as sharp as lynxes in order to penetrate the secrets of nature, observing it at both microscopic and macroscopic levels. Seeking to observe the universe in all its dimensions, the 'Lincei' made use of the microscope (tubulus opticus) and the telescope (perspicillus - occhialino) in their scientific research, and extended the horizon of knowledge from the extremely small to the extremely large. Federico bestowed his own motto on the 'Lincei' - 'minima cura si maxima vis' ('take care of small things if you want to obtain the greatest results'). The Cesi group was also interested in the new scientific and naturalistic discoveries then coming from the New World, as is demonstrated by the most significant works of the college of the first 'Lincei' - the Rerum medicarum thesaurus novae Hispaniae, later known as the Tesoro Messicano, which was printed in Rome in 1628. This was a very extensive collection of new geographical and naturalistic knowledge, and contained in addition accounts of explorations carried out in the
Americas.

From the outset the Academy had its ups and downs. A few years after its foundation it was strongly obstructed by Cesi's father because he believed that within it activity was being engaged in which was not very transparent in character - for example, studies in alchemy. But after the death of Federico's father, the abundant economic resources which were now obtained thanks to Federico's inheritance, as well as the fact that renowned scholars such as Galileo Galilei, Giovan Battista della Porta, Fabio Colonna, and Cassiano dal Pozzo joined its ranks, enabled the Academy to progress and advance.

The religious character of the Academy cannot be overlooked. It was placed under the protection of St. John the Evangelist, who was often portrayed in the miniatures of its publications with an eagle and a lynx, both of which were symbols of sight and reason. It was therefore conceived as an assembly of scholars whose goal - as one can read in its Rules, described as the 'Linceografo' - was 'knowledge and wisdom of things to be obtained not only through living together with honesty and piety, but with the further goal of communicating them peacefully to men without causing any harm'. Nature was seen not only as a subject of study but also of contemplation. Amongst the suggestions of the 'Linceografo' there is also that of preceding study and work with prayer - 'for this reason the Lynxes, before doing anything at all, must first raise their minds to God, and humbly pray to him and invoke the intercession of the saints' (cf. di Rovasenda and Marini-Bettòlo, 1986, p. 18). Amongst the practices of spiritual piety of the members there was the reciting of the liturgical office of the Blessed Virgin Mary and the Davidic Psalter. For this reason, as Enrico di Rovasenda observes, 'the religious inspiration of the Lincei cannot be overlooked, as is done in many quarters, nor can it be reduced to an "almost mystical glow of the school of Pythagoras", as has also been suggested. The high moral figure of Cesi acts to guarantee the sincere and loyal profession of its religious faith' (ibidem, p. 19). One of the mottoes of the Academy - Sapientiae cupidi - indicated the striving for constant research into truth through scientific speculation, based upon the mathematical and natural sciences but always located within a sapiential horizon.

Like Galileo, whose great supporter he was, Cesi admired Aristotle but not the Aristotelians of the University of Padua who had refused to look at things through the telescope of the Pisan scientist. He was in addition rather critical of the university culture of his day. Federico Cesi also engaged in important activity of mediation between the Roman theological world and Galileo, reaching the point of advising the latter not to insist in his polemics about the interpretation of Holy Scripture so that he could dedicate himself in a more effective way to scientific research. Death struck Cesi down in 1630, when Galileo was about to finish his Dialogo sui Massimi Sistemi, the manuscript of which Galileo wanted to send to Cesi himself so that the latter could organise its publication. After Cesi's death the activities of the Academy diminished to such an extent as to bring about its closure.

The first attempts to bring the 'Lincei' back into existence took place in 1745 in Rimini as a result of the efforts of a group of scientists belonging to the circle made up of Giovanni Paolo Simone Bianchi (known as Janus Plancus), Stefano Galli and Giuseppe Garampi. But the new Academy had a very short life. The attempt at refoundation made by Padre Feliciano Scarpellini (1762-1840) in Rome at the beginning of the nineteenth century met with greater success. He gave the name of 'Lincei' to a private academy that he had established in 1795. Despite a lack of funds and a whole series of difficulties, Scarpellini managed to keep the name of 'Lincei' alive and to bring together in a single academic body the various scientists working in the Papal States, such as the mathematician Domenico Chelini, the naturalist Carlo Bonaparte, the anatomist Alessandro Flajani, the chemists Domenico Morichini and Pietro Peretti, Prince Baldassarre Odescalchi, the physicists Gioacchino Pessuti and Paolo Volpicelli, and the physician Benedetto Viale (cf. Marini-Bettòlo, 1986, p. 10).

The authorities of the Papal States took new practical initiatives to re-found the Academy during the first half of the nineteenth century in response to the wishes of Pope Pius VII (1800-1823) and Leo XII (1823-1829), with the allocation of the second floor of the Palazzo Senatorio on the Campidoglio to the Academy as its headquarters. But in 1847 it was Pius IX who officially renewed the Academy with the name (which had already been suggested by Gregory XVI in 1838) of 'Accademia Pontificia dei Nuovi Lincei' ('the Pontifical Academy of the New Lynxes'), ensuring the drawing up of new statutes which envisaged, amongst other things, the presence of thirty resident members and forty correspondent members. During this period of activity famous astronomers and priests were present within its ranks, such as Francesco de Vico and Angelo Secchi. During the revolutionary upheavals of 1848 the Roman Republic sought to expel the Academy from the Campidoglio. However, the institution managed to keep its headquarters by using various bureaucratic manoeuvres. In 1870, following the fall of the independent Papal States and the unification of the Kingdom of Italy, the Academy divided into two different institutions: the 'Reale Accademia dei Lincei', which later became the present Accademia Nazionale dei Lincei with its headquarters in Palazzo Corsini alla Lungara, and the 'Accademia Pontificia dei Nuovi Lincei', which was transferred from the Campidoglio to the Casina Pio IV villa in the Vatican Gardens.

One had to wait, as has already been observed, until 28 October 1936 for a further renewal of the institution, which took place in response to the insistent requests of the Jesuit Giuseppe Gianfranceschi. This scientist was Professor of Physics at the Gregorian University and had been the President of the Accademia Pontificia dei Nuovi Lincei since 1921. A new Pontifical Academy of Sciences was thus created by Pope Pius XI by the Motu Proprio In Multis Solaciis (for an Italian translation see Marini-Bettòlo, 1987, pp. 199-203; this work has an accurate summary of the life of the Academy for the years 1936-1986). The Presidency was entrusted to the Rector of the Catholic University, Padre Agostino Gemelli, who was seconded by the Chancellor, Pietro Salviucci, and by a Council composed of four Academicians. Annual (and later two-yearly) plenary sessions were proposed for all the Academicians. The accounts of the activities and the contributions of the members were published in the Acta Pontificiae Academiae Scientiarum and later on in the Commentationes. The first assembly was inaugurated on 1 June 1937 by the then Cardinal Secretary of State, Eugenio Pacelli, the future Pope Pius XII. In discussing this period of the Academy reference should be made to the presence of such distinguished members as Ugo Amaldi, Giuseppe Armellini, Niels Bohr, Lucien Cuénot, Georges Lemaître, Tullio Levi-Civita, Guglielmo Marconi, Robert Millikan, Umberto Nobile, Max Planck, Ernest Rutherford, Erwin Schrödinger, Francesco Severi, Edmund Whittaker, and Pieter Zeeman.

During the years 1937-1946 the publications of the Academy had a largely Italian character, presenting, for example, the work of the Italian Academicians Pistolesi, Crocco, and Nobile on aerodynamics. But there were also papers by foreign Academicians, such as those by E. Schrödinger in 1937 on quantum physics and by M. Tibor in 1937-1939 of an astronomical character. During the Second World War the
Academy greatly reduced its activity but nonetheless found space for the publications of Jewish Italian scientists who had been marginalised by the race laws of 1938, amongst whom should be mentioned a group of mathematicians of Jewish descent including Tullio Levi-Civita and Vito Volterra, and others such as Giuseppe Levi, Rita Levi-Montalcini, E. Foà and G.S. Coen. Pius XII (1939-1958), who succeeded Pius XI, did not fail to make addresses to the Academicians, even during the war years, such as the address of 30 November 1941 on the occasion of the inauguration of the fourth academic year. This address was dedicated to a long and profound reflection on the position of man in relation to the Creation and God (cf. Discorsi e Radiomessaggi, III, pp. 271-281).

In the post-war period, at a time of sensitive reconstruction and the rebuilding of international relations, in the face of the great difficulties encountered at the level of scientific contacts and exchange, the Academy undertook the publication of the research results of greatest interest in the various fields of science which had been achieved during the war in its work Relationes de Auctis Scientiis tempore belli (aa. 1939-1945). This publication was of marked importance in fostering the renewal of scientific contacts between the nations that had previously been at war. In 1946 Alexander Fleming (1881-1955) was appointed Academician in recognition of his discovery of penicillin - a discovery that opened the way to the pharmacological production of antibiotics.

During the 1950s, in parallel with the problems of reconstruction and the development of under-developed regions, the activity of the Pontifical Academy of Sciences centred around the questions and issues of applied science. In 1955 the study-week on trace elements was held, when for the first time the problem of agrarian production and food sources was addressed. After the election to the papacy of John XXIII (1958), Padre Gemelli died in 1959. The Presidency of the Academy was then held by G. Lemaître.

The 1960s witnessed an exponential growth and development of science connected with electronics and the conquest of space. This gave new impetus to industry and technological advance but also to nuclear armaments. In astrophysics the discovery of new sensors and the development of radio-astronomy opened up the universe to new interpretations. Biology became directed towards the molecular study of genetics. In 1961 the Pontifical Academy of Sciences organised a study-week on the macromolecules of interest to biology, and in particular on the nucleoproteins, a subject which was then of major importance for international research. On that occasion, when meeting the Academicians, John XXIII reaffirmed the educational and cultural mission of the Church and the function of scientific progress in relation to the positive appreciation of the human person. The Pope recalled in addition that science is directed above all else towards the development and growth of the personality of man and the glorification of God the Creator: 'indeed, far from fearing the most audacious discoveries of men, the Church instead believes that every advance in the possession of the truth involves a development of the human person and constitutes a road towards the first truth, and the glorification of the creative work of God' ('Discorso in occasione del XXV dell'Accademia, 30.10.1961', in Discorsi, Messaggi e Colloqui del Santo Padre Giovanni XXIII, vol. III, p. 493).

In 1962, at the time of the plenary session of that year, a study-week dedicated to astronomy which addressed the subject of cosmic radiation in space was held, guided in first person by the President of the Academy, Monsignor Lemaître. In 1964, at the time of the pontificate of Paul VI (1963-1978), there appeared
amongst the publications of the Pontifical Academy of Sciences the Miscellanea Galileiana of Monsignor Pio Paschini, who was Professor of History at the Lateran University. The Galileo case was slowly reopened, a development favoured by the reference made to it by Vatican Council II in n. 36 of Gaudium et Spes. This led to the address by John Paul II of 1979 to which reference has already been made. After the death of Georges Lemaître, in 1966 Padre Daniel O'Connell was made President of the Academy. A Jesuit and Irish astronomer, he had previously been Director of the Vatican Observatory and had been an Academician for life since 1964. He was also the author, together with other astronomers, of an important general atlas of the stars.

The year 1967 was marked by the publication of the encyclical Populorum Progressio, in which Paul VI brought to worldwide attention all the major problems inherent in the development of the Third World. This document also contained an appeal to engage in international scientific co-operation so that this could in all forms favour developing countries. It introduced the idea that scientific progress and advance must be guided by a 'new humanism': 'every advance of ours, each one of our syntheses reveals something about the design which presides over the universal order of beings, the effort of man and humanity to progress. We are searching for a new humanism, which will allow modern man to rediscover himself, taking on the higher values of love, friendship, prayer and contemplation' (n. 20). In harmony with the themes of the encyclical, the Academy thought it was necessary to open itself to collaboration with the scientists of the Third World, and by 1968 it was already holding a study-week on the subject of 'organic matter and soil fertility', a subject which dealt with the application of science to agricultural production and the solution to the problems of hunger in the world.

In 1972 for the first time a secular President was elected - the Brazilian Carlos Chagas, who had already been a member of the United Nations and the General Secretary of the first conference of the United Nations on Science and Technologies for Development. The new President imparted a new direction to the activities of the Academy, which were now more centred around solving the great problems of post-industrial society (cf. di Rovasenda, 2000). The scientific activity of the Academy was thus directed not only towards the subjects of science which were more specific to Western culture, but also began to be concerned, with the co-operation of Giovanni Battista Marini-Bettòlo (who succeeded Chagas in 1988), with the scientific and health care problems connected with the growth and development of the Third World ('development ethics').

The 1980s witnessed the development of new directions in scientific research which moved in the direction of the life sciences, the earth sciences, and ecology. Mankind had to face up to new problems, such as pollution, changes in the biosphere, energy reserves, and genetic manipulation. In 1982 the Academy committed itself at an international level to the promotion of peace with the drawing up of a document on nuclear armaments (cf. 'Dichiarazione sul disarmo nucleare' ('Declaration on Nuclear Disarmament'), EV, 7, pp. 1811-1825) and devoted the next plenary session (of 1983) to the subject of 'science for peace'. In connection with that event, John Paul II appealed to members of governments to work in an effective fashion in order to remove the danger of a new war and invited States to engage in nuclear disarmament (cf. 'Il sapere scientifico edifichi la pace, 12.11.1983' ('Scientific Knowledge should Build Peace, 12.11.1983'), in Insegnamenti, VI, 2 (1983), pp. 1054-1060). This document and appeal achieved a strong resonance in the United States of America and the Soviet Union. During the 1990s meetings and study-weeks were held which were
dedicated to analysing the question of the prolonging of life; the question of determining the moment of death; the question of transplants and xenografts; and the question of sustainable growth and development. The issues of artificial fertilisation, cloning, and genetic manipulation were also considered. These were subjects which increasingly involved issues of an ethical character (bioethics) and which drew scientists, philosophers and theologians into dialogue. Although the usual practice of involving various disciplines was maintained, the research and the debates of the Academicians were directed in a special way towards reflection on the anthropological and humanistic dimensions of science. In November 1999 a working-group was held on the subject of 'science for man and man for science', and the Jubilee session of November 2000 was dedicated to the subject 'science and the future of mankind'.

THE ROLE OF THE ACADEMY IN THE DIALOGUE BETWEEN SCIENTIFIC THOUGHT AND CHRISTIAN FAITH

In the relations which exist between Academies and the States in which they carry out their activities, the case of the Pontifical Academy of Sciences can be seen as a singular case, as indeed in basic terms the role of the small State which hosts it is also singular. During these long years this relationship has become very fertile. The Church has paid careful attention to the Academy. She has respected its work and fostered the autonomy of its scientific and organisational dynamics. Through the Academy, the Magisterium of the Church has sought to make the scientific world understand her teaching and her orientations in relation to subjects which concern the good of man and society, the complete human development of all the peoples of the world, and the scientific and cultural co-operation which should animate the relations between States. On the occasion of numerous addresses and messages directed towards the Academy by five pontiffs, the Church has been able to re-propose the meaning of the relationship between faith and reason, between science and wisdom, and between love for truth and the search for God. But through the Academy the Church has also been able to understand from nearer to hand, with speed and in depth, the contents and the importance of numerous questions and issues which have been the object of the reflection of the scientific world, whose consequences for society, the environment and the lives of individuals could not but interest her directly, 'given that there is nothing which is genuinely human which does not find echo in her heart' (cf. Gaudium et Spes, 1).

The Pontifical Academy of Sciences has thus become one of the favoured forums for the dialogue between the Gospel and scientific culture, gathering together all the stimulating provocations but also the inspiring possibilities that such dialogue brings with it, almost thereby symbolising a shared growth - of both the scientific community and the Magisterium of the Church - of their respective responsibilities towards truth and good. The above survey, although general in character, dealing with the activity carried out over the sixty years since the foundation of the Pontifical Academy of Sciences, the subjects of the numerous meetings and study-weeks, and the publications which the Academy has produced, brings out all the contemporary relevance and the importance of the subjects which have been addressed. Scientists from all over the world, often co-operating closely with a group of philosophers and theologians, have examined questions and issues which have ranged from genetics to cosmology, from agriculture to the distribution of resources, from transplant surgery to the history of science, and from ecology to telecommunications. The speeches addressed by the
Pontiffs to the Academicians, from Pius XI to John Paul II, have offered important elements of reflection not only in relation to the ethical and moral responsibility of their activities but also on the very meaning of scientific research, and on its striving for truth and an increasingly profound knowledge of reality. The subject of the relationship between science and faith, both at an epistemological and an anthropological level, has been the usual framework of almost all these papal addresses. The forms of language employed have been different as these decades have passed, and different emphases have been placed on the various questions and issues, but the attention paid to scientific work has been unchanging, as has been the case in relation to the philosophical and cultural dimensions which that work involves. Side by side with such dialogue, which we could call 'ordinary', international public opinion has been witness to certain 'out of the ordinary' events. From the mass media it has learnt about speeches of special importance for the relationship between science and faith, speeches given at the Academy in particular during the pontificate of John Paul II. Of these, reference should be made to the address with which, as has already been observed (see above section I), John Paul 11 spoke to the plenary session of the Pontifical Academy of Sciences in November 1979 to express his wish for, and then formally request, the establishment of a committee of historians, scientists, and theologians which would re-examine the Galileo case and present public opinion with a serene analysis of the facts as they occurred (Galileo, IV).The aim of this was not in a historical sense to recognise the inadvisability of the condemnation of the heliocentrism carried out four centuries beforehand by the Sant'Uffizio (something which had already been effected in 1757 with the removal of the works in question from the list of prohibited books), but rather to ensure that the historical-philosophical context of the episode, as well as its implications at a cultural level, were more illuminated, thereby clarifymg in a public way, which would be comprehensible to everybody, what had already been made clear in a narrower circle of intellectuals and experts. During a new assembly of the Academy, held on 3 1 October 1992, Cardinal Paul Poupard, in the presence of the Holy Father, presented the results of the committee and commented on the work which it had carried out. Four years later, on 22 October 1996, this time in the form of a message on the occasion of the sixtieth anniversary of its re-foundation, John Paul II once again chose the Pontifical Academy of Sciences as a qualified interlocutor to expound certain important reflections on the theory of evolution (Magistero, V.2; Uomo, Identitd Biologica e Culturale, V.3). Returning to and developing certain observations made by his predecessor Pius XII in the encyclical Humuni Generis (cf. DH 3896-3899), he now added that 'new knowledge leads the theory of evolution to no longer be considered as a mere hypothesis', thereby recognising 'that this theory has progressively imposed itself on the attention of researchers following a series of discoveries made in the various disciplines of knowledge', imposing itself also therefore on the attention of theologians and bible experts (Scienze Nuturali, Utilizzo in Teologia). It would not, however, be exact to confine only to recent years the climate of mutual listening and serene encounter on subjects of great relevance. 
History has also been a witness to other episodes of intense dialogue with the Roman Pontiffs in which the Academy, or some of its members, were the protagonists. This is the case, for example, of Max Planck, who wanted to make himself the interpreter in a direct way with Pius XII in 1943 on the risks of war connected with the use of armaments based upon nuclear fission (cf. Ladous, 1994, p. 144), or the close relationship between Pius
13 XI1 and Georges Lemaitre, who enabled the Pontiff to understand from closer to hand, at the beginning of the 1950s, the meaning of the new cosmological models which were by then beginning to become established in the scientific world, and the philosophical, or even theological, questions which at first sight appeared to be involved (Lemaitre, IV). In more recent years, Carlos Chagas was especially concerned in 1981 to take on the worries of John Paul II, who was still convalescing after the attack on his life, over the consequences for the planet of a possible nuclear war. He decided to himself present the studies carried out on the subject to the principal Heads of State in his capacity as President of the Academy (cf. di Rovesanda, 2000). In the letter sent to Padre George Coyne, the Director of the Vatican Observatory and a member of the Council of the Academy, a document which is certainly one of the most profound there is on the subject of the dialogue between science and faith, John Paul II observed that science has acted to purify faith and that faith has acted to generate scientific research, a truth demonstrated by the fact that modem Galilean science was born in a Christian climate with the increasing assimilation of the message of freedom placed in the heart of man. Thus, in the same letter, referring to the wider context of universities, the Pope declared that: 'The Church and academic institutions, because they represent two institutions which are very different but very important, are mutually involved in the domain of human civilisation and world culture. We carry forward, before God, enormous responsibilities towards the human condition because historically we have had, and we continue to have, a determining influence in the development of ideas and values and the course of human actions' ('Lettera a1 Direttore della Specola Vaticana, 1.6.1988' ('Letter to the Director of the Vatican Observatory, 1.6.1988'), OR 26.10.1988, p. 7.) For this to come about, the Pope stressed the importance of there being experts and places especially dedicated to such a dialogue: 'the Church, for a long time, has recognised the importance of this by founding the Pontifical Academy of Sciences, in which scientists of world-renown regularly meet each other to discuss their research and to communicate, to the wider community, the directions research is taking. But much more is required' (ibidem). And in this 'more' John Paul II saw the need, in their irreplaceable dialogue, for scientific institutions and the Catholic Church not to think in a reductive way about the settling of ancient conflicts, and also saw the more important need for mutual help in the investigation of truth and a shared growth in their responsibility for the good of the peoples of the world and their future. And it was in this logic, with this new readiness to engage in service, that the present President of the Academy, Professor Cabibbo, in his address to John Paul II on the occasion of the Jubilee plenary session on the subject of 'science and the future of mankind' (OR 13-14.11.2000, p. 6) was able to speak about the 'renewed commitment' of the Pontifical Academy of Sciences together with the Holy See to the good of the whole Church, of the scientific community, and of those men and women who search and believe. Pi0 XI, 'Motu proprio De Pontificia Academia Scientiarum, 28.10.1936: in AAS 28 (1936), pp. 
421-452; Giovanni Paolo II, 'Discorso alla Pontificia Accademia delle Scienze in occasione del 100° anniversario della nascita di A. Einstein, 10.11.1979', in Insegnamenti II, 2 (1979), pp. 1115-1120; 'Discorso in occasione del 50° della Rifondazione', in Insegnamenti IX, 2 (1986), pp. 1274-1285; 'Discorso in occasione della presentazione dei risultati della Commissione di
studio sul caso Galileo, 31.10.1992', in Insegnamenti XV, 2 (1992), pp. 456-465; 'Messaggio in occasione del 60° della Rifondazione, 22.10.1996', in EV 15, pp. 1346-1354.

BIBLIOGRAPHY

For studies and works of a historical character see: E. di Rovasenda and G.B. Marini-Bettòlo, Federico Cesi nel quarto centenario della nascita (Pontificiae Academiae Scientiarum Scripta Varia, 63, 1986); G.B. Marini-Bettòlo, Historical Aspects of the Pontifical Academy of Sciences (Pontificiae Academiae Scientiarum Documenta, 21, 1986); G.B. Marini-Bettòlo, L'attività della Pontificia Accademia delle Scienze 1936-1986 (Pontificiae Academiae Scientiarum Scripta Varia, 71, 1987); G. Marchesi, 'La Pontificia Accademia delle Scienze, luogo d'incontro tra ragione e fede', Civiltà Cattolica 139 (1988), III, pp. 235-246; R. Ladous, Des Nobel au Vatican. La fondation de l'académie pontificale des sciences (Cerf, Paris, 1994); P. Poupard (ed.), La nuova immagine del mondo. Il dialogo fra scienza e fede dopo Galileo (Piemme, Casale Monferrato, 1996); E. di Rovasenda, 'In ricordo dell'antico Presidente della Pontificia Accademia delle Scienze, C. Chagas', OR 21-22.2.2000.

Some publications of the Academy on subjects referred to in this paper: P. Paschini (ed.), Miscellanea Galileiana, 3 vols. (Pontificiae Academiae Scientiarum Scripta Varia, 1964); Science and Technology for Developing Countries (Pontificiae Academiae Scientiarum Scripta Varia, 44, 1979); S.M. Pagano and A.G. Luciani, I documenti del processo di Galileo Galilei (Pontificiae Academiae Scientiarum Scripta Varia, 53, 1984); The Artificial Prolongation of Life and the Determination of the Exact Moment of Death (Pontificiae Academiae Scientiarum Scripta Varia, 60, 1985); Discorsi indirizzati dai Sommi Pontefici Pio XI, Pio XII, Giovanni XXIII, Paolo VI, Giovanni Paolo II alla Pontificia Accademia delle Scienze dal 1936 al 1986 (Pontificiae Academiae Scientiarum Scripta Varia, 64, 1986); The Responsibility of Science (Pontificiae Academiae Scientiarum Scripta Varia, 80, 1988); Science for Development in a Solidarity Framework (Pontificiae Academiae Scientiarum Documenta, 25, 1989); The Determination of Brain Death and its Relationship to Human Death (Pontificiae Academiae Scientiarum Scripta Varia, 83, 1989); Science in the Context of Human Culture, I-II (Pontificiae Academiae Scientiarum Scripta Varia, 85-86, 1990-1991); Resources and Population (Pontificiae Academiae Scientiarum Scripta Varia, 87, 1991); The Legal and Ethical Aspects Related to the Project of the Human Genome (Pontificiae Academiae Scientiarum Scripta Varia, 91, 1993); Discorsi dei Papi alla Pontificia Accademia delle Scienze (1936-1993) (Pontificia Academia Scientiarum, Vatican City, 1994). For all the publications of the Pontifical Academy of Sciences see Publications of the Pontifical Academy of Sciences (1936-1999) (Vatican City, 1999).
IMPROVING NATURAL KNOWLEDGE: THE FIRST HUNDRED YEARS OF THE ROYAL SOCIETY 1660-1760

SIR ALAN COOK
Selwyn College, Cambridge, UK

ABSTRACT
In the hundred years from the foundation of the Royal Society in 1660 to the general acceptance of Newtonian dynamics and celestial mechanics by 1760, fellows of the Society advanced theoretical and observational astronomy and biological studies. By 1760 the natural sciences were established as autonomous pursuits with their own aims and methods, independent of external authority. The astronomical work of Edmond Halley and John Flamsteed and the biological studies of Leeuwenhoek, John Ray, Stephen Hales and others will be considered, as will, in particular, the work of Tycho Brahe, Kepler and Prince Federico Cesi.

INTRODUCTION

The Royal Society of London for Improving Natural Knowledge was founded in 1660, and by 1760 natural philosophy had taken on some of the principal features of modern science. The dynamics and celestial mechanics of Isaac Newton had become generally accepted throughout Europe; in retrospect we see them as the most spectacular feature of the new science. Newton was not alone; many other Fellows of the Royal Society contributed to the development of modern science. I consider something of what they did in theoretical and observational astronomy, in biology, and in establishing science as independent and autonomous. Modern science did not begin in 1660. In 2003 it is natural to recognise the achievements of the first fellows of the Accademia dei Lincei, of Galileo of course, but of others besides, and especially of the founder, Prince Federico Cesi. There were other precursors of the Royal Society. Fellows of the Society were familiar with the work of Johannes Kepler that led very directly to Newton's achievements. If modern science began before 1660, it had not fully attained its present form by 1760. Ideas of history in particular were very far from those of today. For all the debts to the past and unresolved issues for the future, the ideas and methods of science did change greatly in the first century of the Royal Society, and fellows of the Society did much to bring that about. The Royal Society was a society of independent scholars. The government neither supported nor directed it, but we have a very important privilege in our Royal Charter, granted by Charles II in 1662. It conferred the right to print and publish despite the monopoly of the Stationers' Company of London. Fellows of the Society took advantage of it to publish important works. The Philosophiae naturalis principia mathematica (1687) of Newton is by far the most influential for the development of modern science, but biological works have also proved very important. Together with the journal, the Philosophical Transactions, and the extensive correspondence carried on by the first secretary of the Society, Henry Oldenburg, they made the studies of fellows of the
Society known throughout Europe. Robert Boyle, many of whose works European scholars knew and possessed, was the fellow most admired in Europe. The public face of the Society was publishing, the private life was no less important in advancing modem science. Small groups of Fellows were wont to move to some coffee house (numerous in London) after the formal weekly meetings and there to engage in perhaps more speculative talk. Independence, important privileges, an open and sociable community, those were important characteristics of the early decades of the Royal Society when its fellows were advancing natural knowledge and laying the foundations of modem science ASTRONOMY In Rome in 1611 Galileo set up his new telescope on the Gianicolo and Prince Cesi and other Linceans saw for the first time the rough Moon, Venus like a crescent Moon, the satellites of Jupiter, and Saturn with his two ears. Galileo did indeed set astronomy on new paths, but different ways were taken in London fifty years later. London astronomers took up the challenges of understanding the results of Kepler and of improving the methods of Tycho Brahe. Kepler’s laws of planetary motion only made sense for a heliocentric solar system. By about 1680 Christopher Wren, Robert Hooke and Edmond Halley had found that both Kepler’s first law, that the orbit of a planet was an ellipse, and his third law, that the square of the period of a planet is proportional to the cube of its distance from the Sun, implied that the force between the planets and the Sun was as the inverse square of the distance. When the three of them took coffee together in January 1684 after a meeting of the Royal Society, they realised they could not show that an orbit under an inverse square law of force had to be an ellipse. More than six months later Halley visited Newton in Cambridge and put the question to him. Newton’s response became the Philosophiae naturalis principia rnathematica. Halley not only set Newton thinking, he edited the manuscript, saw it through the press and paid for it, and published and sold the book. Only Newton could have written Principia, but without Halley it would not have been born.’ We may not use Newton’s geometry today, the observations for which he had to account were limited indeed compared with the present wealth and extent of physical knowledge, but his methods are still ours - construction of an abstract model, the behaviour of which can be worked out mathematically and comparing it numerically with observation. Kepler’s laws were the challenge that stimulated Newton. John Flamsteed was gripped by the need to improve the catalogue of positions of stars that Tycho Brahe had made, and to do so using the new astronomical techniques that were coming to the fore at about the time of the formation of the Society. Adrien Auzout in Pans and Robert Hooke in London were maintaining that telescopic sights could be used to determine the directions of stars better that the open sights that Brahe (and after him Johann Hevelius) had used. Telescopes were especially effective with a micrometer in the eyepiece. At the same time clock makers in England and France were making the pendulum clock into a precision instrument. If a telescope were mounted on a wall so that it rotated in the meridian, the angle at which a star crossed the meridian, as determined with the
John Flamsteed was gripped by the need to improve the catalogue of positions of stars that Tycho Brahe had made, and to do so using the new astronomical techniques that were coming to the fore at about the time of the formation of the Society. Adrien Auzout in Paris and Robert Hooke in London were maintaining that telescopic sights could be used to determine the directions of stars better than the open sights that Brahe (and after him Johann Hevelius) had used. Telescopes were especially effective with a micrometer in the eyepiece. At the same time clock makers in England and France were making the pendulum clock into a precision instrument. If a telescope were mounted on a wall so that it rotated in the meridian, the angle at which a star crossed the meridian, as determined with the micrometer telescope, gave the declination of the star, while the time of crossing, measured with a good clock, gave the right ascension.
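In modern terms the principle of the mural instrument can be put in one line. Assuming an observer at latitude $\varphi$ and a star crossing the meridian south of the zenith at measured zenith distance $z$, the declination $\delta$ and right ascension $\alpha$ follow as

$$\delta = \varphi - z, \qquad \alpha = \Theta,$$

where $\Theta$ is the local sidereal time of the crossing, kept by the clock. This compact statement is a present-day gloss, not Flamsteed's own formulation.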
Halley had a micrometer telescope and a good clock when, as a very young man, he spent a year on St Helena and measured the positions of stars not seen from Paris or London. While he could not determine absolute positions, he produced the first catalogue made with the new instruments, and became renowned throughout Europe.² Flamsteed meanwhile had been appointed His Majesty's Astronomer Royal, the only fellow of the Royal Society paid to do science. At first the Society was not directly involved, but after Newton became President in 1704, he was appointed, together with a few other fellows, to be Visitors to oversee the work of the Astronomer Royal. Flamsteed was not pleased. Soon after Halley had come back from St Helena, Flamsteed was able to set up his mural telescope at Greenwich, and then for the next forty years he devoted himself to his telescopes and to producing his Historia Coelestis, containing his observations and the catalogue of fixed stars derived from them. It was a great achievement. Alas, it was obsolete almost as soon as Flamsteed was dead. Halley discovered the proper motion of stars, that they move relative to one another, and James Bradley discovered the aberration of starlight, the composition of the velocity of light with that of the Earth about the Sun. The fixed stars were not fixed, contrary to what Flamsteed and everyone else had supposed, and future star catalogues would have to allow for that.

BIOLOGY

The astronomical achievements of fellows of the Royal Society owed more to their predecessors of the north than to Galileo. Biologists among the fellows of the Society clearly took up the same topics as those to which the founders of the Lincei, and especially Prince Cesi himself, had devoted themselves. In 1624 Galileo sent Cesi a microscope, of what construction is not known. With it Cesi made many studies of the anatomy of plants, especially of fungi and seeds.³ Not until the last two decades or so have the scope and originality of his work become known. John Ray however knew about it and referred to it in his Historia Plantarum, though in how much detail is not clear. He learnt of it from a publication of the Lincei that appeared only in 1651, twenty years after Cesi had died. The book was the Mexican Thesaurus, an edited version of the account of Mexican animals and plants produced by the Spanish explorer Francisco Hernandez, to which Fabio Colonna added a note on Cesi's microscopic studies that Ray quoted almost word for word. Ray also knew of the scheme for the classification of plants that Cesi had devised.⁴ Anatomical studies with the microscope, the first serious attempt at a classification based on the characters of the plants and not on such matters as medical uses: to those were added the first studies of fossils. Cesi's estates at Acquasparta had large deposits of fossil wood. Cesi identified them as remains of once living trees and studied them microscopically. The Royal Society might be said to have been the centre of microscopical anatomy, of plants and animals both, at the end of the seventeenth century. Three notable books were published with the imprimatur of the Society. The Micrographia of Robert Hooke is probably the one best known today. It is a collection of studies of a wide variety of objects, animal, vegetable and mineral.
Nehemiah Grew in The Anatomy of Plants described the structures that he had found in plants by systematic microscopical
investigations. Marcello Malpighi of Bologna collected his microscopical studies of plants and chicken eggs in his Anatome Plantarum. Antoni van Leeuwenhoek, in the Netherlands, sent his observations to the Society, where some of his specimens still survive among his correspondence. Even John Locke, staying in Montpellier for his health, used a primitive single-lens microscope to study the eggs of insects from which a red dye was made.⁵ Taxonomy is the study of how living creatures are related in species, genera and so on. John Ray introduced new criteria for classification in the Historia Piscium, an account of fishes that he completed after the death of Francis Willughby.⁶ The many excellent illustrations made it very expensive and put the Society, which published it, in financial difficulties. Ray broke away from old schemes based on supposedly authoritative accounts from long ago and instead used external morphology to determine relations between types of fish. That may seem primitive compared with using DNA or breeding habits to determine genetic relations, but it remains the only means by which many fossils can be classified. Later Linnaeus (who knew something of the microscopic work of Cesi) developed his well-known binomial scheme. There seems to be a clear sequence from the classification of Cesi, to that of Ray, and then on to Linnaeus. Francesco Stelluti, who published Cesi's studies of fossils after Cesi's death, misunderstood them and suppressed or weakened Cesi's conclusions that the fossils were remains of once-living organisms. Sir George Ent had received samples of Cesi's wood from Cassiano dal Pozzo, and they were shown at meetings of the Royal Society. John Evelyn (in Sylva) and Robert Hooke (in Micrographia) both concluded that they had once been pieces of trees. Fellows of the Royal Society were most certainly interested in fossils, especially objects such as shells that seemed to be like the shells of living creatures, and yet were not the same. John Ray, crossing the Alps from Switzerland into Italy in 1664, saw many in the alpine rocks and considered at some length various ideas about how they came to be, such as that God put them there to mystify us. He concluded that they were indeed the remains of creatures that had once lived in a sea where the Alps now were. Robert Hooke in Micrographia, and Edmond Halley, who saw shells in rocks near Harwich on the east coast of England, thought the same.⁷ By the time of the foundation of the Royal Society, Cesi had been dead for thirty years, and it is not easy to assess how well his works were known in England. Many of the drawings that recorded his microscopical studies passed into the collections of Cassiano dal Pozzo. Evelyn and Ray may have learnt of Cesi's work when they saw dal Pozzo's collection in Rome in 1644 and 1664 respectively.⁸ Sir George Ent also corresponded with Cassiano. There are copies of the Mexican Thesaurus in the Royal Society and the British Library, but not in the University Library, Cambridge. Ray evidently knew it when he came to write the Historia Plantarum, but did he himself have a copy? As for Hooke, Grew and Malpighi, none of them mentioned Cesi.
In the early decades of the eighteenth century, Stephen Hales FRS, who died in 1761, introduced new mechanical Newtonian ideas into the study of living creatures with his investigations into blood pressure and flow, published in Haemastaticks, and into the rise of sap in plants, published in Vegetable Staticks.⁹ Before him, studies of living creatures were essentially anatomical, of structures; he asked how they worked mechanically.
WHAT IS MODERN SCIENCE?
In what ways can the astronomy and biology of the first century of the Royal Society be said to be 'modern'? In one important respect our predecessors were not modern. At the end of his life Newton had prepared a general chronology, including biblical chronology, for the personal use of Queen Caroline. After his death it came to be published and drew criticism from Etienne Souciet, a French abbé, who considered that Newton had misdated the Siege of Troy and the Voyage of the Argonauts. Halley came to Newton's defence in two papers in the Philosophical Transactions, arguing that Newton had correctly interpreted the astronomical evidence on which he had based those supposed dates. The distinction we make between myth and history was not yet clear.¹⁰ What is modern science and what do we mean by it? A few characteristics that developed in the first century of the Royal Society do at least seem essential to modern science. Newly developed instruments, telescopes and microscopes, were used to look further into the heavens and probe deeper into the structures of objects living and inanimate. Clocks became increasingly reliable and precise, so that time, once a psychological sensation, became a definite measurable physical variable. Mathematics was used to summarise observations, in the form of statistics, as summary formulae and in incipient set theory. Above all, Newton showed how to set up an abstract analogue or model of a natural system, how to work out its behaviour and how to compare that behaviour numerically with observation. Newton and his contemporaries may not have made a clear distinction between history and myth, but they did realise that the world and its contents had not been made once and for all in the relatively recent past. Thus Halley estimated the ages of lakes and seas from their salinity and the rate at which rivers brought in salt. He also indicated that the Moon was speeding up in her orbit about the Earth, and later found the proper motion of stars.¹¹ The World was changing.
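Halley's salinity argument amounts to a simple accumulation estimate. In modern notation, and on the illustrative assumptions (mine, not Halley's) that the basin started fresh, that rivers bring in salt at a constant rate, and that none is removed, the age follows as

$$t \approx \frac{M}{\dot{M}},$$

where $M$ is the mass of salt now dissolved in the basin and $\dot{M}$ is the mass of salt brought in each year. Halley, lacking a value for the rate of inflow, proposed instead to compare salinities measured at epochs separated by long intervals; the formula above is only the skeleton of the reasoning.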
Fossils too showed that both the rocks and their contents had not always been as they are now. Those were disturbing ideas for traditional preconceptions; they aroused hostility in the 1660s but were tacitly accepted by many in the 1760s. New and improved instruments, new discoveries, new ideas, soon became widely known through printed publications. Prince Cesi himself had set that out as a principal aim of his new academy, and the Royal Society actively followed his precepts, whether the fellows knew of them or not. The new discoveries would hardly have been made and, if made, probably not published, had the Royal Society accepted the authority of the past, or of divines, or of philosophical schemes. The motto of the Royal Society is 'Nullius in Verba'. Its exact meaning is debatable, but it might be paraphrased as 'Do not believe text books'. Again, Cesi affords a precedent. In 1615, in a lecture in Naples, Del natural desiderio di sapere, he asserted the human duty to study the natural world for itself and to make known the results. He strongly criticised all institutions (universities as well as ecclesiastical authorities and followers of Aristotle) that claimed authority over natural knowledge.¹² In the course of the first century of the Royal Society, its Fellows helped to establish the idea of science as an autonomous enterprise with its procedures determined by the phenomena being investigated, setting its own aims and methods, and independent of extraneous authority, whether historical, philosophical or religious.
NOTES
1. See Cook, Alan, Edmond Halley: Charting the Heavens and the Seas (Oxford, 1998), pp. 147-148.
2. See Cook, loc. cit. (note 1), Ch. 3.
3. For Cesi as botanist, see Pignatti, S. and Mazzolin, G., 'Federico Cesi Botanico', pp. 212-223, in Convegno celebrativo del IV centenario della nascita di Federico Cesi (Acquasparta, 7-9 ottobre 1985), Atti dei Convegni Lincei 78, Roma, Accademia dei Lincei, 1986.
4. Ray, John, Historia Plantarum (London, 1674), p. 132; Hernandez, Francisco, et al., Rerum Medicarum Novae Hispaniae Thesaurus seu plantarum, animalium, mineralium Mexicanorum Historia (Roma, 1651).
5. Hooke, Robert, Micrographia (London, 1665); Grew, Nehemiah, The Anatomy of Vegetables (London, 1672); Malpighi, Marcello, Anatome Plantarum (London, 1675); for Locke, see Lough, John, John Locke's Travels in France, 1675-9 (Cambridge: Cambridge University Press, 1953).
6. Ray, John, Historia Piscium (London).
7. Evelyn, John, Sylva (London, 1664), pp. 95-97; Hooke, Robert, Micrographia (London, 1665), pp. 105-107, 110-112; Ray, John, Observations Topographical, Moral and Physiological (London, 1673), 499 pp. + Catalogus Stirpium; for Halley's comments, see Journal Book Copy, Royal Society of London, for 1 August 1688.
8. Haskell, F. and McBurney, Henrietta, 'The Paper Museum of Cassiano dal Pozzo', Visual Resources, 14 (1998), 1-17; The Diary of John Evelyn, ed. E. S. de Beer (Oxford, 1955), Vol. 2, p. 277; for Ray's visit to the Cassiano collection, see Skippon, Philip, 'An account of a journey through part of the Low Countries, Germany, Italy and France', in A. and J. Churchill, A Collection of Voyages and Travels (London, 1732), pp. 361-736 and Index.
9. Hales, Stephen, Vegetable Staticks (London, 1727); Haemastaticks, vol. II of Statical Essays (London, 1733).
10. For Newton's chronology, see Cook, Halley, loc. cit. (note 1), p. 400.
11. See Cook, Halley, loc. cit. (note 1), pp. 225-228, 346, 348-349.
12. For Cesi's lecture, see Montalenti, Giuseppe, 'Introduzione al Convegno', in Convegno celebrativo del IV centenario della nascita di Federico Cesi (Acquasparta, 7-9 ottobre 1985), loc. cit. (note 3).
THE ACADÉMIE DES SCIENCES AND FRENCH CENTRALISATION

GUY OURISSON
Académie des Sciences, Strasbourg, France

INTRODUCTORY REMARKS
I hope the reader will accept a few highly personal introductory remarks, to explain why I am sensitive to the extent to which France remains de facto deeply, instinctively Parisocentric, even in an age of global science. I was appointed to the Faculty of Sciences of Strasbourg in 1955 by the General Director of Higher Education in the Ministry. The University of Strasbourg was not consulted. He asked me to promise that I would stay there "at least five years", and signed the appointment letter. Fifty years later, after I have, in hundreds of scientific and administrative meetings, forcefully spoken in the name of the non-Parisians, I still meet colleagues or administrators who simply cannot believe that I do not even have a small flat in Paris. I have been a member of the Académie since 1981. My appointment was signed by the President of the Republic; it carries a pension, something that the French (among others) like very much (its annual amount is about the monthly minimum wage, but it is drawn directly from the Trésor Public, the State's kitty; in earlier times it would have been drawn directly from the King's purse). In 1997, I was elected Vice-president of the Académie, i.e. de facto President-elect, and thus in 1999-2001, for the first time since the foundation of the Académie in 1666, a "non-resident" (read: a non-Parisian) occupied this exalted position. I did in fact reside, Mondays through Thursdays, in a small studio on the premises; however, my successor being of course a Parisian, this studio was promptly converted into offices: the prospect that another "provincial" could be elected as President in the near future is negligible. The Académie des Sciences is still very much a Parisian institution open to non-Parisian members.

WARNING

I am a chemist, not a historian. I therefore feel unable to provide the eminent participants of this meeting with original and learned views, based on my own study of our Archives and revealing original documents. I have consulted our excellent Director of the Archives, Mme Florence Greffe, who gave me the titles of the books I should absolutely read if I wanted to sound informed.[i] This I did, and my text is based on these readings, as well as on my personal experience. This essay therefore has no historical value, but hopefully may constitute a testimony.

GENERAL

The word "Academie" (or "Académie") leads to 246,000 Web sites through Google. Many are French sites, and lead to the 5 Academies of the Institut de France, but also to the French National Académies (de Médecine, de Pharmacie, de Technologies, etc., not part of the Institut de France), to the many regional Academies, often very poor institutions with a very rich and long history, and to the US or
Dutch Academies.[ii] The only ones we consider in this Symposium are those we regard today as the "National" ones: the Accademia dei Lincei, the Royal Society, the Prussian (now Berlin-Brandenburg) Academy, the US National Academies, and many others, now part of the international "families of Academies": IAP, IAC, ALLEA... and of course the international ones like the Pontifical Academy or the Academia Europaea.

WHAT DO THESE ACADEMIES DO?

One of us used to say, of course only of the French Académie:
They meet,
They vote to change their Statutes,
They count their dead,[iii]
They vote to elect new members,
They vote to select laureates for their prizes,
They meet, they vote, they count...
This is of course true, but largely incomplete:
They also publish,
They organise meetings,
They prepare reports,
They take part in international relations,
They foster the creation of useful agencies.

They publish: A traditional activity of the Académie has been to publish scientific literature. With their British counterpart, the Transactions, the Comptes Rendus are among the oldest continuously published scientific journals in the world. Since their heyday in the late XIXth century, they have progressively declined in importance as many more media became available, but they are regaining strength after what appeared to be their demise. They now publish about 10,000 pages of original scientific documents each year; we are however not at all happy with their poor performance as regards citation data. We should not forget that these journals in fact invented the modern publication procedure, where the author submits his publication proposal to one or two referees who advise for or against publication; in modern journals, refereeing is usually done under the cloak of anonymity, whereas in the Comptes Rendus the tradition was that you should bring your manuscript in person to one of the Members, who would read it and accept it, ask for changes, or reject it. I had to bring my first publication to the very old and powerful Marcel Delépine on a Monday, in his apartment; he asked me courteously to explain it briefly, and it was published the following Monday. I am not sure it deserved such rapid publication. Anyway, now the process is less personal, referees are not only the Members of the Académie and remain anonymous, but... publication takes much longer, even with electronic gadgetry. The Académie also publishes reports, which are examined below.

They meet and organise meetings: For many years, the meetings of the Académie were used to show Science. Some classical examples are:
The presentation of "novel" fossils by Cuvier;
The demonstration of the effect of an electric current on a magnetic needle by Arago, following his description to the Académie, the preceding week, of Œrsted's results;
The presentation by Pasteur of his sterile vials which he had opened in the clean air of the Mer de Glace;
The presentation by Becquerel of his photographic plates which had been darkened not by light but by the "rays" emitted by uranium salts, etc.;
The presentation by Friedel of the discovery of radioactivity by Marie and Pierre Curie.
Examples abound; but this is all past, and that fashion is apparently gone. In more than 20 years of active participation, I have never seen any experimental demonstration nor any presentation similar to the classical ones mentioned above. However, from the lectures, long or short, presented during our meetings, I have often gathered up-to-date information on the most diverse problems. From the way light passes through holes smaller than its wavelength, to the depth at which Antarctic penguins dive to catch fish; from new materials for car catalytic converters, to the disposal of fission products in a nuclear economy; from the concatenation of molecules into molecular knots or trefoils, to some news about prime numbers, etc. Reading general scientific journals, like Nature or Science, or specialised ones, is of course another way to achieve the same results, but one should not downplay the usefulness of the Académie from that point of view. It is true however that the benefit is restricted to the attendees, that the august Meeting Room is as ill-adapted as possible to presentations to the public, and that most of us have not learnt the delicate art of speaking to the public at large, journalists or TV people, or even to colleagues of other denominations. Another mode of transmission of scientific information is, in my view, much more efficient. It is linked with what has become in recent years a major contribution of the Académie: the organisation of scientific meetings. Some of these are organised in the traditional way, with a roster of invited speakers and a programme leaving some time for discussions; the next one will be on "Water", and its wide-ranging programme will probably attract 200-300 participants. Some of these meetings are organised jointly with another of the National Academies, for instance with the Académie de Médecine on medical themes, others with a foreign Academy.[iv] The original advantage of the Académie in organising these meetings is that it can cut across fields and disciplines more easily than the specialised scientific societies. It can also organise bi-national meetings, and has done so with the Academies of Spain, of the US, of Canada, of Russia and of China. In my personal experience, another useful activity of our Académie has been to take advantage of its solemnity and undisputed seriousness to experiment with other types of meetings. I have personally invested much time in experimenting with a small number of young participants, strictly selected, and secluded in an isolated place for the duration of the meeting: The "Scientia Europæa®"[v] meetings brought together 50 young[vi] European scientists for five days, chosen from among approx. 350 nominees proposed as exceptionally good by the various European Academies (+ MPG, + CNRS, etc.). "Exceptionally good" people should not require a programme; the posters they presented and discussed defined the programme. Seven such meetings have lured 350 participants from 35 European countries.
Unfortunately, this highly successful experiment
requires generous sponsors, and the ones we had found did not fare well on the Stock Market... Meetings on multidisciplinary topics, with a group of 20 young European participants, again good enough to take the shock of mixing hard scientists with archaeologists, or even with artists; there again, we met success on themes like "Physics of the isolated molecule" or "Surfaces", perhaps mostly because the good image of the Académie could open for us the doors of a very beautiful domain in the southern Alps belonging to a very generous Foundation. Meetings on organisation problems: a meeting co-organised with the German Leopoldina and the Junge Akademie dealt with a comparison of the problems encountered by young scientists in France and Germany at the beginning of their careers. 20 participants, 10 from each country: 10 administrators of programmes intended to help beginning independent scientists (CNRS, MR, Volkswagen Stiftung, A. von Humboldt Stiftung, MPG...), and 10 beneficiaries of these programmes. We shall continue.

They prepare reports: In the early years of the Académie, the King and his ministers considered it quite normal, as they were spending money on the Académie and its Members, to ask them to contribute to the welfare of the Kingdom. A classic example of this activity has been the search, in France as in England with the Royal Society, for ways to measure longitude at sea: claims of ownership of newly discovered islands depended directly on this mastery, and this depended in turn on the development of reliable and sturdy chronometers. The presence, on the premises of the Académie, of the "Bureau des Longitudes" is testimony to this activity. However, it is rather interesting that there were periods in the 19th century when a request for Reports was considered an infringement on the real, exalted goals of the Académie, and was quite efficiently resisted until the practice disappeared. Thus, it was a secular revival when, some 30 years ago, our Permanent Secretary Paul Germain obtained from the President of the Republic, Giscard d'Estaing, the request to prepare a report on "The state of mechanics in France". This led to intense work, and to the publication of the first "modern" report of the Académie. In 1998 again, the President of the Republic, Jacques Chirac, asked the Académie, through its then President Jacques-Louis Lions, to produce a series of reports on some general, worldwide problems. The preparation and publication of these reports demonstrated the capacity of the Académie not only to mobilise its Members, but also to enlist the help of outside scientists - and not only, as some have maliciously suggested, of prospective candidates for election. Emboldened by the success of this endeavour, the Académie proposed to an Interministerial Committee, and accepted from it, the charge of preparing Reports on Science and Technology each year, against a reasonable subsidy; these now constitute a major series of highly interesting publications, a sort of encyclopaedia by instalments.[vii]

They take part in international relations: Leibniz and Newton were "Membres" of the Académie at its foundation. Note: not "Membres étrangers". Science is global; like the other Academies, the Académie tries to maintain and to enliven a vast network of inter-academic exchanges:
It now contains in its membership approximately 140 foreign members; it is true that, for most of them, it is only an honour, but some do take part actively in some of the activities of the Académie;
The Académie entertains inter-academic exchanges of "name lectures" with the Royal Society, the Netherlands Academy, the Lincei;
It plays an active role in the international organisations of Academies: the InterAcademy Panel (IAP), the InterAcademy Council (IAC), the InterAcademy Medical Panel (IAMP), the All-European Academies (ALLEA), the European Academies Science Advisory Council (EASAC), the Conférences Amaldi, and it entertains close relations with ICSU, with the EU, with developing countries (via TWAS), and with the Scientific Unions.
It is a rather recent development that the Académie has strengthened its links with the French Foreign Office, in different ways: in selecting scientists to take part in international meetings; in helping some Scientific Councillors of our Embassies to evaluate the scientific proposals for subsidies that they receive (for meetings, for post-doctoral fellowships, etc.); in using visits of its Members to conference sites to organise informal meetings of their retinue of friends and colleagues with the Scientific Councillors, and thus to enlarge their basis of operations. It is true that the international alibi is not enough to make a useless action a good one. But it is also true that, in the scientific field, an action limited to its national dimensions is most often useless.

They foster the creation of useful agencies: Like Queen Victoria (in a probably apocryphal declaration), the Académie "may have little power, but is not devoid of influence". It has used its influence to obtain the creation of informal organisations that later developed into very important agencies. LA MAIN À LA PÂTE: This is a programme resulting from the visionary and missionary work of Georges Charpak, Yves Quéré and Pierre Léna, to innovate the teaching of experimental sciences in primary schools. It started as an experiment, wisely restricted to a few groups of children, and is progressively spreading through the whole fabric of primary education in France. This imaginative programme remains under the operational control of the Académie, but is being developed progressively, in France and in several foreign countries, notably in China, with the necessary modifications. The Ministry of Education has so far wisely refrained from appropriating this programme, which, despite its success, is still considered by its staunchest proponents as experimental - to avoid the freeze that would result from its transformation into a nation-wide official curriculum. But in a longer perspective, the destiny of La Main à la Pâte is not to be a permanent part of the Académie, which has simply nurtured it, and has probably made it viable by protecting it in its infancy. The same is true of the Fondation nationale Alfred Kastler, which was set up as a service of the Académie to try and improve the way foreign researchers, young or already established, were received in France: to inform them of the formalities before their arrival, to work with our foreign visitors to evaluate what may have been negative during their stay, with the research organisations and ministries to improve these sore points, to try and set up competent offices in the various Universities and
research centres to handle local problems, to exchange information between these centres, etc. The "FnAK" is now about ten years old; it has been remarkably successful in its endeavours, and has grown progressively into the largest service of the Académie, budget- and personnel-wise. This very success has led to technical difficulties, and to the decision to find another administrative structure for the FnAK, which is now housed by the Cité Internationale Universitaire de Paris, as an autonomous component of this large organisation. In this case also, the destiny of the FnAK was not to remain a permanent part of the Académie; however, it is quite certain that, without the protection of the Académie during its infancy, the FnAK would not have been taken seriously by the various Ministries and agencies which agreed to launch it and have ensured its growth. A further word about centralisation. The Académie recognised, about the time it was getting ready to elect me as its Vice-president, that it now comprised a fair number of non-Parisian members. Our Permanent Secretary, Jean Dercourt, set up a series of meetings outside Paris. In the major Centres, the local Members are asked to set up a programme of two days, often organised in cooperation with the regional Académie. Often, Members "exiled" from these Centres to Paris are invited to participate; in some cases, their "return" is a major event for the local University and for the local newspapers. It is too early to really gauge the efficacy of these Meetings, but the effort must be pursued. Academies have to invent new ways of remaining useful, if they do not want to become dead bodies: the pride of their members in their membership is not in itself a justification for their perpetuation!
i. Some sources: Roger Hahn, The Anatomy of a Scientific Institution - The Paris Academy of Sciences, 1666-1803, University of California Press, Los Angeles, London, 1971. Maurice Crosland, Science under Control - The French Academy of Sciences 1795-1914, Cambridge University Press, Cambridge, 1992. Elisabeth Badinter, Les Passions intellectuelles, 2 vol., Fayard, Paris, 1999. Académie des Sciences, Règlement, usages et science dans la France de l'absolutisme (Proceedings of a Colloquium, 1999), Lavoisier, Paris, 2002.
ii. Also to the French regional administrative divisions of the Ministry of Education called Académies, to Dance or Soccer Academies, to Billiard Academies, etc.
iii. The total Membership of the Académie was fixed, not the annual flux of new Members as in some other Academies. Counting the dead therefore used to be an important occupation. Now, what is fixed is the total number of Members under 80 years of age.
iv. List of Symposia organised on the premises of the Académie in 2002 and 2003: "Transferts de matière à la surface de la terre"; "Effet de serre : impacts et solutions. Quelle crédibilité ?"; "Cellules souches et thérapie cellulaire"; "L'immunité innée - De la drosophile à l'homme"; "Chimie et nanosciences"; "Les risques nucléaires"; "Les risques chimiques"; "La sécurité sur la toile Internet"; and two of the most recent inter-academy ones: "Chemistry and Mathematics: two scientific languages of the 21st Century", organised by the Leopoldina with the participation of the Göttingen Academy and the Académie des Sciences,
and "L'immunité innée - De la drosophile à l'homme", organised with the Academy of Medical Sciences (UK).
v. Registered trade-mark of the Académie.
vi. "Youth" should be defined. I usually propose saying that "young" means no more than 5 years older than myself. However, in the case of the Scientia Europæa meetings, the limit was fixed at 40 years old.
vii. List of the Reports published by the Académie:
Études sur l'environnement - De l'échelle du territoire à celle du continent
De la transgenèse animale à la biothérapie chez l'homme
Les plantes génétiquement modifiées
Rapport biennal sur la science et la technologie en France - Synthèse 1998-2000
Systématique - Ordonner la diversité du Vivant
Le monde végétal - Du génome à la plante entière
Sciences aux temps ultracourts - De l'attoseconde aux pétawatts
La statistique
Systèmes moléculaires organisés - Carrefour de disciplines à l'origine de développements industriels considérables
La chimie analytique - Mesure et société
Matériaux du nucléaire
Radiochimie - Matière radioactive et rayonnements ionisants
Le médicament / Medicinal drugs
Physiologie animale et humaine - Vers une physiologie intégrative / Animal and human physiology - a step towards integrative physiology
Développement et applications de la génomique - L'après-génome
Les neurosciences fonctionnelles et cognitives : recherches sur la physiologie et les pathologies du système nerveux
Exploitation et surexploitation des ressources marines vivantes
Stratospheric ozone
Contamination des sols par les éléments en traces : les risques et leur gestion / Soil contamination by trace elements: risk management
L'ozone stratosphérique
Impact de la flotte aérienne sur l'environnement atmosphérique et le climat / The impact of aircraft fleets on the atmospheric environment and on the climate
Aspects moléculaires, cellulaires et physiologiques des effets du cannabis / Molecular, cellular and physiological considerations appertaining to the effects of cannabis
Problems associated with effects of low doses of ionising radiations
Pollution atmosphérique due aux transports et santé publique / Atmospheric pollution caused by transportation and its effects on public health
L'avenir de la recherche universitaire - Le devenir des docteurs des universités françaises
Valorisations non alimentaires et non énergétiques des produits agricoles / Development of agricultural products for purposes other than food or energy
État de la recherche toxicologique en France
La recherche scientifique et technique dans le domaine de l'énergie
Perspectives éducatives des formations techniques et professionnelles
L'appareil d'information sur la science et la technique
Quelle place pour la métrologie en France à l'aube du 21e siècle ?
THE 40th ANNIVERSARY OF THE 'ETTORE MAJORANA' FOUNDATION AND CENTRE FOR SCIENTIFIC CULTURE

ANTONINO ZICHICHI
CERN, Geneva, Switzerland, and University of Bologna, Italy

Over the course of the last 40 years, eighty-six thousand scientists from one hundred and forty nations have taken part in post-university activities that have rallied around the banner of a Science without secrets and without frontiers. This scientific community has striven to break down ideological, political and racial barriers that were invented not by Science, but by its worst enemies. The very existence of a scientific community as vast as that of Erice serves as concrete evidence that the new role of Science has already become a reality. To conduct Science means to discover the Fundamental Laws of Nature. The applications of great scientific discoveries almost always slip out of the control of Science itself. This is why technological development almost always contradicts the values instilled by Science: love for Creation and respect for life and human dignity. "Science and Faith are both gifts of God", said John Paul II. No Pope has ever before had the courage to put Science and Faith on pedestals of equivalent dignity, and it is out of this truth that the new role of Science is born. And it is this same truth that fathered the "Spirit of Erice", known in the international scientific community as "The Erice Geist" (a mix of Italian, English and German). It was this "Erice Geist" that brought the greatest Defense intellects from the USA and the USSR to share the same table at Erice's Seminars on Nuclear War. Thus it seems worthwhile to ask ourselves what happened at Erice 40 years ago. To understand what happened at Erice from 1963 on, it is first necessary to jump back nine hundred years in time. When the first University was founded nine hundred years ago in Bologna, the impetus came from a source that has still not run dry, even today. Quite the contrary. To learn the origins of the latest inventions and discoveries straight from the mouths of the inventors and discoverers themselves... It was this possibility that pushed a group of well-educated men to establish the first University. In that era, the medical and juridical sciences were the centre of attention. To learn of the latest findings, one had to wait for the books to be drafted. The time necessary: ten years. But now books can be printed in a week. Why then should we insist on seeking the "living voice" of those who discover and invent?
Victor F. Weisskopf with Antonino Zichichi in the garden at CERN (1960).
While the time required to publish a volume has been reduced, our body of knowledge has expanded at an overwhelming pace. What we have learned from Galilei's time up to today is greater than everything that happened during the ten thousand years that separate us from the dawn of civilisation. And what has become of that institution called the University? Slowly and quietly it had to absorb the enormous growth of knowledge. In all fields. So, from the propulsive centre of new knowledge, it became a place of propaedeutic formation. Bringing young scholars to the threshold of new cognition, enabling them to understand what is being done in the most advanced sectors of the different disciplines through which human knowledge is articulated: that is the work of today's University. And it is not trivial work. How should we present the latest inventions and discoveries? By talking about who did them? How they were done? How the conclusions were reached? When a student opens a university text, he almost always gets the impression that the topic of study is a closed chapter. He will rarely find the space to discuss and comprehend what the open problems are in that field. Nevertheless, all fields have such problems to resolve. The expansion of human knowledge has brought university teaching to a territory that is completely different from where it started out. Nine hundred years ago, a well-educated person could be versed in several different disciplines to their full depth. Today, the discipline of physics alone corresponds to an immense world of knowledge. There are molecular physics, atomic physics, and nuclear and subnuclear physics, to mention only a few of the largest subdivisions in the science of physics. And even if we focus on subnuclear physics alone, we find that there are at least ten recognized specialities within this field.
Victor F. Weisskopf, Sidney D. Drell and Antonino Zichichi (Erice, 26 May 1963).
In mathematics there are hundreds of sectors, each of great size and interest. Nothing changes if we shift from so-called pure Science to applied Science. To speak of medicine is to give one name to a thousand sectors. So what happens to a young student who wants to know the latest in the sector that is most fascinating to him? A single University might have, at best, only a couple of first-order superstars, and they alone certainly cannot cover all the fields of human knowledge. It is possible to locate the specialists who work on the cutting edge in any particular sector, but it would be an impossible undertaking to seek to learn directly from all of them. You would have to travel all over the world to have the privilege of meeting them and sitting in on their classes. At Erice, anyone who participates in the courses of a particular School - the oldest is that of subnuclear physics - is called a 'student'. In reality, this usually involves young scholars who have successfully completed their university studies and who come to Erice to find out what the new problems are. This happens in all fields, which are too many to enumerate here. It suffices to say that there are over one hundred schools in existence. But what distinguishes Erice the most is the spirit that animates all participants, both students and docents. The primary objective is to learn. No diplomas or degrees of any type are given out. As it was nine hundred years ago. The student listens to the lesson and then, after a break for lunch, the fun part begins. The student can ask any question of the professor. Even the most banal - it will not be punished. It is in everyone's interest to know the thoughts of young brains upon their exposure to scientific findings about which they had, presumably, already imagined many details and specifics, but rarely the same ones that are mixing around in the head of the docent. For a single problem, there are many different approaches. This is the whole point of discussion groups. When a group of scientists gathers to address themes of great scientific novelty, almost anything can happen. Once the Scientific Director of the Zurich branch of IBM came to this School. On his return to Zurich, he resigned from his directorship in order to dedicate himself to an idea that had come to him during the courses at Erice. That idea led him to discover high-temperature superconductivity: he was awarded the Nobel Prize. We are talking about Alex Müller. This is one example of how new ideas can be born at Erice.
Professor Eugene Wigner (on the left in the photo), father of the Theorem of Time, and Professor Paul Dirac (on the right in the photo), father of the equation which led to antimatter, with Antonino Zichichi.
The example cited refers to pure scientific research, even if the applications of high-temperature superconductivity will be of enormous interest for the transport of electric energy and thousands of other activities. The idea of greatest value to come out of Erice, over the course of long years ridden by oft-ignored conflicts involving many countries, is that of the Manifesto subscribed to by more than ten thousand scientists from one hundred and fifteen nations. The mission of that Manifesto is the fight against secret laboratories. There will come a day when anyone who does scientific-technological research in great secrecy will be indicted for crimes against Humanity. Opening the doors of scientific laboratories, whatever type of research this action might unveil, would not only provide new impetus to scientific research in all fields of knowledge. It would also derail the insane spiral of the arms race that today, after the fall of the Berlin Wall, has no further reason to exist. Maybe it seems utopian to think that it would ever be possible to root out the secrets of the scientific-techno-military laboratories. One thing, however, is certain. If we fail to do so, sooner or later the planet is destined to go up in smoke. The project of establishing a World Lab that is open to the best intellects, without racial, ideological, political, religious or geographical (East, West, North, South) barriers, is the fruit of a promise that the scientific community - led by Erice - has made for the sake of all those who love peace not only as a word, but also as something that they wish to construct day by day out of facts. As mentioned in the beginning, the scientists of Erice have given life to a new way of conceiving international scientific collaboration: without secrets and without frontiers. This is the Spirit of Erice. As an indispensable part of this collaboration, the Voluntary Scientific Service has the objective of helping to develop all the poor countries (South) that are far below the scientific and technological levels of today's industrialised countries (North). The Voluntary Scientific Service is able to realise projects that would otherwise require enormous sums of money because it can count on the work offered by thousands of scientists and specialists who ask nothing in terms of stipends or compensation for the work they put in. This voluntarism touches all levels, up to the highest. In fact, contributors to our projects include protagonists of global prestige from Science, Technology and Medicine, among whom are many Nobel Laureates.
The memorable session when the putsch in Moscow destroyed Gorbachev's attempt to bring the Soviet Union adiabatically towards democracy.

All these events are closely connected with the activities at the EMCSC, with the World Federation of Scientists as the main point of reference for all problems. For example, when the putsch took place in Moscow, the Soviet scientists present in Erice received an ultimatum to return home immediately. Most of them were with members of their family, and I vividly recall their terror over the possibility of seeing their common nightmare materialise: the return of a Stalinist-type dictatorship in their country. I sent a telegram to Moscow asking for confirmation of the order received by my colleagues in Erice. According to official news, the new government wanted to maintain international collaboration. An order to go back was in contradiction with official statements by the new Soviet government and would have produced serious consequences in the international scientific community. That telegram allowed the USSR scientists present in Erice with their families to refrain from immediately obeying the peremptory order they had received. Fortunately the putsch was quickly over, and the figure above shows a picture celebrating the end of the terrible hours, when we all were convinced that the world was going to be confronted with another long period of cold war. The putsch was really a terrible surprise. In fact, before the destruction of the Berlin Wall, the scientific community participating in the Erice Seminars was optimistic enough to start considering the problems that the planet would have to face once the East-West confrontation was over. This is how the first studies on the Planetary Emergencies started.
The results obtained thanks to the Voluntary Scientific Service are born of the Spirit of Erice and demonstrate the importance of our promise, as scientists from the industrialised countries (North), to create a scientific solidarity towards the states that are needy in every respect (South). To overcome the gap that grows bigger every year between the poor countries (South) and the rich (North), it is not enough to offer food and medicine. The poor countries (South) also need to learn how to approach and resolve - with their own intellectual energy - the problems that inhibit their own development. The rich countries (North) must help them, not with wasted economic energy invested in useless projects, but intellectually and materially, with concrete projects whose precise objectives have been elaborated by the scientific community in close collaboration with the best intellectual energy of the needy states themselves. Without the groundwork that the scientific community of Erice was able to establish over forty years of activity, North-South relations would probably be fixed on a collision course. It is the Spirit of Erice that allows us to hope that it may be possible to avoid this. The Centre for Scientific Culture "Ettore Majorana" is neither an Academy nor a University like those familiar to everyone. It is an Institution born in the heart of the Frontiers of Science through the work of Bell, Blackett, Rabi, Weisskopf and the writer himself. We are talking about exponents of Science who will be remembered in the History of Physics of the 20th century. Patrick M.S. Blackett, Nobel Prize winner, Lord and Grand Admiral of the British Navy, discovered the first example of a simultaneously produced "electron-antielectron" pair. In his Laboratory were discovered the first "strange particles", so called because no-one had predicted or foreseen them. These particles would open up a new frontier in the Subnuclear Universe. Isidor I. Rabi, discoverer of the effect that bears his name, was awarded the Nobel for this; to him we owe the creation of the prestigious School of Physics of Columbia University in New York, and of the Scientific Committee of NATO: an enterprise of formidable originality in that it linked a military structure - with the aim of defending Europe from the danger of invasion by the USSR - to the Science of free and democratic countries. It was the Laboratory of Columbia University in New York that brought in Enrico Fermi when he was unfortunately forced to leave Italy because of the racial laws.
8 May 1993, Professor Zichichi delivers to the Holy Father - on behalf of the ten thousand scientists of Erice - his welcome address.
Victor F. Weisskopf is a mythic figure of European Science. I was, with John Bell, at the beginning of my scientific career when Europe took its first steps towards building a structure capable of competing with the gigantic USA. This structure came to be called CERN (the European Nuclear Research Centre) and is located in Geneva. The mandate of this structure, as desired by Rabi, Blackett and Niels Bohr (one of the founding fathers of Quantum Physics), was to stem the flight (a very real one) of intellectuals towards the United States of America. CERN, endowed with the most advanced technological machines, was a necessary but insufficient condition for creating a global pole of attraction for the generations of European physicists who were already travelling outside their home countries. In addition to its technological structure, CERN needed a leader. It would have to be a great scientist, an exceptional master capable of sparking new interests. Weisskopf was the first physicist in the world to calculate the "virtual" effects, in those days called the "polarisation of the vacuum". Effects that are now the daily bread on the Frontiers of Physics. We would never have been able to reach the Superworld frontier if we had not been able to introduce "virtual" effects into the study of Galilean reality. Weisskopf led CERN to the centre of global scientific attention. John Bell, of my generation, became famous for discovering how to resolve the paradox of Einstein, Podolsky and Rosen, abbreviated as "the EPR paradox". These giants of Galilean Science of the 20th century signed, on the 8th of May 1962 in Geneva, the constitutive act of the Foundation and Centre for Scientific Culture "Ettore Majorana". And so was born the so-called "Erice Centre", with its intent to give new meaning to Science and its culture. Science, said Fermi, enters into society through its applications, not on the merit of its values. This is why it is necessary to distinguish scientific culture from its vulgarisation. The Erice Centre has demonstrated how it is possible for the values of science to enter into the culture of our times. On this subject, we need to overcome the paradox that has led us to the vulgarisation called "scientific", which does not distinguish Science from Technology and has never had the courage to denounce political and economic violence. Because it did not denounce the actual roots of the arms race and the ravages of industrialisation, the greater public was exposed to the thesis that it was scientific progress itself, along with its founding fathers, that was responsible for a planet full of bombs and 53 planetary emergencies.
«ETTORE MAJORANA» CENTRE FOR SCIENTIFIC CULTURE

Erice Statement

It is unprecedented in human history that mankind has accumulated such a mighty power to destroy, at once, all centres of civilization in the world and to affect some vital properties of the planet.

The danger of a nuclear holocaust is not the unavoidable consequence of the great development of pure Science. In fact, Science is the study of the Fundamental Laws of Nature. Technology is the study of how the power of mankind can be increased.

Technology can be for peace and for war. The choice between peace and war is not a scientific choice. It is a cultural one: the culture of love produces peaceful technology. The culture of hatred produces instruments of war. Love and hatred have existed forever. In the bronze and iron ages, notoriously pre-scientific, mankind invented and built tools for peace and instruments of war. In the so-called "modern era" it is imperative that the culture of love wins.

An enormous number of scientists share their time between pure Science research and military applications. This is a fundamental source for the arms race. It is necessary that a new trend develops, inside the scientific community and on an international basis. It is of vital importance to identify the basic factors needed to start an effective process to protect human life and culture from a third and unprecedented catastrophic war. To accomplish this it is necessary to change the peace movement from a unilateral action to a truly international one involving proposals based on mutual and true understanding.

Here are our proposals:

1. Scientists who wish to devote all of their time, fully, to study theoretically or experimentally the basic laws of Nature, should in no way suffer for this free choice to do only pure Science.

2. All Governments should make every effort to reduce or eliminate restrictions on the free flow of information, ideas and people. Such restrictions add to suspicion and animosity in the world.

3. All Governments should make every effort to reduce secrecy in the technology of defence. The practice of secrecy generates hatred and distrust. To start a ban on military secrecy will create greater stability than that offered by deterrence alone.

4. All Governments should continue their action to prevent the acquisition of nuclear weapons by additional nations or non-national groups.

5. All Governments should make every effort to reduce their nuclear weapons stockpiles.

6. All Governments should make every effort to reduce the causes of insecurity of non-nuclear powers.

7. All Governments should make every effort to ban all types of nuclear tests.

Conclusion

Those scientists - in the East and in the West - who agree with this «Erice Statement» engage themselves morally to do everything possible in order to make the new trend, outlined in the present document, become effective all the world over and as soon as possible.

This Statement was written in Erice, August 1982, by Paul A.M. DIRAC, Piotr KAPITZA and Antonino ZICHICHI. During the course of the three years (1982-1985) it has been signed by TEN THOUSAND scientists, the world over.

The Erice Statement.
So phrases like "father of the atomic bomb" and "father of the H-bomb" were coined, but without mentioning that the true fathers of these two devices were, respectively, Hitler and Stalin. It was they who gave life to the secret projects for those bombs. It is with the Erice Manifesto that justice was brought to bear on these cultural delusions, a justice that would be realised in 1989 with the collapse of the Berlin Wall. And it was again the Erice Centre that gave the example of the role of Science in confronting and resolving the problems that afflict this marvellous space ship in its voyage around the Sun. At Erice, in fact, will be founded the first nucleus of a new Laboratory with the mandate of studying Planetary Emergencies; not only the two most famous (the Greenhouse Effect and the Hole in the Ozone Layer), but all 53. The new Laboratory is called ILSEAT (International Laboratory for Science, Engineering and Advanced Technology) and has been inserted by the Governor of Sicily - Hon. Salvatore Cuffaro - into his government's program. The international scientific community, grateful for the diligence that exponents like those cited - Bell, Blackett, Rabi, Weisskopf - and hundreds of other prestigious figures of Science have exhibited, has welcomed the enthusiastic interest of Governor Cuffaro in this new scientific-technological reality that is rooted in 40 years of activity at Erice. This, however, concerns the future.
THE ACADEMIA EUROPAEA
SIR ARNOLD BURGEN
Downing College, The University of Cambridge, UK

The creation of the Academia Europaea arose out of a suggestion, made at a meeting of European Ministers of Science in 1985, that there was a need for a European Academy of Science. The proposal was developed through meetings at the Royal Society in London. These led to meetings of senior European scientists and to the creation of the Academia Europaea, which held its Foundation Meeting in Cambridge in 1988. The core idea was that closer cooperation in Europe meant more than political and economic arrangements; there needed to be understanding and appreciation of the historical and social differences that shape national attitudes, and there was much to gain in bringing scholars together who might then learn to act in concert rather than in isolation or rivalry. It was agreed that the Academia should cover the whole of knowledge and include the whole of geographical Europe. These proposals need further clarification. At the time of the Academia's foundation, the European Union represented only some of the countries of Western Europe and excluded those countries in Europe that were still within the Soviet pale and that later were undertaking reorganisation after the disappearance of Soviet control. Yet the scholars of these countries especially needed to be free to become members of any new Academy, which was to include the area that could be considered as sharing the Western heritage of ideas and culture. These features were, of course, not exclusive, having spread to the Americas and elsewhere, but could be contrasted with the cultures of the Middle East and Far East, which were rising in importance. In most countries of Europe some version of a National Academy existed, with varying degrees of competence, ranging from ones that were little more than social clubs to those that had control of a major area of national activity in the humanities and the natural sciences. For scholars from all these countries it was anticipated that the new Academy would provide wider horizons. The past century has been a period of unparalleled growth in knowledge in all areas, but particularly in the natural sciences. The result has been both a growth in specialisation and compartmentation of new knowledge, but also the formation of new subjects that broke away from their original affiliations and left their parent subject emasculated. In some cases, a great new idea or technology, such as molecular genetics or information technology, has drawn together contributors from a wide range of disciplines, but often at the price of incomprehensibility for those not in the know. Despite this, there has been a general feeling that there were great dangers in new developments that occurred out of context, notably new developments that might affect the environment or raise serious problems for the future of mankind. These questions could best be dealt with by a program of cross-disciplinary contact and discussion. This, then, was the background against which the Academia Europaea was created, and we now need to examine how it has tried to respond to these ideals and the problems encountered in trying to do so. The Academia now has over 2000 members elected from
across Europe and also has some 100 foreign members, most of whom have close connections with Europe. There are no quotas for different countries; members are elected on their merits, and indeed the quality can be judged by the fact that nearly all European Nobel Prize winners are members. The size of the Academia is probably too small and there are many fine scholars who are not yet members, but a balance needs to be found between inclusiveness and the unwieldiness of excessive size. The Academia has held its annual conferences at locations in 14 countries (London, Strasbourg, Heidelberg, Budapest, Uppsala, Parma, Krakow, Barcelona, Gent, Basel, Copenhagen, Prague, Rotterdam, Lisbon, and this year in Graz). The theme of these meetings has always cut across disciplines; examples are "The impact of modern biology on the politics, culture and economy in Europe" (1998), "The sea in the Culture of Europe" (1999), "Concepts of Time" (2000), "Risks" (2002), and "What makes us Human?" (2003). A problem that affects a European academy more than a National one is the ease of access of members to meetings. Whereas for a National academy the distances that members need to travel to attend meetings may be relatively short, for a European academy the distances are relatively large, and this has meant a preponderance of members from the local area where any Annual Meeting was held. In 1999 the practice was introduced of inviting a group of young scholars from the host country to be the guests of the Academia, thus preventing the meeting being only a matter of interest to older, well-established scholars; this has proved to be a popular innovation. A feature of the Annual Meeting has been a keynote lecture (the Erasmus Lecture) by a distinguished scholar, covering such topics as "Can there be a European Law?" (Ernst Mestmäcker), "National identity and the formation of States" (Alain Touraine), "Power and insecurity in Europe" (Lawrence Freedman) and "Language and the evolution of the Human mind" (Hubert Markl). Some of the difficulties of communicating across Europe can be overcome by printed publications, and the European Review was designed with this in mind. It is a quarterly publication whose aim is that the spread of subjects in every issue should be such that there is at least one article that appeals to the diversity of its readers, i.e., those who are not too narrow in their reading! This is not an easy objective to achieve, but it appears not to have missed the mark too often. The Review features what it terms Focuses, in which a topic is explored by several authors; recent ones have been on "Quality of life for the Elderly", "Risk", "China, Tradition and Modernity", "Japan and Europe", "Building social cohesions", "The future of Universities" and "The Theatre", and a current one is on "History and memory", recounting how the populations of various countries regard their conduct during World War II, seen from a perspective fifty years later. The Academia has undertaken a series of special studies, notably in the field of education, an area that is in a condition of stressful evolution everywhere in Europe, the old certainties being overturned by the desire to educate as many people as possible to a high level. The most recent publication is "Excellence in higher education", published this year, the result of a conference held jointly with the Wenner-Gren Foundation. Other themes have been "Psychosocial studies in Young People" (1995), "The idea of Progress"
(1997), "Growing up with Science" (1997), and "The impact of electronic publishing on the academic community" (1998). In 1993 the Academia instituted a programme of Prizes to young scholars to aid the development of science and scholarship in Russia. This involves the award of twenty-five prizes each year and has gained support from the Soros Foundation, the Rayne Foundation, Amersham International, the International Science Foundation and others. It is one of the most successful activities that the Academia has initiated. These are some of the achievements of the Academia Europaea to date; there is a feeling that more could have been achieved. What has inhibited a greater range of developments and impact? Two major constraints have been felt. The first is finance. It is much more costly to support a Europe-wide activity than one confined to a single country, and yet it is very much more difficult to achieve financial support for an organisation that is European. It simply does not offer the local and political attractions that allow National academies to command support from governments, nor does it attract most National Foundations; we can be very grateful that some Foundations have been exceptions to this generalisation and have been such warm supporters. It would seem obvious that the European Commission should welcome such an organisation contributing so effectively to a European spirit in scholarship, but in fact its statutes have prevented it giving more than support for speakers at some of the meetings. The financial situation remains at present a serious brake on the development of new activities. The second constraint is related to the very broad perspective of the Academia. For instance, while the breadth and quality of the European Review is applauded, this puts it very low on the priorities of University and other libraries that are under intense pressure to subscribe to specialised publications appearing in ever increasing numbers, and in consequence its circulation is unduly restricted. It takes time for a new academic activity to make its mark on the international scene, and by the standards of the other academies represented at this meeting, the Academia Europaea is hardly into childhood! However, the time is ripe for an increase in its influence in Europe, notably through new activities now in preparation. There are so many instances in the modern world where a balanced judgement is required between interests that press their case without regard for opposing views, and particularly where sound judgement is distorted by populist or political pressures; the Academia is in the position of being able to mobilise experts from across Europe to provide a balanced and well-argued evaluation of the total picture.
2. CLIMATE: GLOBAL WARMING
CLIMATE CHANGE AND THE COSMIC RAY CONNECTION

NIR J. SHAVIV
Racah Institute of Physics, Hebrew University of Jerusalem, Israel

ABSTRACT

We review the evidence linking cosmic ray flux (CRF) variations to global climate change. In particular, we summarize recent results demonstrating that the long-term CRF variability associated with galactic spiral arm passages is the dominant climate driver over geological time scales. This can be concluded from the large correlation apparent between the reconstructed CRF history and the geologically reconstructed temperature record, and the lack of any correlation with the amount of atmospheric CO2. The result can be used to quantify the CRF/temperature link and place an upper limit on the atmospheric response to CO2 variations. In turn, we show that this can be used to resolve the faint sun paradox and understand the origin of the global warming observed over the past century.

INTRODUCTION - THE ALLEGED COSMIC RAY FLUX-CLIMATE LINK

Accumulating evidence suggests that solar activity is responsible for at least some climate variability. This is indicated by the numerous correlations apparent between solar activity and either direct climatic variables or indirect climate proxies over time scales ranging from days to millennia (e.g., Herschel 1796, Labitzke and van Loon 1992, Eddy 1976, Friis-Christensen & Lassen 1991, Soon et al. 1996, Soon et al. 2000, Beer et al. 2000, Hodell et al. 2001, Neff et al. 2001). It is therefore hard at this point to argue against a causal link between solar activity and climate on Earth. Nevertheless, these correlations do not indicate which one of several possible pathways is actually responsible for them, and probably more than one mechanism is at work. The most obvious link would be through direct changes in solar luminosity. However, the climatic variability attributable to solar activity is a few times larger than could be inferred from the 0.1% typical change observed in the solar irradiance (Beer et al. 2000, Soon et al. 2000). Namely, the variability in the thermal flux itself appears to be insufficient to explain the climate variations observed. Two possible pathways were suggested through which solar activity could be amplified and more effectively generate climate variations. First, if the global climate is sensitive to the amount of tropospheric ionization, it should be sensitive to solar activity as well. This is because the solar wind modulates the CRF, and with it, the amount of tropospheric ionization (Ney 1959). A second possibility involves the large UV flux variations. Since this flux is the primary source of heat in the stratosphere, it could affect, for example, the Hadley circulation (Haigh 1996). Correlations relating CRF modulations to climate on Earth exist on two time scales. Over the solar cycle, the solar wind strength varies considerably and, as a result, the amount of tropospheric ionization changes by typically 5%-10%. Svensmark (1998, 2000) and also Palle Bago & Butler (2000) demonstrated that the variations over two decades in the amount of low altitude cloud cover nicely correlate with the CRF
reaching Earth. Both signals, the cloud cover and the CRF, lag by typically half a year behind other solar activity indices. This implies that the cloud cover is more likely directly related to the CRF, and only indirectly to solar activity. The reason the CRF lags behind solar activity to begin with is that it takes a few months for the solar wind to reach the heliopause and several months for the cosmic rays (CRs) to diffuse from the heliopause to Earth. Over the short time scale of days, various correlations relate CRF variations during Forbush events to various climatic variables. These events are marked by a sudden reduction in the CRF and a gradual recovery over typically a 10-day period. Tinsley & Deen (1991) have shown that the Vorticity Area Index (VAI) over oceans is correlated with Forbush decreases. Similarly, Pudovkin & Veretenenko (1995) reported a cloud cover decrease (in latitudes of 60N-64N, where it was measured) synchronized with Forbush events. Later, a link between Forbush decreases and rainfall was also claimed: Stozhkov et al. (1995) found an average 30% drop in rainfall on the initial day of a Forbush event, over 47 events recorded during 36 years in 50 meteorological stations in Brazil (statistically significant to 3σ). In Antarctica, Egorova et al. (2000) found that on the first day after a Forbush event, the temperature at Vostok station dramatically increases, by an average of 10 K. And last, Todd & Kniveton (2001) have recently shown that Forbush decreases are associated with global decreases in cloud cover. Since Forbush events are relatively short, they can be more easily used to separate a CRF/climate effect from other possibilities. The reason is that over the time scale of a few days, there is significantly less correlation between CRF variability and other solar activity indices, such as UV or protons. And indeed, Tinsley & Deen (1991) have shown that the correlation they obtained completely disappears when the VAI is compared with UV changes. Similarly, Egorova et al. (2000) have shown that there is no measurable signal in sync with solar proton events. Moreover, a climatic correlation with Forbush events cannot arise from hypersensitivity to UV, since the latter is absorbed in the stratosphere, which cannot significantly influence tropospheric weather on the time scale of days. Given the above evidence, it is therefore quite reasonable to claim that cosmic rays affect the global climate, and that the link is most likely through control of the tropospheric ionization, which modifies the cloud formation process. Although the actual link is not critical to the rest of our discussion, it is interesting that several new results indeed point to the validity of this mechanism. Two particularly interesting results should be mentioned. First, Harrison & Aplin (2001) found experimentally that the formation of cloud condensation nuclei (CCN) is correlated with natural variability in the CRF (e.g., due to statistical variability of CR air showers). In other words, this link is more than hypothetical. Second, using an advanced particle microphysics model, Yu (2002) discovered that a scenario in which the formation of CCN depends on tropospheric ionization explains several interesting observations. Yu found that at low ionization rates, CCN formation increases with increasing ionization rates. This is because the system is "ionization limited". However, at large enough ionization rates, the formation of CCN decreases.
This result, which at first might seem counter-intuitive, stems from the fact that if the ionization rate is large enough, condensation nuclei cannot grow quickly enough after they form, without discharging themselves through ion-ion recombination. The net result is that increasing the ionization
rate increases the CCN density at low altitudes, but reduces the CCN density at high altitudes. This result is interesting, as it implies that global warming induced by solar activity modulating the CRF should increase the surface temperature without changing the average tropospheric temperature by much. This offers an elegant solution as to why global warming in the past few decades was observed primarily near the surface but not by satellites, and only marginally by balloons, since greenhouse gases are expected to heat the troposphere more uniformly.

LONG TERM CRF VARIATIONS AND THE GEOLOGICAL RECORD

An altogether independent piece of evidence linking CRF variability to temperature on Earth is the circumstantial link apparent between long-term CRF variations and geological reconstructions of climate on Earth. Long-term CRF variations arise because the solar system changes its galactic neighborhood as it revolves around the galaxy. When nearby supernova events are more numerous, more cosmic rays are accelerated in our galactic vicinity and the CRF reaching Earth is correspondingly higher. Moreover, studies of spiral galaxies like our own reveal that supernovae (SNe) occur predominantly in the vicinity of the galactic spiral arms. This is unsurprising considering that spiral arms harbor most of the star formation activity, and that massive stars live relatively short lives, so that they are born and die within the spiral arm vicinity. The net result is that the CRF near spiral arms can be significantly higher, by a factor of a few, than the flux between the arms (e.g., Shaviv 2002b and references therein). Using astronomical data, the difference between the solar system's angular velocity Ω_sun around the galaxy and the spiral arm pattern speed Ω_p is found to be Ω_sun − Ω_p = 11.1 ± 2.0 (km/s)/kpc (Shaviv 2002b). Since the Milky Way has 4 spiral arms, this angular velocity difference translates to a spiral arm passage every 134 ± 24 Myr. Thus, the astronomical prediction is that the CRF reaching Earth should vary with the above period. If the CRF/climate link is real, we should see a climatic effect with the above periodicity. Figure 1 plots the predicted spiral arm passages using the astronomical data and the ensuing CRF using a model described in Shaviv (2002a,b). This is then compared with the geological sedimentation evidence for the occurrence of ice age epochs (IAEs) or warm periods (i.e., "icehouses" and "greenhouses"), as summarized by Frakes et al. (1992) and Crowell (1999). Interestingly, Frakes et al. claim that the geological data is sufficient to establish periodicity. Crowell, on the other hand, claims that the data is insufficient to make the claim of periodicity, and indeed his epochs occur less regularly. Irrespectively, both records are consistent with each other but, more importantly, they are consistent with the predicted CRF using the astronomical data. The correlation between astronomical and geological data suggests that there is indeed a causal link between galactic geography and climate. It does not point to the actual mechanism of the physical link. A smoking gun, which would point towards the CRF as the culprit, would be any historic record that can be used to reconstruct the CRF and, with it, show that the predicted periodic CRF variations indeed exist and that the
CRF varies in concordance with climate variations. Such a historic record exists in the form of exposure ages of iron meteorites¹.

Figure 1: The past Eon. Panel A describes past Galactic spiral arm crossings assuming Ω_sun − Ω_p = 10.9 (km/s)/kpc. Panel B describes the CRF reaching the solar system using a CR diffusion model (Shaviv 2002b). Note that the CRF is predicted to lag behind spiral arm crossings. This is portrayed by the hatched regions, which qualitatively show the predicted occurrences of IAEs, if the CRF required to trigger them is the average flux. Arrows mark the middle of the spiral crossing and the expected mid-glaciation point. Panel C qualitatively describes the geologically recorded IAEs - its top half as summarized by Crowell (1999), and its bottom half as summarized by Frakes et al. (1992). By fine-tuning the observed pattern speed of the arms to best fit the IAEs, an intriguing correlation appears between the IAEs and their prediction. Note that the correlation need not be absolute, since additional factors may affect the climate. Other factors that should be considered and noted in the graph are: (1) The mid-Mesozoic glaciations are significantly less extensive than others. (2) It is unclear to what extent the period around 700 Myr BP was warmer than the IAEs before or since. (3) The first IAE of the Neo-Proterozoic (if indeed distinct) is very uncertain. (4) Since Norma's crossing is an extrapolation from smaller galactic radii, its location is uncertain. If the arm's structure at smaller radii is indeed different (Shaviv 2002b), its preferred location will lag by about 20 Myr. Panel D is a 1-2-1 averaged histogram of the ⁴¹K/⁴⁰K exposure ages of Fe meteorites, which are predicted to cluster around the CRF minima. The cluster-IAE correlation further suggests an extra-terrestrial trigger for the glaciations.
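[Editorial note: the spiral-arm crossing period quoted above follows from elementary unit conversion. The short sketch below is an illustration added for this edition, not part of Shaviv's analysis; the two conversion constants are standard astronomical values.]

    import math

    # Relative angular velocity between the Sun and the spiral pattern,
    # Omega_sun - Omega_p = 11.1 +/- 2.0 (km/s)/kpc (Shaviv 2002b).
    D_OMEGA = 11.1
    D_OMEGA_ERR = 2.0

    KM_PER_KPC = 3.0857e16    # kilometres per kiloparsec
    SEC_PER_MYR = 3.156e13    # seconds per million years

    def crossing_period_myr(d_omega, n_arms=4):
        """Time between successive arm crossings, in Myr: with n_arms
        arms, consecutive crossings are 2*pi/n_arms radians apart."""
        omega_rad_per_sec = d_omega / KM_PER_KPC
        return (2 * math.pi / n_arms) / omega_rad_per_sec / SEC_PER_MYR

    p = crossing_period_myr(D_OMEGA)
    p_err = p * D_OMEGA_ERR / D_OMEGA   # period scales as 1/d_omega
    print(f"{p:.0f} +/- {p_err:.0f} Myr")
    # ~138 +/- 25 Myr with these rounded constants, consistent within
    # the stated uncertainty with the 134 +/- 24 Myr quoted in the text.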
Using a set of exposure-dated iron meteorites it is possible to reconstruct the CRF. This is because the CRF is the 'clock' used to exposure-date the meteorites. As a consequence, variations in the CRF translate into distortions in the exposure ages. Using a statistical method described in Shaviv (2002b), these distortions can be used to reconstruct the CRF variations. The principle behind this new method is the fact that during periods with a low CRF, when the exposure age 'clock' ticks slower, many meteorites cluster around the same apparent exposure ages. This is because long real time intervals elapse while the exposure clock measures only a short interval. Thus, the prediction is that a histogram of the exposure ages of iron meteorites will show clustering around epochs with a lower CRF, when a warmer climate is expected. This clustering is indeed apparent in the data (see figure 1), providing an independent link between CRF variations and climate variations on Earth. The climate history used in the above comparison was a qualitative reconstruction using geological sedimentation records. It could only indicate the existence of cold or warm periods.
¹ Chondrites are not useful for this purpose, since their typical lifetime is short.
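[Editorial note: the clustering principle lends itself to a toy simulation, sketched below for this edition. The period and amplitude are arbitrary illustrative values; Shaviv (2002b) uses a proper statistical treatment rather than this naive model.]

    import numpy as np

    P_MYR = 134.0    # assumed CRF modulation period
    AMP = 0.5        # assumed fractional CRF variation about the mean

    # Meteorites "form" at a uniform rate in real time; their exposure
    # clock accumulates in proportion to the (normalized) CRF.
    rng = np.random.default_rng(0)
    true_ages = rng.uniform(0.0, 1000.0, 20000)   # Myr before present

    def apparent_age(t):
        """Integral of phi(t) = 1 + AMP*sin(2*pi*t/P) from 0 to t."""
        w = 2.0 * np.pi / P_MYR
        return t + (AMP / w) * (1.0 - np.cos(w * t))

    counts, _ = np.histogram(apparent_age(true_ages), bins=50)
    # Bins laid down while the CRF was low (slow clock) collect far
    # more meteorites than bins laid down while it was high.
    print(counts.min(), counts.max())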
More information would be learned if we could compare the CRF variations to a quantitative paleoclimatic record. At the least, it would allow the quantification of the CRF-climate link. Using thousands of ¹⁸O/¹⁶O isotope ratio measurements of fossils from tropical oceans, Veizer et al. (2000) reconstructed the temperature over the Phanerozoic². This record was subsequently used for the quantitative comparison between CRF variations and climate (Shaviv & Veizer 2003). The results are graphically depicted in figure 2, where it is apparent that the reconstructed CRF history, using the extra-terrestrial record of meteorites, has a remarkable correlation with temperature. Since the residual in the fit bears no resemblance to the variation of atmospheric CO2, an upper limit to its effect can be deduced. Using a statistical analysis, the following conclusions can be reached:

1) The CRF is responsible for 75% of the long-term (>50 Myr) variance in the tropical temperature. Thus, the CRF is by far the most dominant climate driver over long time scales.

2) An upper limit on the effect of CO2 can be placed. At 1σ, the upper limit on the temperature increase resulting from a doubled amount of atmospheric CO2 is about 0.9°C (it is about 1.5°C and 2.6°C at the 90% and 99% confidence levels). This can be translated into a limit on the global temperature response to radiative forcing. As compared with the response of a hypothetical black body Earth, the upper limit on the response is λ < 0.75 (or λ < 1.2 or 2.2, respectively). Namely, it is more likely that Earth has negative feedbacks that stabilize its climate response more than the behavior of a black body.

² The Phanerozoic is the last 550 Myr from which fossils of multi-cellular life exist.

Figure 2: Variations in the cosmic ray flux (Φ) and tropical temperature anomaly (ΔT) over the Phanerozoic. The upper curves describe the reconstructed CRF using iron meteorite exposure age data (Shaviv 2002b). The thick solid line depicts the nominal CRF, while the shading delineates the allowed error range. The two dashed curves are additional CRF reconstructions that fit within the acceptable range. The short-dashed curve describes the nominal CRF reconstruction after its period was fine-tuned to best fit the low-latitude temperature anomaly (i.e., it is the solid-line reconstruction after the exact CRF periodicity was fine-tuned, within the CRF reconstruction error). The bottom long-dashed curve depicts the reconstructed temperature anomaly (ΔT from Veizer et al. 2000, smoothed with a 50 Myr window). The solid line is the predicted ΔT_model using the short-dashed curve above, while also allowing a secular, long-term linear contribution. The thin line is the residual. The largest residual is at 250 Myr BP, where only a few measurements of ¹⁸O exist, due to the dearth of fossils subsequent to the largest extinction event in the Earth's history. The top bars are icehouse periods according to Frakes et al.

Figure 3: Plotted are the period and phase of four independent signals; their agreement indicates that the CRF/climate link is highly probable. The phase is defined as the time t_max before present when the cosmic ray flux is largest or the climate is warmest, divided by the respective signal period (Phase = t_max/P). The four signals are: the predicted CRF from astronomical data (using measurements of the pattern speed of the galactic spiral arms and a diffusion model for the cosmic rays); the reconstruction of the cosmic ray flux in the solar system (using the exposure ages of iron meteorites); the occurrence of ice-age epochs ("icehouses") and warm periods ("greenhouses") in the geological sedimentation records (Frakes et al./Crowell); and lastly, the reconstructed tropical sea temperature (using the ¹⁸O/¹⁶O isotope ratio in fossils).
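[Editorial note: the CO2 limits in conclusion 2) are related to the feedback factor λ by simple multiplication with the blackbody response (0.30 K per W/m²) and the ~4 W/m² forcing of a CO2 doubling, both of which are quoted in the global warming section below. A one-line check, using only numbers taken from the text:]

    BLACKBODY_RESPONSE = 0.30   # K per (W/m^2), as quoted in the text
    F_2XCO2 = 4.0               # W/m^2 for doubled CO2, as quoted

    for conf, lam in [("1 sigma", 0.75), ("90%", 1.2), ("99%", 2.2)]:
        print(conf, lam * BLACKBODY_RESPONSE * F_2XCO2)
    # -> 0.90, 1.44 and 2.64 K: the ~0.9, 1.5 and 2.6 deg C upper
    #    limits quoted above, to rounding.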
Figure 3 is a graphical summary of the above correlations. It depicts the period and phase of the four independent signals, showing their mutual agreement. Two of the signals are extra-terrestrial in nature. One is the predicted CRF variations arising from spiral arm passages, as calculated using astronomical data. The other is the measured periodic CRF history using iron meteorites. The two other periodic signals describe climate variations. One is of the cold/warm epochs as apparent from the geological sedimentation record, while the other is the reconstructed tropical temperature. The fact that the four results for the phase and period are in agreement demonstrates that it is more than a statistical fluke. Moreover, since there are two independent extra-terrestrial records and two independent climate records, the result is redundant. Armed with the quantitative relation between CRF variations and temperature change, as well as with the limit on the climatic effects of variations in the amount of atmospheric CO2, we can proceed with applications of these results.

IMPLICATIONS OF THE CRF CLIMATE LINK - THE FAINT SUN PARADOX

Standard solar models predict a solar luminosity that gradually increased by about 30% over the past 4.5 billion years. Under the faint sun, Earth should have been frozen solid for most of its existence. Yet, running water is observed to have been present since very early in Earth's history. This enigma is known as the faint sun paradox. The CRF/climate link has the power to significantly attenuate this problem (Shaviv 2003). To reach this conclusion, we should consider that the young sun was rotating faster. As a result, it must have had a higher non-thermal activity and, with it, a stronger solar
wind. Since the solar wind slows the flux of galactic CRs, the stronger wind necessarily lowered the CRF reaching Earth. A lower CRF would in turn translate into higher temperatures through the CRF/climate link. Thus, a stronger solar wind should have had a warming effect that acted to compensate for the solar dimness. The effect can be quantified. To do so, one can calculate the attenuation effect of an evolving solar wind. The evolution itself can be reconstructed by comparison with the observed stellar winds of nearby solar-like stars. The slowly increasing CRF and the CRF/temperature relation can then be used to calculate the global temperature and compare it with the effect of the slowly increasing solar luminosity. The results of the detailed analysis (Shaviv 2003) are graphically depicted in figure 4. The crux of the results is the following: a) The stronger early solar wind and the ensuing reduction in the CRF can explain about two thirds of the temperature increase required to resolve the faint sun paradox. b) The remaining third or so of the discrepancy can be explained by greenhouse warming from modest quantities of atmospheric CO2 (~0.01 bar), amounts that are consistent with geological constraints. c) Once the Milky Way's star formation rate is included, the coldest temperatures are predicted to have occurred over the past Eon and between 2 and 3 Gyr before present. Interestingly, these are also the only epochs during which Earth experienced glaciations. d) The compensation between the solar luminosity increase and the CRF decrease is now reaching its end, since the solar wind reduction of the CRF is now only ~10%. This implies that the solar luminosity increase will now contribute to a slow warming trend over the coming billions of years.

Figure 4: Past and future long-term global temperature changes. Model (A) describes the temperature predicted without a CRF-climate effect, with a nominal contribution by CO2 (0.01 bar before 3 Gyr, then exponentially decreasing afterwards to current levels, and constant thereafter), and a radiative feedback parameter of λ = 2.2 (which is statistically inconsistent with the long-term temperature reconstruction at the 3σ level). Model (C) is the same as model (A), except that λ = 0.75, which is the nominal value from the temperature-CRF-CO2 comparison over the Phanerozoic. Model (B) is the same as model (C), except that it assumes a constant CO2 contribution. Model (D) and associated lines are the same as model (C), with the CRF/climate link included using the nominal parameters. The thick dashed line assumes that the CRF reaching the outskirts of the solar system is constant. The thin solid line includes the variations in the Milky Way's star formation rate, while the long-dashed line includes the range of variability associated with spiral arm passages. The dotted line denotes the total variability arising from all long-term intrinsic variations in the CRF.
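[Editorial note: for orientation, the bare size of the faint sun problem can be sketched from the L^(1/4) scaling of the equilibrium temperature, together with Gough's (1981) standard approximation for the solar luminosity history. The sketch below is added for this edition and deliberately omits the CO2 and CRF terms that models (A)-(D) include.]

    T_NOW = 288.0   # present mean surface temperature, K (illustrative)

    def luminosity_fraction(gyr_bp, t_age=4.57):
        """Gough (1981): L/L_now = 1 / (1 + 0.4*(1 - t/t_age)),
        where t is time since the Sun's birth."""
        t = t_age - gyr_bp
        return 1.0 / (1.0 + 0.4 * (1.0 - t / t_age))

    for bp in (4.0, 3.0, 2.0, 1.0, 0.0):
        L = luminosity_fraction(bp)
        # Fixed albedo and greenhouse: T scales as L**0.25,
        # i.e. dT/T = dL/(4L) for small changes.
        print(f"{bp:.0f} Gyr BP: L = {L:.2f} L_now, "
              f"T ~ {T_NOW * L**0.25:.0f} K")
    # ~74% of the present luminosity 4 Gyr ago lowers T by ~20 K,
    # to below freezing: the gap the CRF and CO2 terms must close.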
IMPLICATIONS OF THE CRF CLIMATE LINK - UNDERSTANDING GLOBAL WARMING

Doubling the amount of atmospheric CO2 results in a radiative forcing of about 4 W/m² (e.g., IPCC 2001). This is hard to dispute. The big uncertainty in climate dynamics, however, is the global temperature response to this change in the energy budget. For a blackbody Earth, this response is 0.30 K/(W/m²). Earth, however, does not behave as a black body, and various positive and negative feedbacks act to either amplify or decrease this number by a factor λ. Values obtained in global circulation models (GCMs) range between λ = 2 and 4.5, i.e., positive feedbacks act to significantly amplify the temperature response (e.g., IPCC 2001). There are, however, debated claims that various negative feedbacks should be considered, yielding a much smaller λ of 0.5-1.3 (Lindzen et al. 2001), or even ~0.2 (Ou 2001). This number was also obtained from analyses of actual temperature variations. For example, from the rather quick return of the global temperature to its "average" after a series of volcanic eruptions, it was deduced that λ ~ 0.5, with λ > 1 being inconsistent with the effects of volcanic driving (Lindzen & Giannitsis 1998). On the other hand, Gregory et al. (2002) find that a lower limit of λ = 1.3 (at 95% confidence) can be placed by considering the increase in the ocean heat content. Another constraint was obtained by Covey et al. (1996), who compared CO2 changes to temperature variations over geological time scales (in particular, the last glacial maximum, the Cretaceous and the early Eocene). The authors of this study found that λ is consistent with the range of values obtained in GCMs. However, the latter two constraints were obtained while neglecting the possible forcing of the CRF and the non-thermal solar activity. Thus, if the CRF/climate link is real, these two constraints are not applicable anymore. In the previous sections, we have seen that once CRF variations are considered over geological time scales, a marked correlation is obtained between the CRF as a driver and temperature change. This correlation, combined with the lack of correlation with CO2 variations, yields λ < 0.75 (or λ < 1.2 or 2.2 at the 90% or 99% confidence level). Thus, if the CRF/climate connection is valid, which appears to be the case from the high CRF/ΔT correlation, we should consider λ ~ 0.75 as the most probable value. What does this result imply in the context of global warming? According to the IPCC, the radiative forcing due to anthropogenic emissions since the 1930's³ is about 1 W/m² (IPCC 2001). This does not include indirect forcing by the sun (through solar modulation of the CRF or other possible effects). The largest uncertainty in this number is the possible indirect effect of aerosols. For a GCM-type climate response of λ ~ 2.5, this radiative forcing should have contributed a global warming of about 0.75 K. However, this assumes an equilibrium response. In reality, the oceans have a high heat capacity and it takes centuries for Earth to reach equilibrium. On the short time scale, only about two thirds of this increase should appear (e.g., IPCC 2001). Thus, 1 W/m² and λ ~ 2.5 can explain the 0.4-0.5 K global warming observed since the 1930's. This is the standard lore. However, the CRF climate link is only consistent with a small value of λ. If we take λ ~ 0.75, the global warming associated with the "standard" contributions is only about 0.23 K in equilibrium, or 0.17 K over short time scales; that is, only about a third of the observed warming appears to arise from human activity and other standard contributions to the radiative forcing (e.g., as mentioned in the IPCC report, 2001). Where do the other two thirds come from? By comparing the CRF to temperature change over the Phanerozoic, it was obtained that a CRF increase of 1% should translate into a temperature decrease of about 0.10 ± 0.05 K. Over short time scales, the CRF variations are not "intrinsic" but instead arise from solar modulation, which is energy dependent. When trying to predict the temperature increase associated with the increased solar activity over the past century, we should therefore use the CRF energies that are most likely responsible for the apparent climate effect. These are the energies penetrating the troposphere and reaching low geomagnetic latitudes, because cloud microphysics is most likely governed by the low altitude tropospheric ionization rate. Thus, the flux measured at the University of Chicago Neutron Monitor Stations in Haleakala, Hawaii, and Huancayo, Peru, is probably a fair measurement of the flux affecting the climate. Both stations are at an altitude of about 3 km and relatively close to the magnetic equator (rigidity cutoff of 12.9 GeV). Typical variations of about 8% are generally recorded in the above stations between solar minimum and solar maximum. Since neither station was active before the 1950's, we can use ion chamber data for comparison. Ion chambers record typical variations of 2% over the solar cycle, while the average ion chamber count decreased by 1.0% over the past 70 years. This implies that at the energies most likely responsible for the climate effect, the decrease was about 4%. Using the result of Shaviv & Veizer (2003), this translates into a temperature increase of 0.4 ± 0.2 K assuming equilibrium, or, more realistically, to about a 0.27 ± 0.13 K increase over the past 70 years. Thus, the result obtained by comparing CRF variations to temperature change over the past 550 million years first implies that climate has a low response to radiative forcing, such that only about a third of the increase should be attributed to anthropogenic gases and other "standard" radiative forcings. Secondly, the CRF-temperature relation implies that about two thirds should be attributed to the increased solar activity, which lowered the CRF reaching the Earth. Together, the observed global warming is consistently explained. Interestingly, this result was already independently reached by other means. Since the observed global warming over the past century was not monotonic - it decreased between the 1940's and 70's - one can fit the global warming to an anthropogenic contribution (which was increasing monotonically) and a solar activity contribution (which also decreased between the 1940's and 70's). This was done by Soon et al. (1996), who found that the best fit arises when somewhat less than half of the increase is attributed to an anthropogenic cause, while somewhat more than a half is attributed to solar activity, without specifying the physical link.

³ We use the 1930's for comparison, because the high-energy CRF data required for comparison is not available before then.
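[Editorial note: the attribution budget developed in this section reduces to a few multiplications. The sketch below was added for this edition to make the bookkeeping explicit; every number in it is taken from the discussion above.]

    BLACKBODY_RESPONSE = 0.30  # K per (W/m^2)
    TRANSIENT = 2.0 / 3.0      # fraction of equilibrium warming realised

    def warming(forcing_wm2, lam):
        """Transient warming for a given forcing and feedback factor."""
        return lam * BLACKBODY_RESPONSE * forcing_wm2 * TRANSIENT

    # "Standard" (mostly anthropogenic) forcing since the 1930's: ~1 W/m^2.
    print(warming(1.0, 2.5))    # ~0.50 K: the standard (GCM-like) account
    print(warming(1.0, 0.75))   # ~0.15 K with the CRF-derived lambda

    # Secular CRF decrease at climatically relevant energies: the 1.0%
    # ion-chamber decrease scaled by the 8%/2% neutron-monitor to
    # ion-chamber solar-cycle ratio gives ~4%.
    crf_decrease = 1.0 * (8.0 / 2.0)
    solar = crf_decrease * 0.10 * TRANSIENT  # 0.10 K per 1% CRF change
    print(solar)                # ~0.27 K
    # Total ~0.4 K, consistent with the 0.4-0.5 K observed warming.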
SUMMARY

Ample evidence links CRF variations to climate change on Earth. A partial list of results is summarized in Table 1. The result by Shaviv & Veizer (2003) is particularly important, as it quantifies the CRF/temperature relation and can be used to place a limit on the climate response to the radiative forcing of CO2 variations. It was found that periodic CRF variations arise from galactic spiral arm passages, giving rise to a clear periodic temperature variation over the past 550 million years. Specifically, the CRF variations can explain about 75% of the total temperature variance observed. Since CO2 is expected to have varied over the Phanerozoic as well, it should have left a mark in the temperature signal. Since no clear temperature variations arising from CO2 variability are apparent, an upper limit to the effect of CO2 can be placed.

Table 1: Published results linking CRF variations to climate (not inclusive).

Result: A correlation between ¹⁸O/¹⁶O in stalagmites in a cave in Oman and ¹⁴C in the atmosphere (Neff et al. 2001).
Meaning: By far the nicest correlation between solar activity (of which ¹⁴C is a proxy) and global temperature (to which the ¹⁸O/¹⁶O isotope ratio is sensitive). It implies that a robust link between solar activity and climate should exist.

Result: A correlation between low altitude cloud cover and CRF variations over the solar cycle (Svensmark 2000).
Meaning: The CRF appears to affect climate through modification of the cloud formation process, and it provides a natural link between solar activity, which modulates the CRF, and climate. The lag of both the CRF and cloud cover signals behind solar activity supports the idea that the solar/climate link is through CRs, as opposed to other solar/climate links.

Result: A correlation between Forbush events and the vorticity area index (Tinsley & Deen 1991).
Meaning: This link is a stronger indication that climate is affected by the CRF, because there is a lack of any correlation with solar activity indices. (Namely, it is not just a lag between CRF variations and solar activity, as obtained over the solar cycle.)

Result: A correlation between Forbush decreases in the CRF and reductions in global cloud cover (Todd & Kniveton 2001).
Meaning: This result reaffirms that the CRF/climate link is most likely through the modification of the global cloud cover, and it again supports the role of the CRF in the solar/climate link, since no direct correlation was found with solar activity.

Result: A correlation between spiral arm passages, the reconstructed CRF history using iron meteorites, and ice-age epochs on Earth (Shaviv 2002a,b).
Meaning: The result decouples CRF variations from solar variability, since the former are intrinsic. This firmly convicts the CRF as a climate driver. It also shows that the CRF/climate link does not saturate at small CRF variations, and the effect can be large. It also offers a solid explanation for the almost periodic occurrence of ice-age epochs.

Result: A correlation between the reconstructed CRF history and temperature on Earth (Shaviv & Veizer 2003).
Meaning: The correlation between CRF variations and temperature variability quantifies the CRF/climate link, thereby allowing predictions using this link. The lack of any correlation with CO2 variations is used to place an upper limit on the climate response to CO2.

Result: An experimental correlation between CRF variations and cloud condensation nuclei (Harrison & Aplin 2001).
Meaning: This innovative experimental result demonstrates that a higher CRF gives rise to higher concentrations of cloud condensation nuclei. It supports the CRF/ionization/cloud-condensation picture, which hitherto was almost exclusively theoretical.

Result: A cloud microphysics simulation by Yu (2002).
Meaning: The numerical simulations by Yu have shown that the formation of cloud condensation nuclei should be affected by the CRF. The detailed predictions explain why low altitude clouds are most sensitive to the CRF and why most of the global warming over the past century was measured near the surface but not in the upper troposphere.
The normalized CRF/temperature relation can be used to explain about two thirds of the faint sun paradox. Because the sun was more active in its youth, its stronger solar wind was responsible for preventing most of the CRF from reaching Earth. Through the CRF/climate effect, this translated into elevated temperatures, which tended to compensate for the effect of the dim sun. The effect can only explain about two thirds of the discrepancy; the remaining third can be naturally explained by a modest contribution of atmospheric CO2. A second application of the limit on the climatic response to radiative forcing, and of the relation between CRF and temperature variations, was to understand the global warming observed over the past century. It was found that the combined astronomical/geological analysis implies that about two thirds of the global warming over the past century should be attributed to increased solar activity (which reduced the CRF reaching Earth), while only about a third should be attributed to "standard" radiative forcings, arising primarily from anthropogenic activity. This research was supported by the F.I.R.S.T. (Bikura) program of the Israel Science Foundation (grant no. 4048/03).

REFERENCES:
1. Bazilevskaya, G.A., Sp. Sci. Rev. 94, 25 (2000)
2. Beer, J., W. Mende, R. Stellmacher, Quat. Sci. Rev. 19, 403 (2000)
3. Covey, C., L.C. Sloan, M.I. Hoffert, Climate Change 32, 165 (1996)
4. Crowell, J.C., Pre-Mesozoic Ice Ages: Their Bearing on Understanding the Climate System, Vol. 192, Memoir Geological Society of America (1999)
5. Eddy, J., Science 192, 1189 (1976)
6. Egorova, L.Y., V.Ya. Vovk, O.A. Troshichev, J. Atmos. Solar-Terr. Phys. 62, 955 (2000)
7. Frakes, L.A., E. Francis, J.I. Syktus, Climate Modes of the Phanerozoic: The History of the Earth's Climate Over the Past 600 Million Years (Cambridge University Press, Cambridge, UK, 1992)
8. Friis-Christensen, E. and K. Lassen, Science 254, 698 (1991)
9. Gregory, J.M., R.J. Stouffer, S.C.B. Raper, P.A. Stott, N.A. Rayner, J. Climate 15, 3117 (2002)
10. Haigh, J.D., Science 272, 981 (1996)
11. Harrison, R.G. and K.L. Aplin, J. Atmos. Terr. Phys. 63, 1811 (2001)
12. Herschel, W., Philosophical Transactions of the Royal Society, p. 166 (1796)
13. Hodell, D.A., M. Brenner, J.H. Curtis, J.H. Guilderson, Science 292, 1367 (2001)
14. IPCC - Intergovernmental Panel on Climate Change, Climate Change 2001: The Scientific Basis (Cambridge University Press, Cambridge, UK, 2001)
15. Labitzke, K. and H. van Loon, J. Clim. 5, 240 (1992)
16. Lindzen, R.S., M.-D. Chou, A.Y. Hou, Bull. Am. Met. Soc. 82, 417 (2001)
17. Lindzen, R.S. and C. Giannitsis, J. Geophys. Res. 103, 5929 (1998)
18. Neff, U., et al., Nature 411, 290 (2001)
19. Ney, E.P., Nature 183, 451 (1959)
20. Ou, H.-W., J. Climate 14, 2976 (2001)
21. Palle Bago, E. and J. Butler, Astron. Geophys. 41, 18 (2000)
22. Pudovkin, M.I. and S.V. Veretenenko, J. Atmos. Terr. Phys. 57, 1349 (1995)
23. Shaviv, N.J., Phys. Rev. Lett. 89, 051102 (2002a)
24. Shaviv, N.J., New Astron. 8, 39 (2002b)
25. Shaviv, N.J. and J. Veizer, GSA Today, July, p. 4 (2003)
26. Shaviv, N.J., submitted to J. Geophys. Res.-Space (2003)
27. Soon, W.H., E.S. Posmentier, S.L. Baliunas, Astrophys. J. 472, 891 (1996)
28. Soon, W.H., E.S. Posmentier, S.L. Baliunas, Annales Geophysicae 18, 583 (2000)
29. Stozhkov, Yu.I. et al., Il Nuovo Cimento C 18, 335 (1995)
30. Svensmark, H., Phys. Rev. Lett. 81, 5027 (1998)
31. Svensmark, H., Sp. Sci. Rev. 93, 175 (2000)
32. Tinsley, B.A. and G.W. Deen, J. Geophys. Res. 96, 22,283 (1991)
33. Todd, M.C. and D.R. Kniveton, J. Geophys. Res.-Atmos. 106, 32031 (2001)
34. Veizer, J., Y. Godderis, L.M. François, Nature 408, 698 (2000)
35. Yu, F., J. Geophys. Res. 107, SIA 8-1 (2002)
CLIMATE CHANGE EFFECTS ON SPECIES AND BIODIVERSITY

A. TOWNSEND PETERSON
Natural History Museum and Biodiversity Research Center, The University of Kansas, Lawrence, USA

INTRODUCTION: CLIMATE CHANGE AS A REALITY

Perusal of recent issues of any number of scientific journals reveals numerous contributions treating climate change ([Anon] 2003, Gong and Shi 2003, Rahmstorf 2003, Rosqvist and Schuber 2003, Steemers 2003). In general, observations indicate dramatic warming trends across recent decades, changes in precipitation, and increasing frequencies of extreme events (Houghton et al. 2001). A broad initiative of modeling climate futures using general circulation models (GCMs) has yielded a rich suite of future-climate projections based on different scenarios of atmospheric composition (Flato et al. 1999, Nakicenovic and Swart 2000, Pope et al. 2002). Both the reality of these changes and their causes have been the subject of considerable controversy. Regarding the former, although some details are debatable, the reality of trends post-1960 is difficult to dispute (Houghton et al. 2001). Regarding the latter, however, the issue is less clear: whether climate change is caused by anthropogenic activities or not has been debated extensively (Laut 2003, Solanki and Krivova 2003), not only in scientific circles, but also in political circles. In general, though, an enormous body of evidence now points towards (1) the observed changes being real long-term trends, and (2) anthropogenic influences in the form of increasing 'greenhouse gas' concentrations being the primary cause of the changes (Houghton et al. 2001). For the purposes of this contribution, nonetheless, the root causes are less important than the fact that the changes are taking place; I focus on the implications of changing climates for elements of biodiversity.

THEORETICAL EXPECTATIONS FOR BIODIVERSITY

Biological diversity ("biodiversity," for short) exists in a context of climates and landscapes, and species are well known to respond to climatic factors in their phenology and geographic distributions (Grinnell 1917, 1924, MacArthur 1972, Brown 1995). In general, for geographic phenomena, we can refer to the concept of an ecological niche, defined as the conjunction of conditions within which a species can maintain populations without immigration (Grinnell 1917, 1924). This concept differs from later versions, which focused more on the role of species in ecosystems (Hutchinson 1957, MacArthur 1972), but offers a clear geographic perspective on species' ecology and distributions. In brief, the ecological niche serves as one important constraint on species' geographic potential. Recent work in theoretical ecology and evolution has treated the evolution of ecological niche characters quantitatively, and has led to a remarkable convergence of thinking among numerous theoretical ecologists (Brown and Pavlovic 1992, Holt and Gaines 1992, Kawecki and Stearns 1993, Kawecki 1995, Holt 1996a, b, Holt and Gomulkiewicz 1996). Because populations existing under conditions outside of a species'
niche are expected to decline and have greatly reduced fitness, they contribute little to species' evolutionary dynamics. As a result, stabilizing selection is expected to be strong, producing conservatism in ecological niche characteristics in most situations. Although niches obviously do evolve (or all species would share the same niche!), this prediction suggests that species will frequently not change dramatically in their ecological profiles, particularly over short periods of time (as are important in considerations of climate change). Empirical results relevant to this question are now accumulating as well; in general, they amply confirm theoretical expectations. Tests developed to date include predictivity of the geographic potential of species' invasions in non-native areas (Peterson and Vieglais 2001, Peterson In press), predictivity of species' range shifts during past climatic shifts (Martinez-Meyer 2002), predictivity among sister species pairs (Peterson et al. 1999), and predictivity across phylogeny (Rice et al. In press). In general, and not without exceptions (Peterson and Holt In press), the general prediction of conservatism in species' ecological niche characteristics is confirmed. Species' ecological niche characteristics thus constitute long-term, stable constraints on their distributional potential. Given this tight relationship between species' geography and features of climate, what are the theoretical expectations for species' responses to changing climates? In general, for a population residing in an area with a changing climate, three possible outcomes can be considered (Holt 1990):

1. Track appropriate conditions - Here, the dispersal abilities of a population permit invasion of new areas as they become habitable, and populations 'left behind' in the previous distributional areas decline towards extirpation. This tracking can be in spatial (range shifts) or temporal (phenological shifts) dimensions.

2. Evolve to meet the new conditions - Here, the population has sufficient genetic variation to permit adaptive evolution of niche characteristics, and populations adjust to changing conditions without range shifts.

3. Go extinct - Failing the two previous options, populations are located at sites with conditions falling outside of their ecological niche envelopes, and populations decline towards zero.

These three possibilities represent the range of reactions for species under changing climates, and thus will provide a framework for the rest of this contribution. Indeed, they can be translated into more concrete expectations of species' responses to climate change:

1. Poleward and upward expansion of populations - With warming trends, in general, conditions will improve at the poleward and upward (elevational) limits of species' geographic distributions.

2. Equator-side and lower-elevation extirpation of populations - With general warming trends, conditions will usually worsen for a species along the equator-side and lower-elevation border of its range.

3. Phenological shifts towards earlier timing - For species that emerge, breed, or whatever in spring, after a cold winter period, phenological shifts earlier in the season are expected. For species responding to other suites of factors (e.g., onset of rains), phenological shifts may occur, but are less generalizable as to the form that they will take.
4. Extinction in bounded situations - Assuming that niche evolution is relatively uncommon, when species are prevented from dispersing, they are expected to go extinct. Particularly frequent will be extinctions in bounded situations (mountaintops, islands, etc.) in which no dispersal options exist.
These expectations are derived directly from theoretical considerations, and provide a general rubric within which climate change effects on biodiversity can be considered.
OBSERVED EFFECTS ON BIODIVERSITY

The scientific literature is replete with examples of observed changes in phenology or distribution of species that are attributed to the effects of climate change (Bethke and Nudds 1995, Parmesan 1996, Allen and Breshears 1998, Inouye et al. 2000, Linderholm 2002, Walther et al. 2002, Beaugrand and Reid 2003, Cresswell and McCleery 2003, Crozier 2003, Inouye et al. 2003, Perfors et al. 2003, Saavedra et al. 2003). Given that species' distributions are well known to be fluid and changeable, some caution is merited in interpreting these attributions. Nevertheless, more detailed and controlled meta-analyses have confirmed that a great number of shifts are occurring and that they fit the generalities expected of climate-driven effects on species (Parmesan et al. 1999, Parmesan and Yohe 2003). Regarding the three theoretical expectations listed above, each has seen detailed consideration. Distributional shifts have been documented amply, as have phenological shifts, with species emerging, flowering, or breeding earlier with warming climates (Walther et al. 2002). Although these shifts are far from universal (Peterson 2003b), the overall pattern of changes nevertheless fits the specific climate change-related expectations listed above. Perhaps most interesting, however, are the cases of constraint that are now emerging. For instance, in a particularly well-known case, amphibians in many regions are experiencing catastrophic declines (Pounds and Crump 1994) at least in part attributable to aspects of changing climates. In at least one case (Pounds et al. 1999), these declines have been absolute, leading to the extinction of an amphibian species; this system was bounded, in the sense that the conjunction of forest and cloud layers that had provided the conditions necessary for the species to persist no longer existed. Other systems are now providing less dramatic but even better documented examples, in which evolutionary and/or dispersal capacities are likely to be insufficient to prevent extinctions or broad population losses (Etterson and Shaw 2001, Nadkarni and Solano 2002, Hoffmann et al. 2003, Zacherl et al. 2003).

PROJECTED EFFECTS ON BIODIVERSITY

Observed effects of climate change on species are fascinating, and provide key documentation that theoretical expectations are more than just imagination; rather, they are the first indications of what will likely prove to be a major concern in coming decades. While these observations are increasingly common, and have fit well with a priori predictions, they nevertheless do not provide the whole picture: if these changes are but the first steps, what form will subsequent steps take? That is to say, observed changes do not show us the whole picture of biodiversity consequences of climate change, and as a result we must seek other, more anticipatory, means of understanding the dimensions of the challenge.
Such anticipatory (predictive) approaches to understanding climate change effects on biodiversity have taken two general paths: modeling ecosystems and natural processes (not directly the focus of this paper), versus modeling individual species via ecological niche modeling (Peterson et al. In press). In the first case, a long evolution of increasingly complex models has produced an impressive literature regarding expected shifts and extents of ecosystems around the world (Prentice and Webb 1989, Neilson et al. 1992, Melillo et al. 1996, Neilson et al. 1998, Tian et al. 1998). Biogeography models evolved from statistically based climate-vegetation classification approaches to process-based biome models (Box 1981). Newer ecosystem/biome models are increasingly process-based, rather than just based on statistical correlations in empirical data (Neilson et al. 1992, Prentice et al. 1992, Woodward et al. 1998). Most recently, however, a generation of dynamic global vegetation models has emerged (Prentice and Webb 1989), in which process simulation is even more biologically realistic, and results have been shown to be increasingly predictive of current patterns of ecosystem distribution. This direction of research, however, does not maintain a direct connection to individual species (the base elements of biodiversity), and so will not be explored further in this contribution.

The second approach to anticipating climate change effects focuses on individual species' ecological needs (modeling the ecological niche), and then uses the previous results demonstrating ecological niche conservatism to project ecological niche characteristics forward onto future climatic conditions. Ecological niches are modeled using a variety of tools, including statistical approaches (Huntley et al. 1995) and machine-learning approaches (Peterson et al. 2001), among others. While early implementations simply used imagined scenarios (e.g., +1 °C temperature, -5% precipitation) (Carey and Brown 1994), recent work has used specific GCM scenarios as the basis for more detailed projections (Peterson et al. 2001, Bakkenes et al. 2002, Berry et al. 2002, Midgley et al. 2002, Peterson et al. 2002, Midgley et al. 2003, Peterson 2003a, Siqueira and Peterson 2003). A minimal sketch of this projection step is given below.
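To make the projection step concrete, the following minimal sketch fits a simple rectilinear ("BIOCLIM"-style) climatic envelope to presence records and projects it onto present and future climate grids. It illustrates the general idea only, not any of the specific methods cited above; all data values, grid cells, and variable names are hypothetical, and real studies use many more occurrence points, more climate dimensions, and more sophisticated (e.g., machine-learning) algorithms.

```python
import numpy as np

# Hypothetical presence records: rows are localities where the species has
# been observed, columns are climate variables sampled there
# (here: annual mean temperature in deg C, annual precipitation in mm).
presence_climate = np.array([
    [14.2, 620.0],
    [15.1, 580.0],
    [13.8, 700.0],
    [16.0, 540.0],
])

# Current and future (e.g., GCM-scenario) climate values for three
# hypothetical grid cells, with the same two variables per cell.
current_grid = np.array([[14.5, 600.0], [18.0, 400.0], [13.0, 750.0]])
future_grid = np.array([[16.5, 520.0], [20.0, 350.0], [15.0, 640.0]])

def bioclim_envelope(presences, percentile=5.0):
    """Fit a rectilinear envelope: the modeled niche spans the lower to
    upper percentile of each climate variable across presence records."""
    lo = np.percentile(presences, percentile, axis=0)
    hi = np.percentile(presences, 100.0 - percentile, axis=0)
    return lo, hi

def predict_suitable(grid, lo, hi):
    """A cell is 'suitable' only if every variable falls inside the envelope."""
    return np.all((grid >= lo) & (grid <= hi), axis=1)

lo, hi = bioclim_envelope(presence_climate)
# Niche conservatism is the key assumption: the same fitted envelope is
# projected onto both present-day and future climate surfaces.
print("suitable now:   ", predict_suitable(current_grid, lo, hi))
print("suitable future:", predict_suitable(future_grid, lo, hi))
```

Comparing the present and future suitability maps cell by cell is what yields the projected range shifts, retractions, and potential extinctions discussed next.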
These species-specific approaches have now become sufficiently extensive as to provide some first indications of generalities that can be expected of climate change effects on biodiversity:
1. Extinctions - Species-specific models suggest that numbers of species likely to go extinct as a direct consequence of climate change effects vary dramatically among regions and among taxa, and can range from almost nil up to 40-50% in worst cases (Siqueira and Peterson 2003).
2. Idiosyncratic responses - Although on average future projections of species' distributions obey the tendencies described above, many exceptions exist, emphasizing that species' responses to climate change will be idiosyncratic and individualistic (Peterson 2003a).
3. Montane versus flatlands effects - A trend that appears to be emerging, although data are preliminary, is that montane systems will be considerably better buffered against horizontal climate change consequences than will be flatlands regions (Peterson 2003a). For instance, the Great Plains in the United States and the Cerrado in Brazil are both predicted to see rather severe climate change consequences for species (Peterson 2003a, Siqueira and Peterson 2003), whereas Mexico and the Rocky Mountains in the United States are expected to see less serious consequences (Peterson et al. 2002, Peterson 2003a).
4. Extinctions versus rearrangements - Although several major climate change impact surveys have indicated that numbers of species likely to go extinct as a direct result of climate change effects may prove relatively low, most species are expected to retract in some parts of their distributions and expand in others, which will create many new ecological situations: communities with unknown characteristics and behavior.
5. Unknown environmental conditions - A major unknown in the implementation of these species-level models is how to treat new environmental combinations that emerge in future climate scenarios. This question is clearly related in many ways to the question of species' adaptability.
In sum, single-species modeling has now been implemented for sufficient regions and taxa as to permit a few first generalizations. Much more work, of course, remains to be done in order to provide a predictive view of likely future effects of climate change on biodiversity.

SYNTHESIS

Biodiversity depends critically on the distribution and configuration of climates across the Earth's surface. Whereas climate change as a result of anthropogenic greenhouse gas emissions may be debatable, that climates are reorganizing worldwide is more difficult to deny. Biological species are, as expected, moving along with the climatic conditions, producing many range shifts and phenological changes that follow basic expectations about the behavior of biodiversity in a warming climate. Predictions and extrapolations for species' geography based on climate change expected under global climate model scenarios suggest strongly that the observed shifts are but the first step in a dramatic, broad-scale reorganization of global distributions of biodiversity. As a consequence, many shifts, disappearances, and invasions are expected as biodiversity responds to changing Earth climates.

ACKNOWLEDGMENTS

I thank valued colleagues David Vieglais, Enrique Martinez-Meyer, Ricardo Scachetti Pereira, and Adolfo Navarro-Sigüenza for many helpful discussions and ideas. This research was supported by grants from the U.S. National Science Foundation.
REFERENCES:
1. Allen, C. D., and D. D. Breshears. 1998. Drought-induced shift of a forest-woodland ecotone: Rapid landscape response to climate variation. Proceedings of the National Academy of Sciences USA 95:14839-14842.
2. [Anon]. 2003. Congress considers climate change. Chemical Engineering Progress 99:23-23.
3. Bakkenes, M., J. R. M. Alkemade, F. Ihle, R. Leemans, and J. B. Latour. 2002. Assessing effects of forecasted climate change on the diversity and distribution of European higher plants for 2050. Global Change Biology 8:390-407.
4. Beaugrand, G., and P. C. Reid. 2003. Long-term changes in phytoplankton, zooplankton and salmon related to climate. Global Change Biology 9:801-817.
5. Berry, P. M., T. P. Dawson, P. A. Harrison, and R. G. Pearson. 2002. Modelling potential impacts of climate change on the bioclimatic envelope of species in Britain and Ireland. Global Ecology and Biogeography 11:453-462.
6. Bethke, R. W., and T. D. Nudds. 1995. Effects of climate change and land use on duck abundance in Canadian prairie-parklands. Ecological Applications 5:588-600.
7. Box, E. O. 1981. Macroclimate and plant forms: an introduction to predictive modeling in phytogeography. Junk, The Hague, Netherlands.
8. Brown, J. H. 1995. Macroecology. University of Chicago Press, Chicago.
9. Brown, J. S., and N. B. Pavlovic. 1992. Evolution in heterogeneous environments: Effects of migration on habitat specialization. Evolutionary Ecology 6:360-382.
10. Carey, P. D., and N. J. Brown. 1994. The use of GIS to identify sites that will become suitable for a rare orchid, Himantoglossum hircinum L., in a future changed climate. Biodiversity Letters 2:117-123.
11. Cresswell, W., and R. McCleery. 2003. How great tits maintain synchronization of their hatch date with food supply in response to long-term variability in temperature. Journal of Animal Ecology 72:356-366.
12. Crozier, L. 2003. Winter warming facilitates range expansion: cold tolerance of the butterfly Atalopedes campestris. Oecologia 135:648-656.
13. Etterson, J. R., and R. G. Shaw. 2001. Constraint to adaptive evolution in response to global warming. Science 294:151-153.
14. Flato, G. M., G. J. Boer, W. G. Lee, N. A. McFarlane, D. Ramsden, M. C. Reader, and A. J. Weaver. 1999. The Canadian Centre for Climate Modelling and Analysis Global Coupled Model and its climate. Climate Dynamics, in press.
15. Gong, D. Y., and P. J. Shi. 2003. Northern hemispheric NDVI variations associated with large-scale climate indices in spring. International Journal of Remote Sensing 24:2559-2566.
16. Grinnell, J. 1917. Field tests of theories concerning distributional control. American Naturalist 51:115-128.
17. Grinnell, J. 1924. Geography and evolution. Ecology 5:225-229.
18. Hoffmann, A. A., R. J. Hallas, J. A. Dean, and M. Schiffer. 2003. Low potential for climatic stress adaptation in a rainforest Drosophila species. Science 301:100-102.
19. Holt, R. D. 1990. The microevolutionary consequences of climate change. Trends in Ecology and Evolution 5.
20. Holt, R. D. 1996a. Adaptive evolution in source-sink environments: Direct and indirect effects of density-dependence on niche evolution. Oikos 75:182-192.
21. Holt, R. D. 1996b. Demographic constraints in evolution: Towards unifying the evolutionary theories of senescence and niche conservatism. Evolutionary Ecology 10:1-11.
22. Holt, R. D., and M. S. Gaines. 1992. Analysis of adaptation in heterogeneous landscapes: Implications for the evolution of fundamental niches. Evolutionary Ecology 6:433-447.
23. Holt, R. D., and R. Gomulkiewicz. 1996. The evolution of species' niches: A population dynamic perspective. Pages 25-50 in H. G. Othmer, F. R. Adler, M. A. Lewis, and J. C. Dallon, editors. Case Studies in Mathematical Modeling: Ecology, Physiology and Cell Biology. Prentice-Hall, Saddle River, N.J.
24. Houghton, J. T., Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, X. Dai, K. Maskell, and C. A. Johnson, editors. 2001. Climate Change 2001: The Scientific Basis. Cambridge University Press, Cambridge.
25. Huntley, B., P. M. Berry, W. Cramer, and A. P. McDonald. 1995. Modelling present and potential future ranges of some European higher plants using climate response surfaces. Journal of Biogeography 22:967-1001.
26. Hutchinson, G. E. 1957. Concluding remarks. Cold Spring Harbor Symposia on Quantitative Biology 22:415-427.
27. Inouye, D. W., B. Barr, K. B. Armitage, and B. D. Inouye. 2000. Climate change is affecting altitudinal migrants and hibernating species. Proceedings of the National Academy of Sciences USA 97:1630-1633.
28. Inouye, D. W., F. Saavedra, and W. Lee-Yang. 2003. Environmental influences on the phenology and abundance of flowering by Androsace septentrionalis (Primulaceae). American Journal of Botany 90:905-910.
29. Kawecki, T. J. 1995. Demography of source-sink populations and the evolution of ecological niches. Evolutionary Ecology 9:38-44.
30. Kawecki, T. J., and S. C. Stearns. 1993. The evolution of life histories in spatially heterogeneous environments: Optimal reaction norms revisited. Evolutionary Ecology 7:155-174.
31. Laut, P. 2003. Solar activity and terrestrial climate: an analysis of some purported correlations. Journal of Atmospheric and Solar-Terrestrial Physics 65:801-812.
32. Linderholm, H. W. 2002. Twentieth-century Scots pine growth variations in the central Scandinavian Mountains related to climate change. Arctic Antarctic and Alpine Research 34:440-449.
33. MacArthur, R. 1972. Geographical Ecology. Princeton University Press, Princeton, N.J.
34. Martinez-Meyer, E. 2002. Evolutionary Trends in Ecological Niches of Species. Ph.D. dissertation. University of Kansas, Lawrence, Kansas.
35. Melillo, J. M., I. C. Prentice, G. D. Farquhar, E.-D. Schulze, and O. E. Sala. 1996. Terrestrial biotic responses to environmental change and feedbacks to climate. Pages 444-481 in J. T. Houghton, L. G. Meira Filho, B. A. Callander, N. Harris, A. Kattenberg, and K. Maskell, editors. Climate Change 1995: The Science of Climate Change. Cambridge University Press, Cambridge.
36. Midgley, G. F., L. Hannah, D. Millar, M. C. Rutherford, and L. W. Powrie. 2002. Assessing the vulnerability of species richness to anthropogenic climate change in a biodiversity hotspot. Global Ecology and Biogeography 11:445-451.
37. Midgley, G. F., L. Hannah, D. Millar, W. Thuiller, and A. Booth. 2003. Developing regional and species-level assessments of climate change impacts on biodiversity in the Cape Floristic Region. Biological Conservation 112:87-97.
38. Nadkarni, N. M., and R. Solano. 2002. Potential effects of climate change on canopy communities in a tropical cloud forest: An experimental approach. Oecologia 131:580-586.
39. Nakicenovic, N., and R. Swart, editors. 2000. Emissions Scenarios: A Special Report of Working Group III of the Intergovernmental Panel on Climate Change. Cambridge University Press, Cambridge, U.K.
40. Neilson, R. P., G. A. King, and G. Koerper. 1992. Toward a rule-based biome model. Landscape Ecology 7:27-43.
41. Neilson, R. P., I. C. Prentice, and B. Smith. 1998. Simulated changes in vegetation distribution under global warming. Pages 439-456 in R. T. Watson, M. C. Zinyowera, R. H. Moss, and D. J. Dokken, editors. The Regional Impacts of Climate Change: An Assessment of Vulnerability. Cambridge University Press, Cambridge.
42. Parmesan, C. 1996. Climate and species' range. Nature 382:765-766.
43. Parmesan, C., N. Ryrholm, C. Stefanescu, J. K. Hill, C. D. Thomas, H. Descimon, B. Huntley, L. Kaila, J. Kullberg, T. Tammaru, J. Tennent, J. A. Thomas, and M. Warren. 1999. Poleward shift of butterfly species' ranges associated with regional warming. Nature 399:579-583.
44. Parmesan, C., and G. Yohe. 2003. A globally coherent fingerprint of climate change impacts across natural systems. Nature 421:37-42.
45. Perfors, T., J. Harte, and S. E. Alter. 2003. Enhanced growth of sagebrush (Artemisia tridentata) in response to manipulated ecosystem warming. Global Change Biology 9:736-742.
46. Peterson, A. T. 2003a. Projected climate change effects on Rocky Mountain and Great Plains birds: Generalities of biodiversity consequences. Global Change Biology 9:647-655.
47. Peterson, A. T. 2003b. Subtle recent distributional shifts in Great Plains endemic bird species. Southwestern Naturalist 48:289-292.
48. Peterson, A. T. In press. Predictability of the geography of species' invasions via ecological niche modeling. Quarterly Review of Biology.
49. Peterson, A. T., and R. D. Holt. In press. Niche differentiation in Mexican birds: Using point occurrences to detect ecological innovation. Ecology Letters.
50. Peterson, A. T., M. A. Ortega-Huerta, J. Bartley, V. Sanchez-Cordero, J. Soberon, R. H. Buddemeier, and D. R. B. Stockwell. 2002. Future projections for Mexican faunas under global climate change scenarios. Nature 416:626-629.
51. Peterson, A. T., V. Sanchez-Cordero, J. Soberon, J. Bartley, R. H. Buddemeier, and A. G. Navarro-Siguenza. 2001. Effects of global climate change on geographic distributions of Mexican Cracidae. Ecological Modelling 144:21-30.
52. Peterson, A. T., J. Soberon, and V. Sanchez-Cordero. 1999. Conservatism of ecological niches in evolutionary time. Science 285:1265-1267.
53. Peterson, A. T., H. Tian, E. Martinez-Meyer, B. Huntley, J. Soberon, and V. Sanchez-Cordero. In press. Modeling distributional shifts of individual species and biomes. In T. E. Lovejoy and L. Hannah, editors. Biodiversity and Climate Change. Yale University Press, New Haven, Conn.
54. Peterson, A. T., and D. A. Vieglais. 2001. Predicting species invasions using ecological niche modeling. BioScience 51:363-371.
55. Pope, V. D., M. L. Gallani, V. J. Rowntree, and R. A. Stratton. 2002. The impact of new physical parametrizations in the Hadley Centre climate model - HadAM3. Hadley Centre for Climate Prediction and Research, Bracknell, Berks, UK.
56. Pounds, J. A., and M. L. Crump. 1994. Amphibian declines and climate disturbance: The case of the golden toad and the harlequin frog. Conservation Biology 8:72-85.
57. Pounds, J. A., M. P. L. Fogden, and J. H. Campbell. 1999. Biological response to climate change on a tropical mountain. Nature 398:611-615.
58. Prentice, I. C., W. Cramer, S. P. Harrison, R. Leemans, R. A. Monserud, and A. M. Solomon. 1992. A global biome model based on plant physiology and dominance, soil properties and climate. Journal of Biogeography 19:117-134.
59. Prentice, I. C., and N. R. Webb. 1989. Developing a global vegetation dynamics model: Results of an IIASA summer workshop. RR-89-7, International Institute for Applied Systems Analysis, Laxenburg, Austria.
60. Rahmstorf, S. 2003. Timing of abrupt climate change: A precise clock. Geophysical Research Letters 30: art. no. 1510.
61. Rice, N. H., E. Martinez-Meyer, and A. T. Peterson. In press. Ecological niche differentiation in the Aphelocoma jays: A phylogenetic perspective. Biological Journal of the Linnean Society.
62. Rosqvist, G. C., and P. Schuber. 2003. Millennial-scale climate changes on South Georgia, Southern Ocean. Quaternary Research 59:470-475.
63. Saavedra, F., D. W. Inouye, M. V. Price, and J. Harte. 2003. Changes in flowering and abundance of Delphinium nuttallianum (Ranunculaceae) in response to a subalpine climate warming experiment. Global Change Biology 9:885-894.
64. Siqueira, M. F. d., and A. T. Peterson. 2003. Global climate change consequences for cerrado tree species. Biota Neotropica, in press.
65. Solanki, S. K., and N. A. Krivova. 2003. Can solar variability explain global warming since 1970? Journal of Geophysical Research-Space Physics 108: art. no. 1200.
66. Steemers, K. 2003. Towards a research agenda for adapting to climate change. Building Research and Information 31:291-301.
67. Tian, H., C. Hall, and Y. Qi. 1998. Modeling primary productivity of the terrestrial biosphere in changing environments: Toward a dynamic biosphere model. Critical Reviews in Plant Sciences 15:541-557.
68. Walther, G.-R., E. Post, P. Convey, A. Menzel, C. Parmesan, T. J. C. Beebee, J.-M. Fromentin, O. Hoegh-Guldberg, and F. Bairlein. 2002. Ecological responses to recent climate change. Nature 416:389-395.
69. Woodward, F. I., M. R. Lomas, and R. A. Betts. 1998. Vegetation-climate feedback in a greenhouse world. Philosophical Transactions of the Royal Society of London B 353:29-39.
70. Zacherl, D., S. D. Gaines, and S. I. Lonhart. 2003. The limits to biogeographical distributions: insights from the northward range extension of the marine snail, Kelletia kelletii (Forbes, 1852). Journal of Biogeography 30:913-924.
NEW FINGERPRINTS OF HUMAN EFFECTS ON CLIMATE

B.D. SANTER
Program for Climate Model Diagnosis and Intercomparison, Lawrence Livermore National Laboratory, Livermore, USA
T.M.L. WIGLEY
National Center for Atmospheric Research, Boulder, USA

INTRODUCTION
In 1988, the Intergovernmental Panel on Climate Change (IPCC) was jointly established by the World Meteorological Organization and the United Nations Environment Programme. The goals of this panel were threefold: to assess available scientific information on climate change, to evaluate the environmental and societal impacts of climate change, and to formulate response strategies. The IPCC's first major scientific assessment, published in 1990, concluded that "unequivocal detection of the enhanced greenhouse effect from observations is not likely for a decade or more" (Houghton et al., 1990, page xxix). Six years later, the IPCC's second scientific assessment reached a more definitive conclusion regarding human impacts on climate, and stated that "the balance of evidence suggests a discernible human influence on global climate" (Houghton et al., 1996, page 4). This cautious sentence marked a paradigm shift in scientific understanding of the nature and causes of recent climate change. The shift arose for a variety of reasons. Chief amongst these was the realization that the cooling effects of anthropogenic sulfate aerosols had partially obscured the warming signal arising from increasing atmospheric concentrations of greenhouse gases (GHGs). A further major area of progress was the increasing use of so-called "fingerprint" studies, which involve detailed statistical comparisons of modeled and observed climate change patterns. "Fingerprinting" relies on the fact that each climate forcing mechanism (e.g., changes in solar irradiance, volcanic dust, sulfate aerosols, or GHG concentrations) has a unique pattern of response in climate records. Fingerprint studies have greatly enhanced our ability to diagnose cause and effect relationships in the climate system. The third and most recent IPCC assessment went one step further than its predecessor, and made an explicit statement about the magnitude of the human effect on climate. It concluded that "There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities" (Houghton et al., 2001, page 4). This conclusion was based on improved estimates of natural climate variability, better reconstructions of temperature fluctuations over the last millennium, continued warming of the climate system, refinements in fingerprint methodology, and the use of results from more (and improved) climate models, driven by more accurate and complete forcing estimates. The "bottom line" conclusions of the IPCC's second and third scientific assessments have been endorsed by other independent evaluations, most recently by the U.S. National Academy of Sciences (NAS, 2001). The clear message from these assessments is that human activities have altered both the chemical composition of the
Earth’s atmosphere and the climate system. Changes in atmospheric composition are an immutable fact, not speculation. Evidence of such anthropogenic perturbations includes increases in well-mixed GHGs, decreases in stratospheric ozone, and changes in the atmospheric burdens of sulfate and soot aerosols. All of these atmospheric constituents interact with incoming solar and outgoing terrestrial radiation and their potential to modify the climate significantly is undeniable. Human-induced changes in their concentrations modify the “natural” radiative balance of Earth’s atmosphere, and therefore perturb climate. Despite the overwhelming scientific evidence of pronounced anthropogenic effects on climate, important uncertainties remain. The experiment that we are performing with the Earth’s atmosphere lacks a suitable control - we do not have a convenient “parallel Earth”, on which there are no human-induced changes in atmospheric chemistry. We must therefore rely on other sources of information to estimate how the Earth’s climate might have evolved in the absence of any human intervention. Such sources include numerical models and proxy data (e.g., tree rings, ice cores, corals, boreholes; Mann and Jones, 2003). Numerical models and paleoclimatic reconstructions will always have uncertainties - so there will always be uncertainties in our estimates of the climate of “undisturbed Earth”, and hence in the magnitude of human effects on climate. One criticism of the “discernible human influence” findings of previous international and national assessments is that they have relied almost exclusively on fingerprint studies involving changes in near-surface temperature. This was a natural focus of initial work, since we live at the Earth’s surface, and are directly affected by surface temperature changes. Instrumental records of near-surface temperature span nearly 150 years and have reasonable spatial coverage for much of this period, making it one of the best-observed climate variables. Furthermore, climate model results indicate that surface temperatures should warm markedly in response to increasing GHG levels, suggesting that it is a good variable to monitor for evidence of an anthropogenic signal. It is not surprising, therefore, that the first fingerprint studies relied heavily on surface temperature (early exceptions are Santer et al., 1996, and Tett et al., 1996, both of which looked at zonal-mean, vertical temperature profiles). More recent fingerprint work, however, has considered a variety of other climate variables, such as changes in ocean heat content (Barnett et al., 2001), Northern Hemisphere sea ice extent (Vinnikov et al., 1999), sea level pressure (Gillett et al., 2003), and tropospheric and stratospheric temperatures (Santer et al., 2003a). These studies illustrate that a human-induced climate change signal is identifiable in many different aspects of the climate system. Such internal consistency, manifest in both observed and simulated climate changes, effectively refutes criticism that IPCC “discernible human influence” conclusions rest on a single variable only. The height of the tropopause is another variable that may provide useful information about the causes of recent climate change (Santer et al., 2003b,c). The tropopause is a transition zone between the turbulently-mixed troposphere (where most weather takes place) and the more stably-stratified stratosphere. 
It can be defined in a variety of different ways, based on thermal, chemical, or dynamical properties of the atmosphere (Hoinka, 1998; Seidel et al., 2001). In the following, we investigate whether changes in the height of the tropopause provide a further “fingerprint” of human effects on climate.
DATA AND ANALYSIS METHODS

We use a standard thermal definition of the tropopause to track changes in its height (World Meteorological Organization, 1957). This definition relies on lapse rates, which provide information about how temperature changes with increasing height (Figure 1). The algorithm that we employ to estimate tropopause height requires, as input, atmospheric temperature profiles (i.e., temperatures measured at discrete pressure levels above the Earth's surface), and computes lapse rates from these profiles (Reichler et al., 1996). The algorithm identifies the level at which the lapse rate falls below a critical value of 2 °C/km, and then remains less than this value for a specified vertical distance. The exact pressure at which the lapse rate attains this critical value is determined by linear interpolation. We refer to this pressure as pLRT ("pressure of the lapse-rate tropopause"); a minimal sketch of this calculation is given at the end of this section.

We calculate pLRT from both climate model and 'observational' temperature data. The model data are from the Department of Energy Parallel Climate Model (PCM), jointly developed by the National Center for Atmospheric Research and Los Alamos National Laboratory (Washington et al., 2000). PCM is a state-of-the-art, coupled atmosphere-ocean General Circulation Model (GCM), which has been used to perform a variety of different climate change experiments (Ammann et al., 2003). We analyze seven different experiments here, which differ in terms of the climate forcings included. The three anthropogenic forcings considered are changes in well-mixed greenhouse gases (G), the direct scattering effects of sulfate aerosols (A), and tropospheric and stratospheric ozone (O). The natural forcings are changes in total solar irradiance (S) and volcanic aerosols (V). In the first five experiments, only a single forcing was varied at a time - e.g., G was varied according to historical changes in GHGs, while A, O, S, and V were held fixed at pre-industrial levels. In the sixth experiment (ALL), G, A, O, S, and V were varied simultaneously. Only the natural forcings were changed in the final integration (SV). G, A, O, and S commence in 1872, while V, SV, and ALL start in 1890. All seven experiments end in December 1999. To obtain more reliable estimates of the climate response to the imposed forcing change (i.e., to enhance the externally-forced signal relative to the 'noise' of internally-generated variability), four realizations of each experiment were performed. Each realization started from slightly different initial conditions. Since each has a different realization of the noise, averaging over all four approximately doubles the signal-to-noise ratio (the amplitude of independent noise falls roughly as 1/√N when N realizations are averaged, and √4 = 2). For a full description of the experiments and forcings, see Santer et al. (2003c). A long (300-year) control integration was also performed, in which all five forcings were held fixed at pre-industrial values. This control run provides information about the unforced variability of the climate system (i.e., the internally-generated noise), which is a critical component of "fingerprint" detection work.

Observational estimates of tropopause height were obtained from 'reanalyses', which are optimal combinations of numerical weather forecasts and observations (Pawson and Fiorino, 1999; Santer et al., 1999). Two different reanalysis products were used. The first (NCEP) was from the National Centers for Environmental Prediction and the National Center for Atmospheric Research (Kalnay et al., 1996).
The second (ERA) was performed by the European Centre for Medium-Range Weather Forecasts (Gibson et al., 1997). NCEP data were available from 1948 to 2001, but data before 1979 are
affected by serious inhomogeneities (Pawson and Fiorino, 1999; Santer et al., 1999) and were not used in our fingerprint analysis. ERA data were available for a much shorter period of time (1979 to 1993). As in the case of PCM data, pLRT was calculated from the NCEP and ERA atmospheric temperature profiles.

Different climate forcing mechanisms have different effects on atmospheric temperature profiles, and hence on pLRT. Previous work has shown that pLRT is strongly influenced by temperature changes of the atmospheric layers above and below the tropopause (Santer et al., 2003b,c). To diagnose how different forcings affect pLRT, it is useful to calculate the average temperatures of the lower stratosphere and mid- to upper troposphere. In the real world, temperatures in these atmospheric layers are monitored by channels 4 and 2 of the satellite-based Microwave Sounding Unit (MSU). We calculated 'equivalent' MSU temperatures from PCM, NCEP, and ERA, using weighting functions to mimic how the MSU instrument samples the real-world atmosphere (Santer et al., 1999). 'Equivalent' MSU temperatures are referred to below as T4 (lower stratosphere) and T2 (mid- to upper troposphere).
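To make the thermal definition concrete, the following minimal sketch computes pLRT from a single idealized temperature profile. It illustrates the WMO-style lapse-rate rule described above rather than reproducing the Reichler et al. (1996) code; the profile values are hypothetical, and the 2 km persistence depth used here for the "specified vertical distance" is our assumption.

```python
import numpy as np

# Hypothetical sounding: pressure levels (hPa), geopotential heights (km),
# and temperatures (deg C). Values are illustrative, not NCEP/ERA/PCM data.
pressure = np.array([1000., 850., 700., 500., 400., 300., 250., 200., 150., 100., 70.])
height = np.array([0.1, 1.5, 3.0, 5.6, 7.2, 9.2, 10.4, 11.8, 13.6, 16.2, 18.5])
temp = np.array([25., 15., 5., -12., -25., -45., -55., -62., -64., -63., -60.])

def lapse_rate_tropopause(p, z, t, crit=2.0, depth=2.0):
    """Pressure (hPa) of the lowest level where the lapse rate -dT/dz falls
    below `crit` (deg C/km) and the mean lapse rate over the next `depth` km
    remains below `crit` (a WMO-style thermal definition)."""
    gamma = -(t[1:] - t[:-1]) / (z[1:] - z[:-1])  # layer lapse rates (deg C/km)
    z_mid = 0.5 * (z[1:] + z[:-1])                # layer midpoints
    p_mid = 0.5 * (p[1:] + p[:-1])
    for i in range(len(gamma) - 1):
        if gamma[i] >= crit and gamma[i + 1] < crit:
            # Linear interpolation to the exact crossing of the critical value.
            frac = (gamma[i] - crit) / (gamma[i] - gamma[i + 1])
            p_trop = p_mid[i] + frac * (p_mid[i + 1] - p_mid[i])
            z_trop = z_mid[i] + frac * (z_mid[i + 1] - z_mid[i])
            # Persistence check: lapse rate must stay below `crit`, on
            # average, for `depth` km above the candidate tropopause.
            above = (z_mid > z_trop) & (z_mid <= z_trop + depth)
            if above.any() and gamma[above].mean() < crit:
                return p_trop
    return np.nan  # no tropopause found in this profile

print(f"pLRT = {lapse_rate_tropopause(pressure, height, temp):.1f} hPa")
```

Applied to gridded model or reanalysis temperatures, the same calculation is simply repeated at every grid point and time step to build pLRT fields.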
GLOBAL-MEAN CHANGES

In both the PCM ALL and SV experiments, there is a small overall increase in tropopause height from 1890 to roughly 1965. This increase corresponds to an overall decrease in the pressure of the tropopause: i.e., pLRT declines by roughly 1-2 hPa (Figure 2).[i] The multiple realizations of ALL and SV provide information about uncertainties in the climate response to the imposed forcings. These uncertainties arise from the inherently unpredictable noise of internally-generated variability. The four realizations of ALL and of SV define two different 'envelopes' of possible climate trajectories. These two envelopes clearly diverge in the 1980s. In 'PCM world', it is clear that the increase in tropopause height over the last two decades of the 20th century could not be due to the combined effects of natural internal variability and forcing by the Sun and volcanoes alone. From 1965 to 1999, tropopause height increases markedly in ALL, which includes anthropogenic forcings, but not in SV, which has solar and volcanic forcing only. The simulated height increase in ALL is in agreement with the global-mean height changes in both reanalyses (Figure 2).

Superimposed on these multi-decadal changes in tropopause height are large, short-term (2-3 year) height decreases associated with the eruptions of Santa Maria (in 1902), Agung (in 1963), El Chichón (in 1982) and Pinatubo (in 1991). These volcanic signals are large relative to the internal 'noise' of pLRT in volcanically-quiescent periods. The volcanic tropopause height signals in PCM are larger than in NCEP and ERA, for reasons discussed in Santer et al. (2003c).

Figure 3 provides a simple conceptual model for interpretation of these low- and high-frequency pLRT changes. Calculations performed with simple radiative-convective models and more complex atmospheric GCMs show that increases in atmospheric CO2 cool the stratosphere and warm the troposphere (e.g., Hansen et al., 1984; Manabe and Wetherald, 1987). Both of these changes tend to increase tropopause height. Depletion of stratospheric ozone also causes a net increase in tropopause height through strong cooling of the stratosphere.[ii] In contrast, volcanic aerosols injected into the stratosphere absorb incoming solar radiation and outgoing longwave radiation, thus warming the stratosphere and cooling the troposphere.
Both of these changes decrease tropopause height. The previously-described pLRT changes in PCM, NCEP, and ERA are qualitatively consistent with this simple conceptual model; a toy quantitative version of the model is sketched below, after the discussion of the individual forcing experiments.

CONTRIBUTIONS OF INDIVIDUAL FORCINGS TO TROPOPAUSE HEIGHT CHANGES

The PCM "individual forcing" experiments help to isolate the effects of different forcings on the atmospheric temperature profile, and hence on tropopause height. From the ensemble means of G, A, O, S, V, and ALL, we first calculated zonal-mean, monthly-mean anomalies,[iii] and then computed the total linear changes[iv] in T4, T2, and pLRT over 1900-1999.

Consider first the results for stratospheric temperature changes (Figure 4A). ALL is characterized by coherent cooling of the stratosphere over the 20th century. This cooling is smallest in the tropics and largest at high latitudes in the Southern Hemisphere. Stratospheric ozone depletion is the major contributor to the T4 changes in ALL, consistent with findings by Ramaswamy et al. (1996). Ozone forcing also influences the hemispheric asymmetry in ALL's T4 changes,[v] since ozone-induced cooling is largest at high latitudes in the Southern Hemisphere, where stratospheric ozone decreases are greatest. The stratospheric temperature response to well-mixed GHGs has a similar (but smaller) hemispheric asymmetry. The total linear changes in T4 caused by A, S, and V are much smaller than those arising from O and G. The direct (scattering) effect of anthropogenic sulfate aerosols yields a small net cooling of the stratosphere over 1900-1999, while the assumed increase in solar irradiance in the S experiment[vi] leads to a slight warming of the stratosphere.[vii] Volcanic aerosols also cause a small overall warming of the stratosphere, primarily due to the occurrence of two large eruptions near the end of the century.

The troposphere warms at most latitudes in the ALL case, with maximum warming in the tropics (Figure 4B).[viii] Well-mixed GHGs are the major contributor to this warming. The positive contribution of G is partly offset by the small net tropospheric coolings caused by sulfate aerosols, ozone, and volcanoes. Note that the T2 changes induced by sulfate aerosols have pronounced hemispheric asymmetry, with maximum tropospheric cooling in the Northern Hemisphere (where anthropogenic sulfate aerosol forcing is largest). Although the net effect of ozone changes is to cool T2, low-latitude increases in tropospheric ozone warm the tropical troposphere. Changes in solar irradiance also yield a small warming of T2.

Figure 4C illustrates that ozone and well-mixed GHGs are the dominant influences on the tropopause height changes in ALL. Both forcings increase the height of the tropopause at all latitudes. The hemispheric asymmetry in ALL's height changes (with largest height increases at high latitudes in the Southern Hemisphere) is primarily due to hemispheric asymmetries in the T4 responses to O and G (Figure 4A). As expected on the basis of our conceptual model (Figure 3), A and V produce slight decreases in tropopause height (increases in pLRT). Since forcing by S warms both T4 and T2, with offsetting effects on pLRT, the net effect of S depends on the relative magnitudes of stratospheric and tropospheric warming. In PCM, tropospheric warming predominates, and the net effect of S is a small increase in tropopause height.
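As promised above, the conceptual model can be made semi-quantitative with a toy two-layer atmosphere: a troposphere with constant lapse rate capped by an isothermal stratosphere, so that the tropopause sits where the two profiles intersect. This idealization and every number in the sketch are our own illustrative assumptions, not PCM output, and the magnitudes depend entirely on the assumed values.

```python
# Toy two-layer atmosphere: if the troposphere (constant lapse rate
# gamma_trop) warms uniformly by dT_trop and the isothermal stratosphere
# changes by dT_strat, the intersection (tropopause) shifts upward by
#     dz = (dT_trop - dT_strat) / gamma_trop
gamma_trop = 6.5          # assumed tropospheric lapse rate (deg C per km)
dT_trop = +0.5            # assumed tropospheric warming (deg C)
dT_strat = -1.5           # assumed stratospheric cooling (deg C)

dz = (dT_trop - dT_strat) / gamma_trop          # tropopause rise (km)

# Convert the height change to a pressure change with the hydrostatic
# approximation dp = -p * dz / H (scale height H ~ 7 km, p ~ 200 hPa).
p_trop, H = 200.0, 7.0
dp = -p_trop * dz / H

print(f"tropopause rises by ~{dz * 1000:.0f} m, i.e. pLRT falls by ~{-dp:.1f} hPa")
```

Reversing the signs (stratospheric warming and tropospheric cooling, as after a large volcanic eruption) lowers the tropopause, matching the short-lived pLRT increases seen after Santa Maria, Agung, El Chichón, and Pinatubo.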
Figure 5 summarizes the effects of different forcings on the global-mean changes in tropopause height, T4, and T2. It shows the total linear changes in these three variables for the ALL experiment and the five individual forcing cases (G, A, O, S, and V). Over 1900 to 1999, anthropogenic forcing by ozone and well-mixed GHGs explains over 80% of the overall height increase in ALL (Figure 5A; see also Santer et al., 2003c). The ozone effect on pLRT is manifest primarily through cooling of the stratosphere (Figure 5B). The influence of well-mixed GHGs occurs mostly through warming of the troposphere (Figure 5C). In PCM, natural external forcings make only a small contribution to the simulated 20th-century changes in tropopause height and atmospheric temperatures.

FINGERPRINT ANALYSIS

We have shown that there is consistency between the global-mean tropopause height changes in reanalyses and the PCM experiment with combined natural and anthropogenic forcing (Figure 2). Results from the SV experiment indicate that this global-mean correspondence could not be achieved by natural forcing only. Global-mean agreement alone, however, does not provide compelling evidence of causality. A more reliable way of studying cause-effect relationships involves comparison of modeled and observed spatial patterns. As noted previously, the expected pattern of tropopause height change in response to combined natural and anthropogenic forcing (the "fingerprint") has pronounced meridional structure and hemispheric asymmetry (Figure 4C). These large-scale features are physically interpretable: they are related to the characteristic patterns of zonal-mean atmospheric temperature change arising from forcing by O and G (Figure 4A,B). Identification of the PCM ALL fingerprint pattern in reanalyses would enable us to attribute observed tropopause height changes to the combined effects of anthropogenic and natural forcing.

We use a standard "fingerprinting" technique (Hasselmann, 1979). The fingerprint f is obtained from the PCM ALL ensemble mean, and therefore represents the expected climate-change response to combined forcing by G, A, O, S, and V. Our strategy is to search for an increasing expression of f in the reanalysis tropopause height data, and to estimate the "detection time" - the time at which the fingerprint becomes consistently identifiable at a stipulated 5% significance level. We use both "raw" and "optimized" versions of f, referred to in Figure 6 as "RAW" and "OPT". Optimization is a standard statistical procedure that enhances the detectability of f by rotating the fingerprint away from high-noise components.[ix] We also perform the fingerprint analysis with and without the global-mean component of tropopause height change. Removal of global-mean changes focuses attention on smaller spatial scales, and constitutes a more rigorous test of model/data pattern similarity. Fingerprinting relies on estimates of the internally-generated climate noise, which are obtained here from 300-year control runs (with no changes in external forcings) performed with PCM and the ECHAM4/OPYC model (ECHAM) of the Max-Planck Institute for Meteorology in Hamburg (Roeckner et al., 1999). The use of control runs from two different models accounts for uncertainties in model-based estimates of internal variability. We note that our fingerprint test is performed with full latitude/longitude patterns of tropopause height change rather than the zonal-mean patterns shown in Figure 4C.
The full latitude/longitude patterns are shown in Santer et al. (2003c), which also provides further technical details of the fingerprinting procedure.

In our fingerprint method, the assumed "start date" for monitoring tropopause height changes is 1979 (the beginning of both the ERA data and of the more reliable portion of the NCEP reanalysis). We need a minimum of 10 years of reanalysis data in which to search for the model-based fingerprint f. If f cannot be detected in the first 10-year "chunk" of NCEP or ERA (i.e., in data from 1979-1988), we add an additional year of data (1989), and repeat the detection procedure. This extension of the analysis period continues until we have either detected f, or the reanalysis data end (in 2001 for NCEP, and 1993 for ERA).

When the global mean is included, the ALL tropopause height fingerprint can be identified in both NCEP and ERA (Figure 6). Detection of the "RAW" fingerprint occurs in 1988 - the earliest possible detection date. Optimization of f cannot improve this result (which is already the best possible). When the global mean is removed, and attention focuses on smaller spatial scales, f can be identified in the NCEP data only, and at a later date (1995) than in the "mean included" case. This suggests that the ERA record (which ends in 1993) is simply too short to identify the "mean removed" fingerprint. Our detection results are relatively insensitive to the choice of model control run used to estimate natural variability noise or optimize the fingerprint.
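The detection-time calculation lends itself to a compact schematic. The sketch below is a deliberately simplified, hypothetical stand-in for the Hasselmann-style procedure: it projects "observed" anomaly maps onto a fixed fingerprint pattern, fits a least-squares trend to the projected signal over successively longer periods, and compares each trend with the distribution of same-length trends in a control run. All arrays are randomly generated for illustration; the real analysis uses the PCM fingerprint, reanalysis data, and additional machinery (notably optimization, and the requirement that detection be consistent in all subsequent years) omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
ncell, nyear = 200, 23        # grid cells; years 1979-2001 (hypothetical)

# Synthetic stand-ins: a fixed fingerprint pattern, 'observations' containing
# that pattern with slowly growing amplitude plus noise, and a long control
# run providing noise-only data.
fingerprint = rng.standard_normal(ncell)
fingerprint /= np.linalg.norm(fingerprint)
signal_amp = 0.05 * np.arange(nyear)                     # grows with time
obs = np.outer(signal_amp, fingerprint) + 0.3 * rng.standard_normal((nyear, ncell))
control = 0.3 * rng.standard_normal((300, ncell))        # 300-yr control run

def trend(y):
    """Least-squares slope of y against time (per year)."""
    return np.polyfit(np.arange(len(y)), y, 1)[0]

def detection_time(obs, control, pattern, start_year=1979, min_len=10, alpha=0.05):
    s_obs = obs @ pattern                  # project maps onto the fingerprint
    s_ctl = control @ pattern
    for n in range(min_len, len(s_obs) + 1):
        b = trend(s_obs[:n])
        # Null distribution: trends of the same length in control segments.
        null = np.array([trend(s_ctl[i:i + n]) for i in range(len(s_ctl) - n + 1)])
        if b > np.quantile(null, 1.0 - alpha):   # one-sided 5% test
            return start_year + n - 1
    return None  # not detected before the record ends

print("detection year:", detection_time(obs, control, fingerprint))
```

In this toy setting, the detection year is simply the first year at which the observed trend exceeds the 95th percentile of the control-run trends; shrinking trend uncertainty with record length is what eventually makes a growing signal detectable.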
CONCLUSIONS

In evaluating how well a novel has been crafted, it is important to look at the internal consistency of the plot. Critical readers examine whether the individual storylines are neatly woven together, and whether the internal logic makes sense. We can ask similar questions about the "story" contained in observational records of climate change.

The evidence from previous fingerprint studies indicates that the climate system provides us with an internally consistent account of the causes of recent climate change. Over the last century, the Earth's oceans and land surface have warmed (Barnett et al., 2001; Jones et al., 1999). Glaciers have retreated over most of the globe. Sea level has risen. Snow and sea-ice extent have decreased in the Northern Hemisphere (Vinnikov et al., 1999). The stratosphere has cooled, and there are now reliable indications that the troposphere has warmed (Mears et al., 2003; Santer et al., 2003a). All of these changes are consistent with our scientific understanding of how the climate system should be responding to anthropogenic forcing. They are not consistent with the changes that we would expect to occur due to natural forcings only.

Changes in tropopause height are yet another piece of the climate-change puzzle. Other studies have documented an increase in the height of the tropopause in radiosondes and reanalyses (Highwood et al., 2000; Randel et al., 2000; Seidel et al., 2001). Our work is the first to illustrate that observed changes are consistent with climate model results (Santer et al., 2003b,c). This consistency holds for both global-mean increases, as well as for more complex spatial patterns of tropopause height change. We find that the simulated increase in tropopause height is mainly due to two anthropogenic factors: the effects of stratospheric ozone depletion (which cools the stratosphere) and increases in well-mixed greenhouse gases (which warm the troposphere). Both effects are necessary to explain the observed height increases.
Our results strengthen the scientific case for a discernible human impact on global climate. We note, however, that there are still significant uncertainties in climate models, in observations, in estimates of climate forcings, and in our physical understanding of the climate system. The difficult problem that confronts us - as citizens of this planet - is how to act in the face of both scientific uncertainty and growing scientific evidence that our actions are altering global climate. We should be very clear about one point. The decisions we reach today will influence the climate that future generations inherit.

REFERENCES
1. Ammann, C.M., G.A. Meehl, W.M. Washington, and C.S. Zender, 2003: A monthly and latitudinally varying forcing dataset in simulations of 20th century climate. Geophys. Res. Lett., 30, 1657, doi:10.1029/2003GL016875.
2. Barnett, T.P., D.W. Pierce, and R. Schnur, 2001: Detection of anthropogenic climate change in the world's ocean. Science, 292, 270-274.
3. Gibson, J.K., P. Kållberg, S. Uppala, A. Hernandez, A. Nomura and E. Serrano, 1997: ECMWF Re-Analysis Project Report Series. 1. ERA Description. 66 pp.
4. Gillett, N.P., F.W. Zwiers, A.J. Weaver, and P.A. Stott, 2003: Detection of human influence on sea-level pressure. Nature, 422, 292-294.
5. Hansen, J., A. Lacis, D. Rind, L. Russell, P. Stone, I. Fung, R. Ruedy, and J. Lerner, 1984: Climate sensitivity: Analysis of feedback mechanisms. In: Climate Processes and Climate Sensitivity (Eds. J. Hansen and T. Takahashi). Maurice Ewing Series 5, Geophysical Monograph 29, 130-163. American Geophysical Union, Washington D.C.
6. Hasselmann, K., 1979: In: Meteorology of Tropical Oceans (Ed. D.B. Shaw). Royal Meteorological Society of London, London, U.K., pp. 251-259.
7. Highwood, E.J., B.J. Hoskins, and P. Berrisford, 2000: Properties of the Arctic tropopause. Q. J. R. Meteorol. Soc., 126, 1515-1532.
8. Hoinka, K.P., 1998: Statistics of the global tropopause pressure. Mon. Weath. Rev., 126, 3303-3325.
9. Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell, and C.A. Johnson, 2001: Climate Change 2001: The Scientific Basis. Cambridge University Press, Cambridge, U.K., 881 pp.
10. Houghton, J.T., L.G. Meira Filho, B.A. Callander, N. Harris, A. Kattenberg, and K. Maskell, 1996: Climate Change 1995: The Science of Climate Change. Cambridge University Press, Cambridge, U.K., 572 pp.
11. Houghton, J.T., G.J. Jenkins, and J.J. Ephraums, 1990: Climate Change. The IPCC Scientific Assessment. Cambridge University Press, Cambridge, U.K., 365 pp.
12. Hoyt, D.V., and K.H. Schatten, 1993: A discussion of plausible solar irradiance variations, 1700-1992. J. Geophys. Res., 98, 18895-18906.
13. Jones, P.D., M. New, D.E. Parker, S. Martin, and I.G. Rigor, 1999: Surface air temperature and its changes over the past 150 years. Rev. Geophys., 37, 173-199.
14. Kalnay, E., M. Kanamitsu, R. Kistler, W. Collins, D. Deaven, L. Gandin, M. Iredell, S. Saha, G. White, J. Woolen, Y. Zhu, M. Chelliah, W. Ebisuzaki, W. Higgins, J. Janowiak, K.C. Mo, C. Ropelewski, J. Wang, A. Leetma, R. Reynolds, R. Jenne and D. Joseph, 1996: The NCEP/NCAR 40-year reanalysis project. Bull. Amer. Meteor. Soc., 77, 437-471.
15. Manabe, S., and R.T. Wetherald, 1987: Large scale changes of soil wetness induced by an increase in atmospheric carbon dioxide. J. Atmos. Sci., 44, 1211-1235.
16. Mann, M.E., and P.D. Jones, 2003: Global surface temperatures over the past two millennia. Geophys. Res. Lett., 30, 1820, doi:10.1029/2003GL017814.
17. Mears, C.A., M.C. Schabel, and F.W. Wentz, 2003: A reanalysis of the MSU channel 2 tropospheric temperature record. J. Climate (in press).
18. National Academy of Sciences, 2001: Climate Change Science. An Analysis of Some Key Questions. National Academy Press, Washington D.C., 29 pp.
19. Pawson, S. and M. Fiorino, 1999: A comparison of reanalyses in the tropical stratosphere. Part 3: Inclusion of the pre-satellite data era. Clim. Dyn., 15, 241-250.
20. Ramaswamy, V., M.D. Schwarzkopf and W.J. Randel, 1996: Fingerprint of ozone depletion in the spatial and temporal pattern of recent lower-stratospheric cooling. Nature, 382, 616-618.
21. Randel, W.J., F. Wu, and D.J. Gaffen, 2000: Interannual variability of the tropical tropopause derived from radiosonde data and NCEP reanalyses. J. Geophys. Res., 105, 15509-15523.
22. Reichler, T., M. Dameris, R. Sausen, and D. Nodorp, 1996: A global climatology of the tropopause height based on ECMWF analyses. Institut für Physik der Atmosphäre, Report No. 57, Deutsche Forschungsanstalt für Luft- und Raumfahrt, 23 pp.
23. Roeckner, E., L. Bengtsson, J. Feichter, J. Lelieveld, and H. Rodhe, 1999: Transient climate change simulations with a coupled atmosphere-ocean GCM including the tropospheric sulfur cycle. J. Climate, 12, 3004-3032.
24. Santer, B.D., K.E. Taylor, T.M.L. Wigley, T.C. Johns, P.D. Jones, D.J. Karoly, J.F.B. Mitchell, A.H. Oort, J.E. Penner, V. Ramaswamy, M.D. Schwarzkopf, R.J. Stouffer, and S. Tett, 1996: A search for human influences on the thermal structure of the atmosphere. Nature, 382, 39-46.
25. Santer, B.D., J.J. Hnilo, J.S. Boyle, C. Doutriaux, M. Fiorino, D.E. Parker, K.E. Taylor, and T.M.L. Wigley, 1999: Uncertainties in observationally-based estimates of temperature change in the free atmosphere. J. Geophys. Res., 104, 6305-6333.
26. Santer, B.D., T.M.L. Wigley, G.A. Meehl, M.F. Wehner, C. Mears, M. Schabel, F.J. Wentz, C. Ammann, J. Arblaster, T. Bettge, W.M. Washington, K.E. Taylor, J.S. Boyle, W. Brüggemann, and C. Doutriaux, 2003a: Influence of satellite data uncertainties on the detection of externally-forced climate change. Science, 300, 1280-1284.
27. Santer, B.D., R. Sausen, T.M.L. Wigley, J.S. Boyle, K. AchutaRao, C. Doutriaux, J.E. Hansen, G.A. Meehl, E. Roeckner, R. Ruedy, G. Schmidt, and K.E. Taylor, 2003b: Behavior of tropopause height and atmospheric temperature in models, reanalyses, and observations: Decadal changes. J. Geophys. Res., 108, 4002, doi:10.1029/2002JD002258.
28. Santer, B.D., M.F. Wehner, T.M.L. Wigley, R. Sausen, G.A. Meehl, K.E. Taylor, C. Ammann, J. Arblaster, W.M. Washington, J.S. Boyle, and W. Brüggemann, 2003c: Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science, 301, 479-483.
29. Seidel, D.J., R.J. Ross, J.K. Angell, and G.C. Reid, 2001: Climatological characteristics of the tropical tropopause as revealed by radiosondes. J. Geophys. Res., 106, 7857-7878.
30. Shaviv, N.J., and J. Veizer, 2003: Celestial driver of Phanerozoic climate? GSA Today, 13, 4-10.
31. Tett, S.F.B., J.F.B. Mitchell, D.E. Parker and M.R. Allen, 1996: Human influence on the atmospheric vertical temperature structure: Detection and observations. Science, 274, 1170-1173.
32. Vinnikov, K.Y., A. Robock, R.J. Stouffer, J.E. Walsh, C.L. Parkinson, D.J. Cavalieri, J.F.B. Mitchell, D. Garrett, and V.F. Zakharov, 1999: Global warming and Northern Hemisphere sea ice extent. Science, 286, 1934-1937.
33. Washington, W.M., J.W. Weatherly, G.A. Meehl, A.J. Semtner Jr., T.W. Bettge, A.P. Craig, W.G. Strand, J. Arblaster, V.B. Wayland, R. James and Y. Zhang, 2000: Parallel Climate Model (PCM) control and transient simulations. Clim. Dyn., 16, 755-774.
34. Wigley, T.M.L., B.D. Santer, J.M. Arblaster, C. Ammann, G.A. Meehl, and M.F. Wehner, 2003: Testing for additivity in climate model responses to external forcing: The effect of model drift. In preparation.
35. World Meteorological Organization (WMO), 1957: Meteorology: A three-dimensional science. Second session of the Commission for Aerology. WMO Bull., 6(4), 134-138.
Figure 1: Typical low-latitude atmospheric temperature profile in the NCEP reanalysis. Results are climatological means over 1979-1997, and have been averaged from the equator to 30°N. The estimated tropopause pressure, pLRT, was computed with the algorithm of Reichler et al. (1996), and is indicated by the dashed horizontal line.
Figure 2: Time series of global-mean monthly-mean anomalies in tropopause pressure, pLRT. Model results are from the PCM experiments with combined natural and anthropogenic forcing (ALL), and with natural forcing only (SV). There are four realizations of each experiment. The shaded areas mark the range between the highest and lowest values of the realizations. Darker shading denotes ALL results; lighter shading represents SV. The ensemble-mean values of ALL and SV are given by thin solid and thin dashed lines (respectively). All model pLRT anomalies were defined relative to climatological monthly means computed over 1890-1909. Estimates of 'observed' pLRT are from the NCEP reanalysis (thick solid line). Pre-1960 NCEP data were ignored because of deficiencies in the coverage and quality of assimilated radiosonde data. NCEP was forced to have the same mean as ALL over 1960-1999. Both PCM and NCEP results were low-pass filtered. Note that an increase in tropopause height corresponds to a decrease in pLRT.
Figure 3: Conceptual model for the effect of three different forcings on tropopause height. The solid black lines are the 'baseline' atmospheric temperature profiles. The forcings shown here - depletion of stratospheric ozone, increasing atmospheric CO2, and explosive volcanic eruptions - perturb this base state. The effect of the first two forcings is to increase tropopause height (indicated by the upward-pointing arrows), while volcanic aerosols decrease the height of the tropopause. The actual temperature perturbations associated with these forcings are more complex as a function of both altitude and latitude than the idealized changes illustrated here (Santer et al., 2003b).
Figure 4: Zonal-mean changes in stratospheric temperature (T4; panel A), tropospheric temperature (T2; panel B) and tropopause height (pLRT; panel C). Results are expressed as the total linear changes over 1900-1999, and were computed using ensemble-mean data from the PCM G, A, O, S, V, and ALL experiments.
Figure 5.: Total linear changes in global-mean, monthly-mean tropopause height (A), stratospheric temperature (B), and tropospheric temperature (C) in PCM experiments with individual forcings (G, A, 0, S, and V) and combined natural and anthropogenic forcings (ALL). Linear changes were calculated for the period 1900-1999 using ensemble-mean data. Anomalies were defined relative to climatological monthly means computed over 1900-1999. “SUM” is the sum of the linear changes in the five experiments with individual forcing.
[Figure: bar chart of detection times. Left half: global-mean component included; right half: global mean removed. Rows pair 'observations' (NCEP, ERA) with natural-variability estimates from the PCM and ECHAM control runs.]
Figure 6.: Detection times for the tropopause height "fingerprint" simulated in the PCM ALL experiment. The fingerprint is the (latitude/longitude) pattern of tropopause height change in response to combined natural and anthropogenic forcing, and is defined as described in Santer et al. (2003c). The fingerprint is searched for in two different 'observational' datasets (the NCEP and ERA reanalyses). Both raw and optimized versions of the fingerprint are employed (RAW and OPT). The left (right) side of the figure shows detection results achieved when the global-mean component of tropopause height change is included (removed). Control runs from two different models (PCM and ECHAM) are used for estimating natural variability statistics and optimizing the fingerprint. The presence of a bar indicates positive detection of the fingerprint. The longer the bar, the earlier the detection date. The absence of a bar means that the fingerprint could not be identified before the end of the reanalysis (2001 for NCEP, 1993 for ERA).
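The detection procedure behind Figure 6 rests on projecting observed anomaly maps onto the model fingerprint and asking when the trend of the resulting signal time series rises above the range generated by unforced control-run variability. The sketch below is a heavily simplified, unoptimized version of that idea (it omits the EOF truncation and optimization performed in the actual study); the function names and the quantile-based noise test are our own illustrative choices:

```python
import numpy as np

def detection_signal(obs, fingerprint, weights):
    """Project observed anomaly maps onto a model fingerprint.

    obs         : array (n_times, n_points), observed anomaly maps
    fingerprint : array (n_points,), model-predicted change pattern
    weights     : array (n_points,), area weights (e.g. cos latitude)
    Returns the signal time series S(t) = sum_x w_x * obs(t, x) * f(x).
    """
    return obs @ (weights * fingerprint)

def is_detected(signal, noise_trends, p=0.05):
    """Crude detection test: compare the least-squares trend of S(t)
    with the distribution of trends from equal-length, unforced
    control-run segments."""
    b = np.polyfit(np.arange(signal.size), signal, 1)[0]
    lo, hi = np.quantile(noise_trends, [p / 2.0, 1.0 - p / 2.0])
    return b < lo or b > hi
```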
i. It is also in agreement with the changes in ERA (not shown).
ii. Depletion of stratospheric ozone cools both the stratosphere and the troposphere. These changes have effects of opposite sign on tropopause height. The stratospheric cooling influence predominates, so the net effect of ozone depletion is to raise tropopause height (Figure 3).
iii. Relative to climatological monthly means over 1900-1999.
iv. The total linear change is expressed as b × n, where b is the slope parameter of the least-squares linear trend (in hPa/month or °C/month), calculated over n months.
v. Note that there is residual climate drift in the PCM control run. This drift is related to the behavior of sea ice. It is not hemispherically symmetric, and has a pronounced signature in near-surface temperature fields (Wigley et al., 2003). Since this residual climate drift was not subtracted from the PCM perturbation experiments analyzed here, it influences the results shown in Figure 4, and imparts some hemispheric asymmetry to the estimated linear changes. This "drift contamination" effect is largest close to the Earth's surface, and is probably small for T4 changes.
vi. Total solar irradiance changes were prescribed according to Hoyt and Schatten (1993). There was no wavelength dependence of the forcing.
vii. Recently, Shaviv and Veizer (2003) have argued that solar-induced changes in cosmic ray flux influence the behavior of low-level tropical clouds, and that this mechanism might explain nearly two-thirds of observed near-surface temperature changes over the past 150 years. This argument cannot explain the pronounced cooling of the extratropical stratosphere seen in radiosondes and satellite-based MSU measurements (see, e.g., Santer et al., 1999, 2003a). Cosmic ray-induced amplification of the direct effect of solar irradiance changes (direct irradiance changes are the only solar forcing employed here) would also lead to marked discrepancies between modeled and observed T2 changes.
viii. Note that T2 cools at high latitudes in the Southern Hemisphere. This occurs because some of the strong stratospheric cooling in this region is sampled by the T2 weighting function used to compute an 'equivalent' MSU temperature.
ix. Here, this rotation is performed in the space of the first 15 Empirical Orthogonal Functions (EOFs) of the PCM and ECHAM control runs.
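Endnote iv's definition of a total linear change (the least-squares slope b multiplied by the number of months n) is easy to reproduce. A minimal Python sketch, with an illustrative function name of our own choosing:

```python
import numpy as np

def total_linear_change(y):
    """Total linear change b * n of a monthly series y, where b is
    the slope of the least-squares linear trend (in units per month)
    computed over the n months of the series."""
    y = np.asarray(y, dtype=float)
    n = y.size
    b = np.polyfit(np.arange(n), y, 1)[0]   # slope, units per month
    return b * n
```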
PALEOCLIMATE IMPLICATIONS FOR RECENT HUMAN INFLUENCE ON CLIMATE

MICHAEL E. MANN
Department of Environmental Sciences, University of Virginia, Charlottesville, USA

Documenting past temperature changes is of particular importance in placing recent climate change in an appropriate long-term context. Because the fundamental boundary conditions on the climate (the parameters of the earth's orbit relative to the sun, and global patterns of vegetation and continental ice cover) have not changed appreciably over the past one-to-two millennia, the variations that have occurred in climate during this period are likely representative of the range of natural climate variability that might be expected in the absence of any human influence on climate. Placing modern (20th-21st century) global warming in this longer-term context can thus aid in our ability to determine the role of anthropogenic factors, such as human greenhouse gas concentration increases, in more recent climate changes.

Instrumental meteorological records can only provide a widespread direct indication of climate changes over roughly the past century. Prior to that, relatively few instrumental records are available (Jones et al, 1999). However, a number of distinct approaches exist for describing climate variations over a timeframe of roughly the past millennium. Among these is the use of theoretical models of the climate system that are driven with estimated changes in external parameters or 'forcings', such as greenhouse gas concentrations and solar output, which tend to warm the climate, and the frequency and intensity of explosive volcanic eruptions, which cool the climate through injecting reflective sulphate aerosol into the atmosphere (Crowley, 2000; Gerber et al, 2002; Bauer et al, 2003). This approach to reconstructing the past temperature history is limited by the imperfectly known history of these changes in forcing, which are typically estimated indirectly from trapped gas, radioisotopes and volcanic dust signals left behind in ice cores (Crowley, 2000). Moreover, the reconstruction is limited by any uncertainties in the model's representation of actual climate processes and responses. Finally, the approach only indicates the forced component of climate change; it cannot estimate the possible role of internal dynamics (e.g. natural changes in ocean circulation) in the actual climate changes of past centuries.

Human documentary evidence provides another source for reconstructing climate in past centuries (see e.g. Wigley et al, 1981; Bradley, 1999). Records of frost dates, droughts, famines, the freezing of water bodies, duration of snow cover, and phenological evidence (e.g. the dates of flowering of plants, and the ranges of various species of plants) can provide insight into past climate conditions. Human accounts of mountain glacier retreats and advances during past centuries, diary or anecdotal evidence of weather conditions, and even a handful of several-centuries-long thermometer measurements, furthermore, are available in Europe for more than 1000 years. Plentiful human documentary evidence is limited, however, to those regions (primarily Europe and Eastern Asia) where a written tradition existed. Such records are thus not useful for documenting hemispheric, let alone global-scale, climate variations several centuries into the past. It is also important to note that human documentary records must be interpreted with caution,
as they are not equivalent in their reliability to actual instrumental measurements of meteorological variables. Typically, these records provide indirect and potentially biased snapshots in time of climate-related phenomena.

Geological evidence from terminal glacial moraines indicating the advance of mountain glaciers in past centuries can also provide inferences into past climate changes. However, owing to the complex balance between local changes in melting and ice accumulation, and the effects of topography, all of which influence mountain glacier extents, it is difficult to ascertain the true nature of temperature changes simply from evidence of retreat of mountain glaciers (e.g. Folland et al, 2001). Both increased winter precipitation and cooler summer temperatures can lead to the growth of a glacier, and large or even moderate glaciers respond slowly to any underlying climate changes. Estimates of long-term ground temperature trends from temperature profiles retrieved from terrestrial borehole data over the globe (Huang et al, 2000) can also provide complementary large-scale information regarding ground surface temperature trends in past centuries, but impacts of changes in seasonal snow cover, land-surface changes, and other factors appear to yield biased estimates of past surface air temperature changes from such data (Mann et al, 2003).

A more quantifiable source of information on climate changes in past centuries is provided by 'proxy' indicators of climate change: natural archives of information which, by their biological or physical nature, record climate-related phenomena (Bradley, 1999; Folland et al, 2001). Certain proxy indicators, including most sediment cores, ice cores, and preserved pollen, do not have the capability to record climate changes at high temporal resolution, and are thus generally not useful for reconstructing climate changes over the past several centuries. However, high-resolution (annually-resolved) proxy climate records, such as growth and density measurements from tree rings, laminated sediment cores, annually resolved ice cores, and isotopic information from corals, can be used to describe year-to-year patterns of climate in past centuries (Bradley and Jones, 1993; Mann et al, 1998; 1999; Jones et al, 1998; Crowley and Lowery, 2000; Briffa et al, 2001). These indirect measurements of climate change vary considerably in their reliability as indicators of long-term climate (varying in the degree of influence by non-climatic effects, the seasonal nature of the climate information recorded, and the extent to which the records have been "verified" by comparison with independent data). However, when taken together, and particularly when combined with historical documentary information and the few long instrumental climate records available, they can provide a meaningful annually-resolved documentation of large-scale climate changes in past centuries, including patterns of surface temperature change, drought, and atmospheric circulation. Global assemblages of such high-resolution proxy indicators can be related to the modern instrumental surface temperature record, and then used to yield reconstructions of large-scale surface temperature patterns over several centuries, back over roughly the past millennium. The detailed spatial patterns of these reconstructions can often provide insights into the underlying climate changes.
Hemispheric or global mean surface temperature estimates can be derived by spatially averaging over such estimates of past large-scale patterns of surface temperature change. It is most meaningful to estimate the average surface temperature changes for the Northern Hemisphere, where considerably more widespread past data are available. Different estimates of Northern Hemisphere temperature changes over the past millennium have been made, using different sources of proxy data, some of which are representative of the full Northern Hemisphere, others of which are more indicative of extra-tropical regions of the Northern Hemisphere.

The various proxy and model-based hemispheric temperature estimates, taken on the whole, indicate relatively modest variations in temperature over the past 1000 years, prior to the pronounced 20th century warming (Figure 1). At the full hemispheric scale, temperatures appear to have been slightly warmer (a couple of tenths of a degree C) during the period AD 1000-1400 than during the later, colder period AD 1400-1900. There is, however, no other multi-decadal period during the past millennium that is comparable, in warmth, to the latter 20th century. In fact, this conclusion likely extends to at least the past two millennia (Briffa and Osborn, 1999; Yang et al, 2002), though the fewer available proxy data lead to considerably greater uncertainty in reconstructions prior to the past thousand years. Some extra-tropical summer temperature reconstructions (Esper et al, 2002) suggest wider swings in temperature, including greater cooling during the 17th-19th centuries than is evident in either the instrumental, model, or proxy-based estimates. The greater variability in this case probably arises from the emphasis on extratropical continental regions (Mann, 2002).

Regional variations in surface temperature in past centuries appear to have often been of greater amplitude than those for the hemispheric mean, but were often not synchronous. In fact, offsetting patterns of cooling and warming in different regions in past centuries (e.g. Bradley and Jones, 1993; Hughes and Diaz, 1994; Crowley and Lowery, 2000) yield the far more modest variations evident in hemispheric mean temperature. For this reason, regional evidence can often provide a very misleading picture of large-scale climate changes. While European temperatures indicate a distinct cold phase from the 17th-19th centuries, for example (the so-called "Little Ice Age" of Europe, e.g. Lamb, 1965), with average winter temperatures during the coldest decades of the 17th century probably 2°C or more colder relative to late 20th century temperatures (e.g. Shindell et al, 2001), the trend in Northern Hemisphere mean temperature (i.e. Figure 1) shows a far smaller, and more steady, long-term pattern of cooling prior to the 20th century. It is probable that the greater cooling during the "Little Ice Age" of Europe was associated with a regional enhancement of cooling by a tendency for the 'negative phase' of an atmospheric circulation pattern, the 'North Atlantic Oscillation' (NAO), that is associated with a southward displacement in the mean position of the winter jet stream over the North Atlantic ocean and neighboring continental regions. Recent evidence suggests that the NAO can be altered, for example, by changes in the output of the sun (Shindell et al, 2001). Such changes may have led to enhanced cooling in Europe during the 18th century, when solar output was slightly weaker, and winter atmospheric circulation changes favored strong cooling in Europe, but much weaker cooling for the Northern Hemisphere on the whole (Shindell et al, 2001). The complexity of such regional variations in temperature trends underscores the principle that it is perilous to draw conclusions regarding hemispheric or global temperature trends from isolated regional information.
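Returning to the spatial averaging mentioned above, a minimal Python sketch of an area-weighted (cos-latitude) Northern Hemisphere mean on a regular latitude-longitude grid might look like the following; the function name and grid layout are illustrative assumptions, not code from any of the cited studies:

```python
import numpy as np

def nh_mean(field, lats):
    """Area-weighted Northern Hemisphere mean of a gridded field.

    field : array (..., n_lat, n_lon) of temperature anomalies
    lats  : array (n_lat,) of grid latitudes in degrees
    Grid boxes are weighted by cos(latitude), the usual
    approximation to box area on a regular lat-lon grid.
    """
    nh = lats >= 0.0
    w = np.cos(np.deg2rad(lats[nh]))
    zonal = np.nanmean(field[..., nh, :], axis=-1)   # average over longitude
    return (zonal * w).sum(axis=-1) / w.sum()        # weighted latitude mean
```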
While some challenges remain in reducing uncertainties in the regional details of past climate change, the conclusion that late 20th century hemispheric-scale warmth is anomalous in a long-term (at least millennial) context, and that anthropogenic factors likely play an important role in explaining the anomalous recent warmth, represents the consensus view of the climate research community (Folland et al, 2001).
[Figure: time axis 1000-2000 A.D.]
Figure 1.: Comparison of proxy-based NH temperature reconstructions [Jones et al, 1998; Mann et al, 1999; Crowley and Lowery, 2000] with model simulations of NH mean temperature changes over the past millennium based on estimated radiative forcing histories [Crowley, 2000; Gerber et al, 2002 (results shown for both a 1.5°C and a 2.5°C sensitivity to doubled CO2); Bauer et al, 2003]. Also shown is an independent reconstruction of warm-season extra-tropical continental NH temperatures [Esper et al, 2002]. All reconstructions have been scaled to the annual, full Northern Hemisphere mean, over an overlapping period (1856-1980), using the NH instrumental record [Jones et al, 1999] for comparison, and have been smoothed on time scales of >40 years to highlight the long-term variations. The smoothed instrumental record (1856-2000) is also shown. The gray shading indicates estimated two-standard-error uncertainties in the Mann et al [1999] reconstruction. [Adapted from Mann, 2002.]
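The scaling described in the caption (matching each reconstruction to the instrumental record over a common interval) can be sketched as follows in Python. This simple mean-and-variance matching is one plausible reading of "scaled to the ... mean over an overlapping period"; the names and the exact rescaling rule are our own assumptions:

```python
import numpy as np

def scale_to_instrumental(recon, instr, overlap):
    """Rescale a reconstruction to match the instrumental record.

    recon, instr : arrays on the same (e.g. annual) time axis
    overlap      : boolean mask selecting the common interval
                   (e.g. 1856-1980 in Figure 1)
    The reconstruction is shifted and stretched so its mean and
    standard deviation over the overlap match the instrumental data.
    """
    r, i = recon[overlap], instr[overlap]
    return (recon - r.mean()) / r.std() * i.std() + i.mean()
```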
REFERENCES
1. Bauer, E., Claussen, M., Brovkin, V., Assessing climate forcings of the earth system for the past millennium, Geophys. Res. Lett., 30 (6), 1276-1279, doi:10.1029/2002GL016639, 2003.
2. Bradley, R.S., Paleoclimatology: reconstructing climates of the Quaternary, Harcourt Academic Press, San Diego, 610 pp, 1999.
3. Bradley, R.S., and P.D. Jones, "Little Ice Age" summer temperature variations: their nature and relevance to recent global warming trends, The Holocene, 3, 367-376, 1993.
4. Bradley, R.S., Briffa, K.R., Crowley, T.J., Hughes, M.K., Jones, P.D., Mann, M.E., Scope of Medieval Warming, Science, 292, 2011-2012, 2001.
5. Briffa, K.R., T.J. Osborn, F.H. Schweingruber, I.C. Harris, P.D. Jones, S.G. Shiyatov, and E.A. Vaganov, Low-frequency temperature variations from a northern tree-ring density network, J. Geophys. Res., 106, 2929-2941, 2001.
6. Briffa, K.R., and T.J. Osborn, Seeing the Wood from the Trees, Science, 284, 926-927, 1999.
7. Crowley, T.J., Causes of Climate Change Over the Past 1000 Years, Science, 289, 270-277, 2000.
8. Crowley, T.J., and T. Lowery, How Warm Was the Medieval Warm Period?, Ambio, 29, 51-54, 2000.
9. Esper, J., E.R. Cook and F.H. Schweingruber, Low-frequency signals in long tree-line chronologies for reconstructing past temperature variability, Science, 295, 2250-2253, 2002.
10. Folland, C.K., T.R. Karl, J.R. Christy, R.A. Clarke, G.V. Gruza, J. Jouzel, M.E. Mann, J. Oerlemans, M.J. Salinger, S.-W. Wang, Observed Climate Variability and Change, in Climate Change 2001: The Scientific Basis, edited by J.T. Houghton et al., pp. 99-181, Cambridge Univ. Press, New York, 2001.
11. Gerber, S., F. Joos, P. Brügger, T.F. Stocker, M.E. Mann, S. Sitch, and M. Scholze, Constraining temperature variations over the last millennium by comparing simulated and observed atmospheric CO2, Climate Dynamics, 20, 281-299, 2003.
12. Huang, S., H.N. Pollack and P.-Y. Shen, Temperature Trends Over the Past Five Centuries Reconstructed from Borehole Temperature, Nature, 403, 756-758, 2000.
13. Hughes, M.K. and H.F. Diaz, "Was there a 'Medieval Warm Period' and if so, when and where?", Climatic Change, 26, 109-142, 1994.
14. Jones, P.D., M. New, D.E. Parker, S. Martin, and I.G. Rigor, Surface air temperature and its changes over the past 150 years, Reviews of Geophysics, 37, 173-199, 1999.
15. Lamb, H.H., The early medieval warm epoch and its sequel, Palaeogeography, Palaeoclimatology, Palaeoecology, 1, 13-37, 1965.
16. Mann, M.E., The Value of Multiple Proxies, Science, 297, 1481-1482, 2002.
17. Mann, M.E., R.S. Bradley, and M.K. Hughes, Global-scale temperature patterns and climate forcing over the past six centuries, Nature, 392, 779-787, 1998.
18. Mann, M.E., R.S. Bradley, and M.K. Hughes, Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations, Geophysical Research Letters, 26, 759-762, 1999.
19. Mann, M.E., Rutherford, S., Bradley, R.S., Hughes, M.K., Keimig, F.T., Optimal Surface Temperature Reconstructions Using Terrestrial Borehole Data, Journal of Geophysical Research, 108 (D7), 4203, doi:10.1029/2002JD002532, 2003.
20. Shindell, D.T., Schmidt, G.A., Mann, M.E., Rind, D., Waple, A., Solar forcing of regional climate change during the Maunder Minimum, Science, 294, 2149-2152, 2001.
21. Wigley, T.M.L., Ingram, M.J., Farmer, G., Past Climates and their impact on Man: a review, in Climate and History, Cambridge University Press, eds T.M.L. Wigley, M.J. Ingram, G. Farmer, pp. 3-50, 1981.
22. Yang, B., Braeuning, A., Johnson, K.R. and Yafeng, S., General characteristics of temperature variation in China during the last two millennia, Geophys. Res. Lett., doi:10.1029/2001GL014485, 2002.
EVIDENCE FOR GLOBAL WARMING
DAVID PARKER AND CHRIS FOLLAND
Hadley Centre, Met Office, U.K.

ABSTRACT
We review the evidence for global warming, firstly focusing on ocean surface water and air temperatures, and on air temperatures measured at land stations. We assess the various uncertainties involved in the use of these data to monitor climate, namely time-varying biases, random and sampling errors, and the uncertainties arising from the absence of data in substantial regions of the world, especially in the earlier parts of the instrumental record. We then consider the radiosonde and satellite temperature records for the atmosphere up to the lower stratosphere. In the troposphere the pattern of warming is somewhat different from that at the surface, while in the lower stratosphere cooling is seen, as would be expected with an increase in greenhouse gases and a decrease in ozone. We offer an explanation as to why almost worldwide surface warming in the last 25 years has not been accompanied by warming in the tropical troposphere.

SOURCES OF UNCERTAINTY IN TEMPERATURE ANALYSES
The major sources of uncertainty in estimating regional and global temperature deviations* and trends are: incomplete geographical and temporal coverage of data; random measurement and sampling errors; and uncertainties due to systematic biases and/or in the bias-corrections. Application of optimum averaging techniques, and additional estimates of bias-uncertainties in global blended land surface air temperatures and sea surface temperatures, show that the influences of data gaps, and of random measurement and sampling errors, are much smaller than the effects of uncertainties in our knowledge of the biases (Figure 1 and Folland et al., 2001b). Despite the latter uncertainties, the total two-standard-error uncertainty is only about one third of the warming signal, as expressed in the conclusion of the Intergovernmental Panel on Climate Change that global warming in the twentieth century was 0.6 ± 0.2°C. Broadly similar results are seen for the two hemispheres.
* These are usually expressed as differences from a reference or "normal" period such as 1961-1990, and termed "anomalies".
[Figure: time axis 1900-2000.]
FIGURE 1.: Global surface temperature anomalies (°C), 1861-2000, smoothed with a 21-term binomial filter. Uncertainties (shading: ±2σ) were estimated from annual values allowing for serial correlation. Dark shading: uncertainties from reduced-space optimal averaging alone; intermediate shading: with added sea surface temperature bias-correction and urbanisation uncertainties; light shading: also including land thermometer exposure uncertainties. From Folland et al., 2001b.

It is worth pointing out that the situation would be somewhat different on substantially smaller space scales. Here uncertainties, due to data gaps in particular, would increase quite strongly in the late nineteenth and early twentieth centuries, except for a few well-observed regions like Europe.
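The 21-term binomial filter used in Figure 1 weights each point by normalized binomial coefficients, giving a near-Gaussian low-pass smoother. A minimal Python sketch (leaving series end-points unsmoothed, whereas published series typically apply special end-point treatments) is:

```python
import numpy as np
from scipy.special import comb

def binomial_smooth(x, nterms=21):
    """Smooth a series with an nterms-point binomial filter.

    Weights are the binomial coefficients C(nterms-1, k), k = 0..nterms-1,
    normalized to sum to 1 (they sum to 2**(nterms-1) before scaling).
    End-points are left unsmoothed in this sketch.
    """
    k = np.arange(nterms)
    w = comb(nterms - 1, k) / 2.0 ** (nterms - 1)   # normalized weights
    y = np.asarray(x, dtype=float).copy()
    half = nterms // 2
    y[half:-half] = np.convolve(x, w, mode="valid")  # interior points only
    return y
```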
REASONS FOR CONFIDENCE IN TEMPERATURE ANALYSES
The agreement between global trends of land surface air temperature, marine air temperature and sea surface temperature over the last 150 years (Figure 2, updated and slightly amended from IPCC 2001 (Folland et al, 2001a)) constitutes strong evidence for the reality of the trends, because these data sets are obtained by almost independent methods, and wholly independent ones after 1893. The agreement between the land surface air temperature and the sea surface temperature is supported (Folland et al., 2001b) by the close agreement between the observed trends of land surface air temperature and those simulated by an atmospheric model forced with the observed sea surface temperatures, when these include the Folland and Parker (1995) bias corrections used in Figure 1. A third strand in our confidence is circumstantial evidence: worldwide glacial retreat, reduction of Arctic sea-ice cover, changes in the temperature of the ground as measured from borehole data, and phenological changes.
[Figure: Global Average Temperatures, annual anomalies, 1860 to June 2003.]
Figure 2.: Comparison of long-term changes in global land surface air temperature, marine air temperature and sea surface temperature. Note that the recent land data show significantly more warming than in Folland et al (2001a). This is due to very warm years over land in 2001 and 2002 and some improvements in the land data themselves (Jones and Moberg, 2003).
PARADOX OF THE COOLING TROPICAL TROPOSPHERE
In the late 1950s the radiosonde network became sufficiently widespread to allow estimates of global temperature anomalies. Since then, overall global warming trends at the surface and in the low-mid troposphere (2-10 km aloft) have been similar (Figure 3). However, between the mid-1960s and around 1980 the surface warmed less than the air aloft globally, whereas since then the reverse has been true (inset to upper panel of Figure 3). In particular, since the initiation of satellite Microwave Sounding Unit temperature retrievals in 1979, the globally averaged troposphere has warmed less than the surface, resulting in widespread controversy (Wallace et al. (U.S. National Research Council), 2000; Folland et al., 2001a). In fact the key changes are mainly seen in the global tropics. Problems of urbanisation at the surface can be discounted as a major cause. This is because the satellite and surface trends are almost identical over the USA and Europe, regions where recent strong urbanisation might most be suspected as a factor; in addition, much of the tropics, where the major discrepancies occur, is oceanic.
[Figure: two panels. (a) Seasonal lower tropospheric and surface temperature anomalies, DJF 1957/58 to SON 2002; (b) seasonal lower stratospheric anomalies.]
Figure 3.: Global temperature variations for (a) the surface and low-mid troposphere and (b) the lower stratosphere. In (a), Microwave Sounding Unit MSU 2LT retrievals since 1979 (Christy et al., 2003; heavy black) are compared with radiosonde-based temperatures (Parker et al., 1997; thin black) and surface data (Jones et al., 2001; grey) since 1958. In (b), MSU Channel 4 and Stratospheric Sounding Unit SSU 15X retrievals since 1979 (Spencer and Christy, 1993; Christy et al., 2003; heavy black) are compared with radiosonde-based temperatures (Parker et al., 1997; thin black) since 1958. Values (°C) are for 3-month running seasons ending September to November 2002, and are expressed relative to the 1981-2000 average. The stratospheric radiosonde data are adjusted to compensate for known instrumental changes since 1979, using MSU Channel 4 as a reference.
In the tropics, observations of the troposphere (up to 15 km) since the late 1970s actually show a slight cooling (Figure 4). This is contrary to model predictions and differs from trends at the tropical surface. Qualitative agreement between radiosonde and satellite microwave sounding unit data in the lower and middle troposphere suggests that the cooling is real. It is most significant in areas where deep atmospheric convection occurs.

THE STRATOSPHERE
Cooling of the stratosphere is expected when there is an increase in carbon dioxide, because carbon dioxide is a strong emitter of long-wave radiation, and in the stratosphere such emissions to space are relatively unhindered. However, the magnitude of recent global lower-stratospheric cooling (Figures 3 and 4) is considerably greater than expected from carbon dioxide alone, and can only be explained when observed reductions in lower-stratospheric ozone are taken into account (World Meteorological Organization, 2002). Even though the reduction of ozone in the tropical stratosphere has been small, strong stratospheric cooling is truly global, partly because of the worldwide influence of carbon dioxide, though the cooling in the lower stratosphere is not statistically significant near the equator (Figure 4 and World Meteorological Organization, 2002), where there is strong, natural, inter-annual variability. There is some evidence that the cooling of the stratosphere has altered the heat balance and the convective characteristics of the air below in the tropics: this is a subject of ongoing research but may partially explain the results in Figure 4. There may be other partial explanations. Thus the heat balance of the tropics may vary on multi-decadal time scales due, for example, to changes in the behaviour of the El Niño phenomenon.
[Figure: three panels of zonal-mean trends vs. latitude (90°N to 90°S); panels labelled annual, DJF, and JJA.]
Figure 4.: Trends in zonal-mean temperature anomalies for 1979-2001 (°C/decade), based on radiosonde (Parker et al., 1997) and surface (Jones and Moberg, 2003) data. In the radiosonde dataset, stratospheric temperatures were adjusted to be consistent with MSU retrievals for the lower stratosphere (Christy et al., 2000) where known instrument changes occurred and the biases were found to be significant using a Student's t-test. The thick black contour indicates trends significant at the 5% level (two-sided t-test). Top: annual; lower left: December to February; lower right: June to August.
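The trend-significance test named in the caption (a two-sided t-test on a least-squares slope) can be sketched in a few lines of Python. This sketch ignores serial correlation, which real assessments of monthly climate data must account for, and the function name is our own:

```python
import numpy as np
from scipy import stats

def trend_significance(y, months_per_decade=120):
    """Least-squares trend of a monthly series and a two-sided
    t-test on the slope (null hypothesis: slope is zero).

    Returns (trend per decade, two-sided p-value). Serial
    correlation is ignored here, so p-values are optimistic.
    """
    t = np.arange(np.asarray(y).size)
    res = stats.linregress(t, y)     # slope per month, plus p-value
    return res.slope * months_per_decade, res.pvalue
```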
30’5
60
98
CONCLUSIONS
We can now estimate uncertainties in global and regional temperature trends. The changing biases in the data contribute most to these uncertainties on global or hemispheric space scales. To place future estimates of observed climatic changes on a surer footing, a set of Climate Monitoring Principles for the Global Climate Observing System has been endorsed by the World Meteorological Organization and the United Nations Framework Convention on Climate Change (Appendix 1). It is intended to apply these principles to a comprehensive Global Climate Observing System which will evolve from the current, less satisfactory system that is more attuned to the less demanding needs of weather forecasting.

Nevertheless, despite the substantial uncertainties in the observational record, both the direct and indirect evidence for global warming at the Earth's surface is strong, leading IPCC 2001 to conclude that global warming over the last century is "virtually certain" (a probability exceeding 99%). However, much remains to be understood regarding temperature trends aloft. The requirement for a homogeneous, globally complete Global Climate Observing System, integrating satellite as well as in situ observing networks, is clear (Appendix 1).

REFERENCES
1. Christy, J.R., Spencer, R.W. and Braswell, W.D., 2000: MSU tropospheric temperatures: dataset construction and radiosonde comparisons. Journal of Atmospheric and Oceanic Technology, 17, 1153-1170.
2. Christy, J.R., Spencer, R.W., Norris, W.B., Braswell, W.D. and Parker, D.E., 2003: Error estimates of Version 5.0 of MSU/AMSU bulk atmospheric temperatures. Journal of Atmospheric and Oceanic Technology, 20, 613-629.
3. Folland, C.K. and Parker, D.E., 1995: Correction of instrumental biases in historical sea surface temperature data. Quarterly Journal of the Royal Meteorological Society, 121, 319-367.
4. Folland, C.K., Karl, T.R., Christy, J.R., Clarke, R.A., Gruza, G.V., Jouzel, J., Mann, M.E., Oerlemans, J., Salinger, M.J. and Wang, S.-W., 2001a: Observed Climate Variability and Change. In: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, ed. J.T. Houghton et al., Cambridge Univ. Press.
5. Folland, C.K., Rayner, N.A., Brown, S.J., Smith, T.M., Shen, S.S.P., Parker, D.E., Macadam, I., Jones, P.D., Jones, R.N., Nicholls, N. and Sexton, D.M.H., 2001b: Global temperature change and its uncertainties since 1861. Geophysical Research Letters, 28, 2621-2624.
6. Jones, P.D. and Moberg, A., 2003: Hemispheric and large-scale surface air temperature variations: an extensive revision and an update to 2001. Journal of Climate, 16, 206-223.
7. Jones, P.D., Osborn, T.J., Briffa, K.R., Folland, C.K., Horton, E.B., Alexander, L.V., Parker, D.E. and Rayner, N.A., 2001: Adjusting for sampling density in grid-box land and ocean surface temperature time series. Journal of Geophysical Research, 106, 3371-3380.
8. Parker, D.E., Folland, C.K. and Jackson, M., 1995: Marine surface temperature: observed variations and data requirements. Climatic Change, 31, 559-600.
9. Parker, D.E., Gordon, M., Cullum, D.P.N., Sexton, D.M.H., Folland, C.K. and Rayner, N., 1997: A new gridded radiosonde temperature database and recent temperature trends. Geophysical Research Letters, 24, 1499-1502.
10. Spencer, R.W. and Christy, J.R., 1993: Precision lower stratospheric temperature monitoring with the MSU: technique, validation and results 1979-1991. Journal of Climate, 6, 1194-1204.
11. Wallace, J.M., Christy, J.R., Gaffen, D.J., Grody, N.C., Hansen, J.E., Parker, D.E., Peterson, T.C., Santer, B.D., Spencer, R.W., Trenberth, K.E. and Wentz, F.J., 2000: Reconciling Observations of Global Temperature Change. U.S. National Research Council, National Academy Press, Washington, D.C., 85 pp.
12. World Meteorological Organization, 2002: Scientific Assessment of Ozone Depletion: 2002. WMO Global Ozone Research and Monitoring Project Report No. 47.

Other bibliography referred to in the presentations:
13. Diaz, H.F., Folland, C.K., Manabe, T., Parker, D.E., Reynolds, R.W. and Woodruff, S.D., 2002: Workshop on Advances in the Use of Historical Marine Climate Data (Boulder, CO, USA, 29th Jan - 1st Feb 2002). WMO Bulletin, 51 (4), 377-380.
14. Parker, D.E., 1994: Effects of changing exposure of thermometers at land stations. International Journal of Climatology, 14, 1-31.
15. Rayner, N.A., Parker, D.E., Horton, E.B., Folland, C.K., Alexander, L.V., Rowell, D.P., Kent, E.C. and Kaplan, A., 2003: Global analyses of sea surface temperature, sea ice and night marine air temperature since the late nineteenth century. Journal of Geophysical Research (Atmospheres), 10.1029/2002JD002670.
Appendix 1. Global Climate Observing Principles endorsed by the World Meteorological Organization at its 14th Congress, 2003.
Effective monitoring systems for climate should adhere to the following principles *:
1. The impact of new systems or changes to existing systems should be assessed prior to implementation.
2. A suitable period of overlap for new and old observing systems is required.
3. The details and history of local conditions, instruments, operating procedures, data processing algorithms and other factors pertinent to interpreting data (i.e., metadata) should be documented and treated with the same care as the data themselves.
4. The quality and homogeneity of data should be regularly assessed as a part of routine operations.
5. Consideration of the needs for environmental and climate-monitoring products and assessments, such as IPCC assessments, should be integrated into national, regional and global observing priorities.
6. Operation of historically-uninterrupted stations and observing systems should be maintained.
7. High priority for additional observations should be focused on data-poor regions, poorly-observed parameters, regions sensitive to change, and key measurements with inadequate temporal resolution.
8. Long-term requirements, including appropriate sampling frequencies, should be specified to network designers, operators and instrument engineers at the outset of system design and implementation.
9. The conversion of research observing systems to long-term operations in a carefully-planned manner should be promoted.
10. Data management systems that facilitate access, use and interpretation of data and products should be included as essential elements of climate monitoring systems.
Furthermore, operators of satellite systems for monitoring climate need to:
(a) Take steps to make radiance calibration, calibration-monitoring and satellite-to-satellite cross-calibration of the full operational constellation a part of the operational satellite system; and
(b) Take steps to sample the Earth system in such a way that climate-relevant (diurnal, seasonal, and long-term interannual) changes can be resolved.
Thus satellite systems for climate monitoring should adhere to the following specific principles:
11. Constant sampling within the diurnal cycle (minimizing the effects of orbital decay and orbit drift) should be maintained.
12. A suitable period of overlap for new and old satellite systems should be ensured for a period adequate to determine inter-satellite biases and maintain the homogeneity and consistency of time-series observations.
13. Continuity of satellite measurements (i.e. elimination of gaps in the long-term record) through appropriate launch and orbital strategies should be ensured.
14. Rigorous pre-launch instrument characterization and calibration, including radiance confirmation against an international radiance scale provided by a national metrology institute, should be ensured.
15. On-board calibration adequate for climate system observations should be ensured and associated instrument characteristics monitored.
16. Operational production of priority climate products should be sustained and peer-reviewed new products should be introduced as appropriate.
17. Data systems needed to facilitate user access to climate products, metadata and raw data, including key data for delayed-mode analysis, should be established and maintained.
18. Use of functioning baseline instruments that meet the calibration and stability requirements stated above should be maintained for as long as possible, even when these exist on de-commissioned satellites.
19. Complementary in situ baseline observations for satellite measurements should be maintained through appropriate activities and cooperation.
20. Random errors and time-dependent biases in satellite observations and derived products should be identified.
3. ENDOCRINE DISRUPTING CHEMICALS
THE EMERGING SCIENCE OF ENDOCRINE DISRUPTION

J.P. MYERS
Environmental Health Sciences, White Hall, USA

L.J. GUILLETTE, JR.
Department of Zoology, University of Florida, Gainesville, USA

P. PALANZA, S. PARMIGIANI
Dipartimento di Biologia Evolutiva e Funzionale, Parma University, Parma, Italy

S.H. SWAN
Department of Family and Community Medicine, University of Missouri-Columbia School of Medicine, Columbia, USA

F.S. VOM SAAL
Division of Biological Sciences, University of Missouri-Columbia, Columbia, USA

In this essay we provide an overview of the emerging science of endocrine disruption. We begin with a brief definition and a consideration of several inter-related concepts that are forcing important conceptual shifts in the theory and practice of toxicological science. We then examine some of the empirical bases for these conceptual shifts, first from studies of wildlife, then from studies with laboratory animals. In a final section, we shift to a consideration of the considerable challenges this new view of toxicology creates for human epidemiology. Some progress, however, is being made towards a more "environmentally-sensitive epidemiology", which we describe briefly and illustrate with a recent example relating to marked regional differences in sperm count in men living in the USA.
BACKGROUND
Definitions
All living organisms depend upon a large and intricate array of chemical signaling systems to guide biological development and regulate cell and organ activity (McLachlan 2001). Over the past two decades, scientific interest in the ability of many environmental contaminants to interfere with these sensitive systems has grown dramatically. A hybrid science, the study of endocrine disruption, has arisen from concerns about the effects of these phenomena on health and the environment (Colborn et al. 1996). This science incorporates findings and methodologies from multiple disciplines including toxicology, endocrinology, developmental biology, molecular biology, ecology, behavioral biology and epidemiology.

Endocrine disrupting chemicals (EDCs) are chemicals that can disrupt thyroid hormones, androgens, estrogens and other endocrine processes. EDCs disrupt development by interfering with the hormonal signals that control normal development of the brain and other organ systems. EDCs can also affect adults by similar mechanisms
because these same hormones also play important regulatory roles in adults (Colborn et al. 1993; Colborn et al. 1998). EDCs can act at very low levels of exposure to produce profound effects on the course an organism follows from fertilized oocyte through to maturity, adulthood and death. The effects of EDCs on developing organisms are of greatest concern, since the disruptive effects of developmental exposure are permanent and irreversible (termed organizational effects), whereas EDC exposure produces measurable, activational effects in adults that may be reversible. A related field of research, "developmental origins of health and adult disease", is converging with research on endocrine disruption to consider how exposures during different stages of development, particularly during fetal life, contribute to adult chronic diseases including obesity, heart disease, diabetes, decreased fertility, impaired immune function and neurological deficits.

Data accumulated over the past two decades reveal substantial global contamination by EDCs. Contaminant dispersal is brought about by a combination of factors, including purposeful or accidental release into the environment followed by long-range atmospheric transport. It also occurs because some EDCs have been incorporated inadvertently into consumer products. With regard to long-range transport, large masses of air have been tracked across the Pacific carrying a variety of pollutants from central Asia to the west coast of the US virtually undiluted, including ozone, heavy metals and organochlorine compounds. In addition, so-called "global distillation" processes (repeated sequences of volatilization and condensation) transport semi-volatile compounds from sites of production, use and disposal to colder regions, particularly at high latitude and altitude.

Two of many examples of inadvertent contamination of people due to the use of consumer products involve exposure to phthalates and bisphenol A (Myers 2003). Phthalates are used as additives in cosmetics, intravenous tubing and other polyvinyl chloride (PVC) plastics; PVC products contain phthalates to soften the otherwise brittle PVC. Exposure to bisphenol A is also widespread. Bisphenol A is a monomer (not just an additive) used in the manufacture of resins that line the inner surface of food cans (over 100 billion manufactured per year in the USA alone), and to manufacture polycarbonate plastic, which is used to make food and beverage containers. Phthalates and bisphenol A leach from these products and disrupt endocrine function.

Coincident with emerging knowledge of the ability of EDCs to disrupt a range of developmental processes, a series of emerging human epidemics has been reported. These include increases in the frequency of preterm birth, obesity, and cognitive/behavioral dysfunctions (such as autism and attention deficit hyperactivity disorder, ADHD), and decreases in reproductive function (such as a decline in sperm count) and immune function. The strength of the epidemiological evidence demonstrating these epidemics varies. There is little argument that there has been a widespread increase in rates of obesity and diabetes, but there is still significant debate about global decreases in reproductive function or increases in ADHD, due to limitations of historical data.
While extensive study will be required to identify causes of these trends, their underlying biology suggests that alterations in inter- and intra-cellular signaling processes may be causally involved, and for each of the mentioned epidemics, data are available indicating one or more points of vulnerability to EDCs in the mechanisms of control. EDCs may also contribute importantly to geographic variability in these health endpoints.
Demonstration of such variation in semen quality and its relationship to current-use pesticides as a probable cause is one such example, discussed further below.

Theoretical concepts
New scientific findings on these issues are emerging at an exponential rate. Central to these findings is a reformulation of the traditional dichotomy between nature and nurture (the gene vs. environment argument) in the causation of disease (Figure 1).

[Figure: two schematic diagrams contrasting the traditional formulation (heredity determines phenotype, "what we become") with the new formulation (the environment can disrupt the path from heredity to phenotype).]
Fig 1.: Contrasting traditional with new formulations of the interactions of genes and environment in the determination of phenotype. Traditionally, genetic diseases have been seen as determined by heredity. In the new formulation, patterns of gene expression are vulnerable to disruption by environmental contaminants at multiple points in the sequence of steps that lead to gene expression, thereby rendering genetic diseases susceptible to modification by environmental factors.

In the old formulation, that which is "nature" is based on genes, while "nurture" comes from the environment, sensu lato. Functional status and disease linked to genes have been perceived as completely determined by heredity. Diseases traditionally viewed as non-hereditary ("environmental") can be caused by a wide array of exposures, stressors, experiences, nutrition and other lifestyle factors. Concern about the environment's interaction with the genetic determination of disease and functional differences has focused traditionally upon two pathways: 1. high-dose chemical exposures causing mutations, and thus alterations in the base sequence of genes; and 2. genetic variation among individuals leading some to be more susceptible than others to certain contaminants.

The study of endocrine disruption today is turning this historical conceptualization on its head. Rather than simply being a factor determined by inheritance, a property linked to a gene is one that is vulnerable to environmental disruption, particularly by EDCs. This is because EDCs acting at low levels can act by interfering with gene expression and other cellular activities. Clearly some functional deficits and disease states are due to inherited mutations in genetic makeup, but many more diseases may be associated with alterations in gene expression.

Initially, the majority of research on EDCs focused on interference with gene activation by the hormone 17β-estradiol (the most potent endogenous estrogen). Many EDCs can stimulate genes and other cellular processes in a manner similar to estradiol,
whereas other EDCs antagonize estradiol or block the synthesis of estradiol (Welshons et al. 2003). However, over the last decade, EDCs have been shown to disrupt many other endogenous hormonal signaling molecules, including virtually all steroid hormones that have been carefully tested, as well as thyroid, retinoid, leptin, some transcription factors, growth factors and other molecules not traditionally classified as hormones. One recent study even documents interference with chemical signaling between two symbiotic organisms, the bacterium Rhizobium and its leguminaceous host (Fox et al. 2001). The presumption now is that any chemically-mediated signaling system is vulnerable, in principle, to disruption by chemicals to which wildlife and humans are exposed in their daily lives.

Given the enormous potential for EDCs to interfere with gene expression, how many of the 80,000+ chemicals registered for commercial use have endocrine disrupting activity? The vast majority of chemicals have not been tested in even the most basic way. Far fewer have been tested for endocrine disrupting effects, particularly during embryonic development, the most vulnerable time in life. Altered gene expression during organismal development can induce dramatic changes in developmental outcomes; that is, the disruption is irreversible. Known effects of EDCs range from structural changes to functional deficits. For example, alterations in the production of hormone receptors in tissues, through alteration in the expression of the genes for these receptors, have been shown in experiments with laboratory animals, and these changes can then lead to altered responses to hormonal stimulation throughout the remainder of life. This can, in turn, lead to altered (increased or decreased) susceptibility to contamination with hormonal activity (Gupta 2000; Richter et al. 2000). Altered gene expression and cellular signaling subsequent to development can cause transient changes, termed activational responses, or, particularly through carcinogenesis, permanent detrimental effects. For example, lifetime exposure to estrogen is the best predictor of breast cancer in women, and exposure to EDCs that are "environmental estrogens" could plausibly increase breast cancer risk. Thus, the impact of EDCs will vary depending upon a variety of factors, including when in the life-cycle of the organism exposure occurs, as well as the duration and amount of exposure. Until recently, the great importance of life stage, the very great vulnerability of the embryo, and the fact that consequences of fetal exposure could be entirely different from those seen from adult exposure had not been appreciated.

Collectively, these new data from studies of EDCs are forcing a series of conceptual shifts that undermine long-held assumptions underlying toxicological studies and the applications of results from these studies to developing public health standards. Foremost among these is a challenge to the operating assumption concerning appropriate dose. Focusing on traditional toxicological endpoints, such as gene mutations, weight loss and death, toxicologists customarily worked at what are now viewed as very high doses, typically in the range of parts per million and parts per thousand levels. New data suggest that extremely low doses of EDCs (in the part per billion and even part per trillion range) can cause measurable and highly significant endocrine disruption.
A growing array of studies is revealing changes in gene expression, including both gene suppression and gene activation, as a result of low-level exposure to EDCs. For example, recent work on arsenic, long-established to be toxic at high doses, has revealed that at part per billion levels, arsenic can interfere with glucocorticoid
activation of genes involved in the control of metabolism, response to stress, immune function and the suppression of tumor formation (Kaltreider et al. 2001).

A second important conceptual shift arises from consistent findings that, during the life cycle of an organism, developmental stages are typically far more vulnerable to signal disruption than adult stages. This is thought to occur for several reasons, including the absence of fully developed protective enzyme systems and higher metabolic rates. Most importantly, however, the events underway in development involve a series of organizational choices that are irreversible once the "choice" in development is determined. In sharp contrast, in adults, the processes at play can very often be reversed by removing the EDC, thus returning gene expression levels and organ functioning to normal; these transient effects are termed "activational" effects. One recent example documenting extraordinary differential sensitivity of adult versus developing life-stages was presented by Hayes et al. (2002), finding adverse effects in tadpoles at 1/30,000th the lowest concentration of atrazine, a herbicide, found to produce adverse effects in adults.

One clear implication of this focus on low-level exposure during fetal and neonatal development is that levels of exposure that have been dismissed as "background" and thus "safe" can have deleterious effects. This had never been realized due to the absence of any studies using these low doses, combined with virtually no studies of developmental effects at any dose level. Many laboratory studies now support the conclusion of high sensitivity of the embryo and neonate, as do some epidemiological data from human studies. For example, a series of studies of children born to Dutch mothers, exposed to polychlorinated biphenyls (PCBs) and dioxin through consumption of fish and other food in Dutch markets, have shown that low parts-per-billion concentrations of these contaminants impair cognitive (Koopman-Esseboom 1996) and immune system development (Weisglas-Kuperus 2000). One reported consequence of exposure is a shift in the pattern of play behavior in boys toward patterns more typical of girls (Vreugdenhil 2002).

Toxicological experiments, particularly those used to develop regulatory standards of acceptable levels of exposure to environmental chemicals, have been based upon the assumption that "the dose makes the poison", which implies that high doses invariably cause more harm than lower doses. It has been a surprise to regulators to learn that for hormonally active chemicals, this assumption may not be valid. Endocrinologists and physicians have known for decades that very high doses of hormones and drugs can block rather than stimulate some responses, resulting in what is referred to as a non-monotonic dose-response relationship (effects initially increase and then decrease with increasing dose).
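A toy numerical model can make the shape of such a non-monotonic curve concrete. The Python sketch below combines a saturating receptor-mediated stimulation term with a survival term that falls off at toxic doses; the functional forms, names and parameter values are illustrative assumptions only, not those of Welshons et al.:

```python
import numpy as np

def nonmonotonic_response(dose, ec50=1e-11, ic50=1e-6, hill=1.0):
    """Toy inverted-U dose-response curve.

    Receptor-mediated stimulation (a Hill curve with half-maximal
    activation at ec50, here in the part-per-trillion molar range)
    is multiplied by a cell-survival factor that declines at toxic
    doses near ic50. All parameter values are illustrative only.
    """
    dose = np.asarray(dose, dtype=float)
    stimulation = dose**hill / (dose**hill + ec50**hill)
    survival = ic50 / (dose + ic50)        # cell death at high doses
    return stimulation * survival

# Responses over ten orders of magnitude of concentration:
doses = np.logspace(-14, -4, 101)          # molar concentrations
response = nonmonotonic_response(doses)    # rises, plateaus, then falls
```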
[Figure: proliferative response vs. concentration (molar).]
Figure 2.: Proliferative response of MCF-7 cells to 17-β estradiol over 10 orders of magnitude. Responses in the "physiological range" are mediated by binding with the estrogen receptor. Those in the "toxicological range" reflect cell death. Adapted from Welshons et al. 2003.

For example, recent work by Welshons et al. (2003) examined effects in response to estradiol exposure in a line of human breast cancer cells (Figure 2). Estradiol levels between 0.1 and 100 parts per trillion produced an increased growth response in breast cancer cells, because at these levels an increase in exposure causes an increase in the number of estrogen receptors bound by estradiol, thus leading to increased gene activation. At exposure levels in the typical toxicological dose range (the part per million range), further increases in the dose of estradiol began to produce cell death. This result is extremely important for regulatory toxicology, because the high-level exposures in these experiments are analogous to those used for the prediction of risk posed by low doses, but the actual effects of low doses predicted to be safe have, until very recently, never been examined experimentally (vom Saal and Sheehan 1998). Changes in dose within this very high part per million dose range cannot reveal variations in receptor-mediated gene activation, since all receptors are occupied at doses which are millions of times lower. Hence, testing EDCs at only very high doses is likely to miss signal disruption events that can be expected to occur at much lower levels of exposure. Identification of low-dose effects that are different from those seen at high doses, the importance of timing of exposure, recognition of the unique effects that can be disrupted during development, and genetic variation in susceptibility render the overly simplistic assumptions previously used in risk assessment invalid for many environmental chemicals.

WILDLIFE AND LABORATORY STUDIES
For many decades, we have been concerned with the effects of environmental contaminants on the health and persistence of wildlife populations. Prior to work over the last 10 to 15 years, the vast majority of these studies examined the lethal
consequences of exposure, or they focused on the induction of cancer or major birth defects. Although these endpoints are still critical in the study of toxicology, a growing collection of studies examining diverse wildlife species demonstrates that additional adverse outcomes can be produced in wildlife as a result of exposure to environmental contaminants. A number of these abnormalities have been attributed to the disruption of endocrine signaling (see Colborn and Clement, 1992; Guillette and Crain, 2000). Below, we examine just a few examples of endocrine disruption in wildlife.

Fish, Vitellogenesis and Sewage
In the 1990s, reports were published documenting that male fish living below sewage outfalls in Europe, Great Britain, North America and Japan had elevated plasma concentrations of the yolk protein vitellogenin (see Sumpter and Jobling, 1995). Naturally, vitellogenin is synthesized in the liver of the female following stimulation by elevated plasma estrogens of ovarian origin. Males of many vertebrate classes, including fish, amphibians and reptiles, have the ability to synthesize vitellogenin if stimulated by estrogen, although this does not occur normally. Intensive chemical fractionation of sewage identified two major classes of compounds capable of acting as estrogens in male fish: the pharmaceutical estrogen ethinyl estradiol, and the industrial chemicals nonylphenol and octylphenol. Ethinyl estradiol is a common ingredient of the human birth control pill and is excreted in the urine of females taking this pharmaceutical agent. Ethinyl estradiol has been identified in the surface and reclaimed sewage waters of all continents where such studies have been performed (Kolpin et al. 2002). Similarly, nonylphenol, an alkylphenolic chemical, is widely used in industrial applications as a surfactant and is commonly released into the environment. It is persistent in the ecosystem, with very large concentrations found associated with sediments and organic matter in freshwater and estuarine regions. It has been shown to be weakly estrogenic in mammalian laboratory animals, but is a potent estrogen in many fish (White et al., 1994). Laboratory-based life-cycle testing with ecologically relevant concentrations has shown that both of these compounds have adverse effects on the reproductive potential of males and females, and they also alter sex determination in developing embryos (Tyler and Routledge 1998). These common pollutants have the potential to disrupt the health of individual animals and the persistence of populations; some populations have no males. It has also been suggested that endocrine disruption could be associated with the decline of commercial and sport fish populations.

Alligators and Pesticides
Alligators and crocodiles are long-lived top predator species inhabiting most subtropical and tropical wetlands. Studies begun in the late 1980s reported abnormalities in central and south Florida (USA) populations of the American alligator exposed to various contaminant mixtures associated with modern agriculture, such as insecticides, herbicides and fertilizers (Guillette et al. 2000). These abnormalities include altered plasma sex steroid profiles; altered gonadal, genital and immune tissue anatomy; and altered hepatic steroid metabolism (Guillette and Gunderson 2001).
Specifically, male alligators exposed in ovo (as embryos) to various pesticides, deposited in the eggs by the female prior to laying, exhibit significantly reduced plasma testosterone concentrations, aberrant testicular morphology and small penis size. Females from the
same contaminated locations displayed significantly elevated plasma concentrations of estradiol as neonates but reduced concentrations as sub-adults. Sub-adult females also had elevated plasma concentrations of the potent androgen dihydrotestosterone. They also exhibit a high frequency of polyovular follicles, an ovarian abnormality associated with low fertility and high embryonic mortality. These contaminated populations have shown embryonic mortality greater than 50%. Polyovular follicles are also a documented outcome in women exposed to the estrogenic drug diethylstilbestrol as fetuses, due to their mothers taking this drug during pregnancy, and these exposed women also suffer a decrease in fertility. Populations displaying these abnormalities have elevated egg, tissue or serum concentrations of a wide range of organochlorine pesticides or their metabolites, heavy metals, and other widely used agricultural chemicals, such as nitrates. Experimental exposure of developing alligator embryos to various organochlorine pesticides or their metabolites induces many of the abnormalities seen in wild populations, such as altered plasma hormone profiles and small penis size, as well as altered sex determination (Matter et al. 1998). The concentrations required to induce these abnormalities were in the part-per-trillion to part-per-billion range, 100-1000 times lower than the levels reported in alligator eggs or serum. Recent studies of mosquitofish from the same contaminated lakes indicate that the reported abnormalities are not limited to a single lake or species, as male mosquitofish have reduced tissue concentrations of testosterone, lower sperm counts and altered reproductive behavior (Toft et al. 2003).

Fish and Pulp Mill Effluent

Many studies over many decades have documented the detrimental effects of pulp mill effluent on the environment. Classical ecotoxicology studies reported wide-scale disruption of populations, including the local extinction of many exposed freshwater or estuarine fish and invertebrate populations. Although the processing of pulp mill effluent has been modified over time, abnormalities persist. Studies from several Canadian locations report altered hormone profiles in fish exposed to pulp mill effluent, including alterations in hypothalamic, gonadal and adrenal hormones (McMaster et al. 1996). Exposed fish displayed altered stress responses and altered reproductive performance. Masculinization of females has also been reported. For example, female mosquitofish living below effluent outfalls from paper pulp mills develop a gonopodium, a modified anal fin found in males of this species and used to transfer sperm to the female for internal fertilization (Davis and Bortone, 1992). The gonopodium normally develops in the male following exposure to androgens, specifically testosterone. Masculinized females do reproduce, but they produce fewer offspring and have greatly elevated levels of aromatase activity in the brain and ovary (Orlando et al. 2002). Aromatase is an enzyme that converts the hormone testosterone to estradiol, the principal estrogen in these females. These females thus have an impaired potential to produce this critical hormone, which regulates reproduction.
Fish, Feedlots and Pharmaceuticals

Modern animal production techniques in many countries involve the use of potent hormones and antibiotics. Although there has been an ongoing debate on the safety of the meat products produced by such practices, few concerns have been voiced about their possible ecological impacts. We have recently examined feral fish exposed to effluent, including urine and feces, released from animal feedlots into a natural river system. We observed that male fish exhibited many of the classical signs of androgen exposure, including reduced testicular mass, reduced plasma testosterone and altered head morphology (Orlando et al., submitted). Given the extensive use of anabolic steroids in cattle production, wide-scale disruption of fish reproduction is possible; a precedent already exists in the endocrine disruption of fish by ethinyl estradiol, which reaches rivers after excretion by women and bacterial action in water treatment plants. Experimental laboratory-based studies support our field observations, as low-level exposure to the commonly used anabolic steroid trenbolone alters fish development and reproduction in a manner similar to that observed in the wild fish (Ankley et al. 2003). These and many more observations demonstrate that global contamination has dramatic effects on the health and reproductive potential of wildlife populations. The phenotype we observe in individuals is produced by the environment acting on the genotype. The abnormalities we observe in wildlife are not due to classically held concepts of gene mutation. Instead, they represent alterations in the timing and level of gene expression. If exposure occurs during embryonic development, these alterations can be permanent (see Guillette et al. 1995). Wildlife has served as a sentinel for human health for centuries. An important issue is whether the abnormalities reported in wildlife provide a warning that human health and development are at risk.

Studies with Laboratory Animals

Numerous studies in laboratory animals have documented that low-level exposure to environmental chemicals, including pesticides (herbicides, insecticides and fungicides) and chemicals contained in a range of industrial products (e.g., phthalate additives in polyvinyl chloride plastic and the monomer bisphenol A used in the manufacture of polycarbonate plastic), profoundly disrupts embryonic and fetal development. Parmigiani and colleagues at the University of Parma administered the widely used insecticide methoxychlor to pregnant mice. The offspring were examined for neurochemical changes in the dopaminergic system in the basal ganglia of the brain. Neurons that use dopamine as a neurotransmitter (the dopaminergic neural system) are involved in the control of locomotor activity and exploration. The basal ganglia were studied because one of the major impacts of methoxychlor on behavior is to increase exploratory activity toward novel stimuli (Parmigiani et al. 1998, Palanza et al. 2002). This is also the area of the brain where degeneration occurs in Parkinson's disease, with its associated changes in behavior. A change in behavior, particularly in females, was associated with a decrease in dopamine receptors in the basal ganglia. Males exposed to methoxychlor also showed an increase in territorial behavior, which is associated with aggressiveness.
These findings show that permanent changes in brain function and behavior are associated with very low levels of exposure to this pesticide, levels
previously considered to be completely safe (P. Palanza, F. Morellini and S. Parmigiani, unpublished observation). Since 1997, a large number of peer-reviewed journal articles have been published showing that bisphenol A causes harm in animals at levels to which the average human is exposed. Bisphenol A is another chemical that, like methoxychlor, can bind to estrogen receptors and initiate cellular responses similar to those caused by estradiol. However, bisphenol A was initially, and incorrectly, thought to be only a very weak estrogen-mimicking chemical. Recent experiments have shown that at "low doses" previously predicted to be safe on the basis of models, not data, bisphenol A has dramatic adverse effects. Recent findings include chromosomal damage in developing oocytes in mouse ovaries, and abnormalities in the entire reproductive system of male mice, including a decrease in testicular sperm production and a decrease in fertility. In addition, fetal exposure to bisphenol A increases the rate of postnatal growth and decreases the age at which females mature sexually (go through puberty). These females also have mammary gland abnormalities that appear pre-cancerous by the time they reach young adulthood. Bisphenol A also causes abnormal brain development, and changes in brain function and behavior similar to those caused by methoxychlor.

ASSESSING RISKS POSED BY EDCS TO HUMAN HEALTH

It is likely that EDCs pose a significant threat to human health that classical epidemiological methods may not have the sensitivity to detect. Human studies of EDCs, which fall under the broader heading of environmental epidemiology, share many features with studies of environmental exposures that are not endocrine disruptors, such as radon or total suspended particulates. However, they differ from non-EDC studies in several important ways (study hypothesis, exposure(s), effect(s), model selection, analysis and interpretation) that make detection of effects more difficult. We will consider these points and then examine a recent study that circumvents at least some of these problems. What triggers an investigation of the link between an EDC and a human health effect? Traditional ("classical") epidemiological studies were often designed to investigate unusual patterns of human health outcomes. Perhaps the most dramatic of these was the investigation of diethylstilbestrol (DES) in response to a cluster of seven cases of a rare vaginal cancer (clear cell adenocarcinoma) in young women. Similarly, an awareness of increasing rates of lung cancer triggered the first studies of smoking and lung cancer. Some epidemiological studies of EDCs have similar origins. Indeed, DES itself is a quintessential EDC, and current research into possible EDC involvement in breast cancer causation and fertility impairment has been provoked by observations of human trends. Many epidemiological questions raised by EDCs have their origins, however, in observations of impacts on laboratory animals and wildlife. These include the possible role of EDCs in increases in hypospadias, the effects of phthalates on male fertility, and the impact of polybrominated diphenyl ethers on neurocognitive development. In each of these cases, and many more, pronounced laboratory and field effects provoke questions about human impacts based on animal observations. All else being equal, the ability of an epidemiological study to identify the cause of an adverse outcome decreases as the prevalence of the outcome and the number of causal
factors increase. For example, the identification of DES as the cause of clear cell vaginal adenocarcinoma in young women was relatively easy because very few cases of this rare cancer had ever been documented in this age group, and no other cause has ever (before or since) been identified. Conversely, causes of breast cancer are notoriously hard to find, not only because it is a complex, multifactorial disease but because of its extremely high lifetime incidence (one in eight women). The metaphor of signal detection may be helpful in clarifying this point: high background levels of a disease contribute background "noise" (as do alternative causes and errors in exposure identification and diagnosis) and make the "signal" (the association under investigation) difficult to detect. Epidemiology handles diseases of low incidence and strong associations well, but multifactorial diseases of high incidence only poorly. Consider the following thought experiment (Figure 3). Imagine a population of 5,000 women with a significant but not unusual spontaneous miscarriage rate of 10%, normally distributed. In that hypothetical population one would expect about 500 miscarriages. Now expose 1% of the women to a contaminant that increases the risk of miscarriage by X-fold, with X increasing from 1 (no effect on risk) to 10. Elevation in risk would have to be more than nine-fold before the signal of exposure-induced miscarriage rose above the background noise.

[Figure 3 (x-axis: elevation in risk, -fold): The expected number of miscarriages in a population of 5,000 women as a function of risk elevation due to exposure to a hypothetical contaminant. See text for parameters.]
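The nine-fold threshold can be reproduced with a few lines of arithmetic. In this sketch (our illustration; the criterion that the excess must exceed two standard deviations of the binomial background count is our assumption, since the text does not state its noise criterion), the expected excess of miscarriages among the 1% exposed is compared with the statistical noise of the 10% background:

import math

N, BASE_RATE, EXPOSED_FRACTION = 5000, 0.10, 0.01
background_sd = math.sqrt(N * BASE_RATE * (1 - BASE_RATE))  # about 21.2 miscarriages

for x in range(1, 12):  # relative risk from 1 (no effect) to 11-fold
    excess = N * EXPOSED_FRACTION * BASE_RATE * (x - 1)  # expected extra miscarriages
    status = "detectable" if excess > 2 * background_sd else "hidden in noise"
    print(f"risk x{x:2d}: excess = {excess:5.1f} vs. noise SD {background_sd:.1f} -> {status}")

Under this criterion the excess (5 extra miscarriages per unit of relative risk) first exceeds twice the background standard deviation (about 42) at a relative risk of 10, matching the more-than-nine-fold elevation quoted above.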
A crucial feature of EDCs is their "stealth" nature. Several recent studies have demonstrated that the general population has been exposed to, and currently carries measurable levels of, tens to hundreds of EDCs (CDC 2003, Thornton et al. 2003). The subject has no knowledge of these exposures, so the classical tools of the epidemiologist (questionnaires, vital records, occupational histories, etc.) provide no information; these presuppose the subject's (or physician's or employer's) knowledge of exposure. Instead, it is necessary to obtain biological measures of exposure (biomarkers). Biomarker studies require that subjects agree to provide a biological sample (e.g. blood, urine or saliva) and give permission for its use in such a study. Obtaining subjects willing to do this, Institutional Review Boards willing to approve these protocols, and
funding for such studies is becoming increasingly challenging. An increasing number of studies are taking this approach; we describe a recent example below. But for many EDCs, the analytical chemistry that would permit body burden measurements has not yet been developed, and for many for which it has, the chemical analyses are very costly, limiting sample size and thus statistical power. Moreover, the rapid metabolic degradation of some compounds means that single exposure measurements, for example from cord blood at birth, may completely miss critical exposures during pregnancy. Whether distinguishing between exposure and non-exposure in cases and controls or estimating changes in risk as a function of increasing exposure, epidemiological studies traditionally assume monotonic dose-response curves: higher exposure levels are assumed to produce larger effects. Laboratory work with EDCs clearly shows, however, that non-monotonic curves are common. The use of inappropriate models, such as those that assume monotonicity of dose response and the absence of low-dose effects, will produce "false negatives". Epidemiology regularly compares exposed and unexposed populations, yet the global distribution of EDCs means that finding unexposed populations is virtually impossible. Classical epidemiological studies were designed primarily to examine isolated exposures, ignoring concurrent exposures or treating them as confounding factors, i.e., as "nuisance variables". This is inappropriate with EDCs, however, for two reasons. First, EDCs from similar and different chemical families can work through the same mechanism; they are thus substitutable. Unless the possibility of substitution is factored into the study by measuring multiple exposures and examining their joint risk, such mixtures will increase misclassification of exposure (an important source of conservative bias) and thus increase the likelihood of false negatives. Second, when mixtures of EDCs have been studied, they have been seen to interact, often in unpredictable ways, with subadditive, additive and even synergistic effects. It is difficult, if not impossible, to isolate exposure to a single pesticide, phenol or phthalate. As the EWG-Mt. Sinai study showed (Thornton et al. 2003), it is likely that all subjects are exposed to measurable amounts of large numbers of these chemicals, many of which act along common pathways. These factors pose a significant and currently unsolved challenge to epidemiology. Long time lags between exposure and effect, which may span decades or even generations as in the case of DES, further complicate the detection of impacts. For non-persistent compounds, all traces of the parent compound and its metabolites are likely to have disappeared. With persistent contaminants, degradation of the parent compound into different metabolites, some toxic, some not, and some working via different mechanisms (e.g., DDT is estrogenic while its metabolite DDE is antiandrogenic), will further complicate interpretation even in cases where the study has measured biomarkers of exposure. Aside from ecological studies, epidemiology is conducted at the individual level. Effects of classical exposures are usually binary outcomes in individuals, which are well defined and severe (cancer case vs. non-case, birth with limb reduction or not).
However, wildlife data suggest that changes from EDC exposure at the level of the individual are often subtle and difficult to classify (reduced fertility, poor semen quality, more feminine play behavior, genital dysmorphology). The effect of such changes at the population level, however, can be profound. As discussed above, trends in the mean values of several
outcomes have been reported, but other changes at the population level, which may be even more profound, are increases in population variance and an increasingly non-normal (non-Gaussian) population distribution. While EDCs manifestly present challenges to epidemiological studies, and are likely to have led to false negatives and underestimates of true risk, some progress is being made in developing approaches that acknowledge these pitfalls and employ methods explicitly designed to avoid them. One of us (Swan) has been involved in such a study, investigating reduced semen quality in relation to pesticide exposure. This study is somewhat unusual from an EDC perspective because it focuses on what appear to be adult-mediated rather than developmental impacts, which avoids the problem of long time lags noted above.

A study of semen quality in relation to pesticide exposure

After finding that fertile men from the general population of an agrarian area (Columbia, MO) had decreased semen quality (for example, only 58% of the number of moving sperm of men from Minneapolis, MN) (Swan et al. 2003a), pesticide exposure was examined as a cause of poor semen quality (Swan et al. 2003b). The authors measured urinary metabolites of eight non-persistent, current-use pesticides in two groups of men from mid-Missouri: men with all semen parameters (concentration, % normal morphology and % motile) below the median value (cases) and men in whom all semen parameters were within normal limits (controls). Pesticide metabolite levels were particularly elevated in cases compared to controls for the herbicides alachlor and atrazine and for the insecticide diazinon (measured as its metabolite 2-isopropoxy-4-methyl-pyrimidinol, or IMPY) (p-values for the Wilcoxon rank test = 0.0007, 0.012 and 0.0004 for alachlor, atrazine and IMPY, respectively). Men with higher levels of alachlor or IMPY were significantly more likely to be cases than men with low levels (OR = 30.0 and 16.7 for alachlor and IMPY, respectively), as were men with atrazine over the limit of detection (OR = 11.3). The number of pesticides found in the urine at elevated levels was significantly related to the risk of poor semen quality (being a case rather than a control). These associations were seen in the general population, who were not occupationally exposed. The three pesticides most strongly associated with semen quality are among the five measured most frequently in drinking water sources in the Midwest, and they are not removed by routine water treatment. Drinking water is therefore the most plausible route of exposure. These findings suggest that adult exposure to several widely used pesticides via drinking water is a likely cause of the reduced semen quality seen in fertile men from mid-Missouri. Subject responses to questions about home and occupational pesticide use were not related to semen quality, suggesting that the relevant pesticide exposure was unknown to the subject. Therefore, collection of urine samples and assays of pesticide metabolites in the subject's urine using highly sensitive GC/MS were required to document exposure to the low levels of pesticides that were related to semen quality.
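For readers unfamiliar with the odds ratios (ORs) quoted above, the sketch below shows how an OR and an approximate 95% confidence interval are computed from a 2x2 table of cases and controls split at an exposure cutoff. The counts are hypothetical placeholders chosen only to give an OR of similar magnitude; they are not data from the Swan et al. study:

import math

def odds_ratio_with_ci(a: int, b: int, c: int, d: int):
    """a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # Woolf's method on the log scale
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical table: 20 of 25 cases vs. 5 of 25 controls above the metabolite cutoff.
or_, lo, hi = odds_ratio_with_ci(20, 5, 5, 20)
print(f"OR = {or_:.1f}, 95% CI ({lo:.1f}, {hi:.1f})")  # OR = 16.0

The wide confidence interval that such small cell counts produce is itself instructive: with costly chemical assays limiting sample size, even strong associations are estimated imprecisely.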
In addition, effects were seen at the level of the individual, with likely more profound effects at the population level. The average decrease in sperm concentration in fertile men living in mid-Missouri relative to men living in Minneapolis, MN is 40 million sperm/ml. While the median sperm concentration for Missouri men (54 million/ml) was within normal limits, the sperm count for about 40% of these men fell below 40 million/ml, the point at which fertility declines significantly (Bonde et al. 1998).
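As a consistency check on the two figures just quoted, the following sketch asks what spread a simple normal model (an admittedly crude assumption, since sperm concentrations are strongly right-skewed) would require for roughly 40% of men with a median of 54 million/ml to fall below 40 million/ml:

from statistics import NormalDist

median, threshold, frac_below = 54.0, 40.0, 0.40

# Solve P(X < threshold) = frac_below for the standard deviation.
z = NormalDist().inv_cdf(frac_below)  # about -0.253
sigma = (threshold - median) / z      # about 55 million/ml
print(f"implied standard deviation: {sigma:.0f} million/ml")
print(f"check: P(X < 40) = {NormalDist(median, sigma).cdf(threshold):.2f}")

The implied spread is larger than the median itself, a reminder that in a highly variable trait a modest downward shift in the center of the distribution pushes a disproportionately large fraction of the population below a fixed fertility threshold.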
SUMMARY
In this paper we have outlined evidence from a diversity of sources indicating that a variety of manmade compounds can interfere with sexual and brain development, resulting in reduced fertility and altered brain function and behavior in wildlife, laboratory animals and humans. Four summary points emerge: (1) contaminants at low levels can interfere with gene expression; (2) wildlife, laboratory animal and human effects are strongly concordant; (3) the available data are not consistent with several key assumptions traditionally used to guide regulatory science and regulations; and (4) traditional epidemiology will have great difficulty establishing causation of effects of these chemicals in humans.

IMPLICATIONS

The concordance of animal and human data, where the latter are available, indicates that when human data are not available, health standards should be guided by animal research on a precautionary basis. It will be decades, at best, before epidemiological science is capable of thoroughly documenting the health impacts of even a small number of the contaminants to which humans are exposed daily. Zichichi (1993) has pointed out that many decisions are made about technology without an adequate scientific basis on which to assess costs and benefits. Endocrine disruption clearly fits that model, with toxicological data on risk emerging decades after exposures began. The regulatory system can and should serve public health more effectively. Many of the chemicals of concern were produced to improve human welfare and provide economic benefit (for example, to increase crop production or to protect food from metal in food cans). This new science, however, is now revealing many unexpected adverse consequences, resulting from the ability of very low levels of these compounds to interfere with gene activation. Most of the chemicals now implicated were subject to little, if any, rigorous testing. Many, tested using criteria now known to miss important risks, were found "safe" and allowed to enter the marketplace. We are discovering their "stealth" characteristics only long after widespread exposure has occurred. Because of this "stealth" nature, we are currently unprepared to detect the effects of EDCs or defend against them. Many are persistent; they cannot be removed; they are globally distributed through our atmosphere, our seas and wildlife. Others, while not persistent, should be treated as persistent because of their chronic and ubiquitous use. They act at a population level, and many have the potential, individually or cumulatively, to affect future generations, for example by decreasing fertility, feminizing males or reducing intelligence. All of these endpoints have been produced in the laboratory and many have been observed in wildlife. New data, which must be confirmed by further study, suggest that comparable changes are being produced in human populations as well. Precaution dictates that we cannot wait for "conclusive" evidence of harm to human populations before taking action.
Chemical corporations and the government agencies charged with regulating chemicals in the environment (air, soil, water and food) assure the public that these chemicals are safe. Because of an absence of data concerning risk (often confused with evidence of an absence of risk) and the use of conservative models no longer supported by recent data, the public remains ignorant of the risk potential of the vast majority of chemicals. The public is routinely informed that these chemicals have been tested, that there are studies demonstrating the absence of risk, and that regulatory agencies adequately protect public health. Clearly, significant changes are needed to bring current regulatory practices into conformity with the new scientific information. We propose that testing for health effects at doses within the range of human exposure (currently not done), with respect to long-latency effects of developmental exposure throughout the lifespan (also currently not done), be required prior to the introduction of any chemical intended for use in commerce.
REFERENCES
1. Ankley, G.T., Jensen, K.M., Makynen, E.A., Kahl, M.D., Korte, J.J., Hornung, M.W., Henry, T.R., Denney, J.S., Leino, R.L., Wilson, V.S., Cardon, M.C., Hartig, P.E. and Gray, L.E., Jr. (2003). Effects of the androgenic growth promoter 17-β-trenbolone on fecundity and reproductive endocrinology of the fathead minnow. Environ. Toxicol. Chem. 22.
2. Bonde, J.P.E., Ernst, E., Jensen, T.K., Hjollund, N.H.I., Kolstad, H., Henriksen, T.B., Scheike, T., Giwercman, A., Olsen, J. and Skakkebaek, N.E. (1998). Relation between semen quality and fertility: a population-based study of 430 first-pregnancy planners. Lancet 352:1172-1177.
3. Centers for Disease Control and Prevention (2003). Second National Report on Human Exposure to Environmental Chemicals. NCEH Pub. No. 02-0716.
4. Davis, W.P. and Bortone, S.A. (1992). Effects of kraft mill effluent on the sexuality of fishes: An environmental early warning? In Chemically-induced Alterations in Sexual and Functional Development: The Wildlife/Human Connection (T. Colborn and C. Clement, eds), p. 113-127. Princeton: Princeton Sci. Publ. Co., Inc.
5. Colborn, T. and Clement, C., editors (1992). Chemically-induced Alterations in Sexual and Functional Development: The Wildlife/Human Connection. Adv. Mod. Environ. Toxicol., pp. 403. Edited by M.A. Mehlman. Princeton: Princeton Sci. Publ. Co., Inc.
6. Colborn, T., Dumanoski, D. and Myers, J.P. (1996). Our Stolen Future. Dutton, New York.
7. Fox, J.E., Starcevic, M., Kow, K.Y., Burow, M.E. and McLachlan, J.A. (2001). Nitrogen fixation: Endocrine disrupters and flavonoid signalling. Nature 413:128-129.
8. Guillette, L.J., Jr. and Crain, D.A., editors (2000). Endocrine Disrupting Contaminants: An Evolutionary Perspective, pp. 355. Philadelphia: Taylor and Francis, Inc.
9. Guillette, L.J., Jr., Crain, D.A., Rooney, A.A. and Pickford, D.B. (1995). Organization versus activation: The role of endocrine-disrupting contaminants (EDCs) during embryonic development in wildlife. Environ. Health Perspec. 103 (Suppl. 7):157-164.
10. Guillette, L.J., Jr., Crain, D.A., Gunderson, M., Kook, S., Milnes, M.R., Orlando, E.F., Rooney, A.A. and Woodward, A.R. (2000). Alligators and endocrine disrupting contaminants: A current perspective. Amer. Zool. 40:438-452.
11. Guillette, L.J., Jr. and Gunderson, M.P. (2001). Alterations in the development of the reproductive and endocrine systems of wildlife exposed to endocrine disrupting contaminants. Reproduction 122:857-864.
12. Gupta, C. (2000). Reproductive malformation of the male offspring following maternal exposure to estrogenic chemicals. Proc. Soc. Exp. Biol. Med. 224:61-68.
13. Hayes, T.B., Collins, A., Lee, M., Mendoza, M., Noriega, N., Stuart, A.A. and Vonk, A. (2002). Hermaphroditic, demasculinized frogs after exposure to the herbicide atrazine at low ecologically relevant doses. Proceedings of the National Academy of Sciences (US) 99:5476-5480.
14. Kaltreider, R.C., Davis, A.M., Lariviere, J.P. and Hamilton, J.W. (2001). Arsenic alters the function of the glucocorticoid receptor as a transcription factor. Environmental Health Perspectives 109:245-251.
15. Kolpin, D.W., Furlong, E.T., Meyer, M.T., Thurman, E.M., Zaugg, S.D., Barber, L.B. and Buxton, H.T. (2002). Pharmaceuticals, hormones, and other organic wastewater contaminants in U.S. streams, 1999-2000: a national reconnaissance. Environmental Science and Technology 36:1202-1211.
16. Koopman-Esseboom, C., Weisglas-Kuperus, N., de Ridder, M.A.J., Van der Paauw, C.G., Tuinstra, L.G.M. and Sauer, P.J.J. (1996). Effects of polychlorinated biphenyl/dioxin exposure and feeding type on infants' mental and psychomotor development. Pediatrics 97(5):700-706.
17. Matter, J.M., Crain, D.A., Sills-McMurry, C., Pickford, D.B., Rainwater, T.R., Reynolds, K.D., Rooney, A.A., Dickerson, R.L. and Guillette, L.J., Jr. (1998). Effects of endocrine-disrupting contaminants in reptiles: Alligators. In Principles and Processes for Evaluating Endocrine Disruption in Wildlife (R. Kendall, R. Dickerson, J. Giesy and W. Suk, eds), p. 267-289. Pensacola, FL: SETAC Press.
18. McLachlan, J.A. (2001). Environmental signaling: What embryos and evolution teach us about endocrine disrupting chemicals. Endocrine Rev. 22:319-341.
19. McMaster, M.E., Munkittrick, K.R., Van Der Kraak, G.J., Flett, P.L. and Servos, M.R. (1996). Detection of steroid hormone disruptions associated with pulp mill effluent using artificial exposures of goldfish. In Environmental Fate and Effects of Pulp and Paper Mill Effluents (M.R. Servos, K.R. Munkittrick, J.H. Carey and G.J. Van Der Kraak, eds), p. 425-437. Delray Beach, FL: St. Lucie Press.
20. Munkittrick, K.R., Portt, C.B., Van Der Kraak, G.J., Smith, I.R. and Rokosh, D.A. (1991). Impact of bleached kraft mill effluent on population characteristics, liver MFO activity, and serum steroids of the Lake Superior white sucker (Catostomus commersoni) population. Can. J. Fish. Aquat. Sci. 48:1-10.
21. Myers, J.P. (2003). http://www.OurStolenFuture.org
22. Orlando, E.F., Davis, W. and Guillette, L.J., Jr. (2002). Aromatase activity in the ovary and brain of the eastern mosquitofish (Gambusia holbrooki) exposed to paper mill effluent. Environ. Health Perspec. 110 (Suppl. 3):429-433.
23. Orlando, E.F., Kolok, A., Binzcik, G., Gates, J., Horton, M., Lambright, C., Gray, L.E. and Guillette, L.J., Jr. (2003). Endocrine disrupting effects of cattle feedlot effluent on an aquatic sentinel species, the fathead minnow. Environ. Health Perspec., submitted.
24. Palanza, P., Morellini, F., Parmigiani, S. and vom Saal, F. (2002). Ethological methods to assess the impact of estrogenic endocrine disruptors on behavior: a study with methoxychlor. Neurotoxicol. Teratol. 24:56-67.
25. Parmigiani, S., Palanza, P. and vom Saal, F.S. (1998). Ethotoxicology: an evolutionary approach to the study of environmental endocrine-disrupting chemicals. Toxicol. Ind. Health 14:333-339.
26. Richter, C.A., Ruhlen, R.L., Welshons, W.V. and vom Saal, F.S. (2003). Androgen receptor mRNA is upregulated by estrogen in mouse prostate primary cell culture. Toxicological Sciences 72(S-1):238.
27. Sumpter, J.P. and Jobling, S. (1995). Vitellogenesis as a biomarker for estrogenic contamination of the aquatic environment. Environmental Health Perspectives 103 (Suppl. 7):173-178.
28. Swan, S.H., Brazil, C., Drobnis, E.Z., Liu, F., Kruse, R.L., Hatch, M., Redmon, J.B., Wang, C., Overstreet, J.W. and the Study for Future Families Research Group (2003a). Geographic differences in semen quality of fertile US males. Environmental Health Perspectives 111:414-420.
29. Swan, S.H., Kruse, R.L., Liu, F., Barr, D.B., Drobnis, E.Z., Redmon, J.B., Wang, C., Brazil, C., Overstreet, J.W. and the Study for Future Families Research Group (2003b). Semen quality in relation to biomarkers of pesticide exposure. Environmental Health Perspectives 111:1478-1484.
30. Thornton, J.W., McCally, M. and Houlihan, J. (2003). Biomonitoring of industrial pollutants: health and policy implications of the chemical body burden. Public Health Reports 117:315-323.
31. Toft, G., Edwards, T., Baatrup, E. and Guillette, L.J., Jr. (2003). Disturbed sexual characteristics in male mosquitofish (Gambusia holbrooki) from a lake contaminated with endocrine disrupters. Environ. Health Perspec. 111:695-701.
32. Tyler, C.R. and Routledge, E. (1998). Oestrogenic effects in fish in English rivers with evidence for their causation. Pure and Applied Chemistry 70(9):1795-1804.
33. Vreugdenhil, H.J.I., Slijper, F.M.E., Mulder, P.G.H. and Weisglas-Kuperus, N. (2002). Effects of perinatal exposure to PCBs and dioxins on play behavior in Dutch children at school age. Environmental Health Perspectives 110:A593-A598.
34. Weisglas-Kuperus, N., Patandin, S., Berbers, G.A.M., Sas, T.C.J., Mulder, P.G.H., Sauer, P.J.J. and Hooijkaas, H. (2000). Immunologic effects of background exposure to polychlorinated biphenyls and dioxins in Dutch preschool children. Environmental Health Perspectives 108:1203-1207.
35. Welshons, W.V., Thayer, K.A., Judy, B.M., Taylor, J.A., Curran, E.M. and vom Saal, F.S. (2003). Large effects from small exposures. I. Mechanisms for endocrine disrupting chemicals with estrogenic activity. Environmental Health Perspectives 111:994-1006.
36. White, R., Jobling, S., Hoare, S.A., Sumpter, J.P. and Parker, M.G. (1994). Environmentally persistent alkylphenolic compounds are estrogenic. Endocrinology 135:175-182.
37. Zichichi, A. (1993). Scienza ed emergenze planetarie. Biblioteca Universale Rizzoli.
4. POLLUTION: LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL
CONTAINMENT OF LEGACY WASTES DURING STEWARDSHIP

JAMES H. CLARKE
Vanderbilt University, Nashville, USA

LORNE G. EVERETT
Stone and Webster Management Consultants, Inc., Santa Barbara, USA

STEPHEN J. KOWALL
Idaho National Engineering and Environmental Laboratory, Idaho Falls, USA

LEGACY WASTE ISSUES

Past waste management practices at industrial and governmental sites have resulted in the need to manage very large volumes of contaminated soil and buried wastes. In many cases, treatment technologies to destroy the hazardous and radioactive constituents are simply not available, nor are they expected for metals and pseudo-metals. The U.S. Department of Energy has estimated that, at over 100 of its former nuclear weapons production facilities, it will not be possible to restore the environment to a degree that would allow unrestricted access. Consequently, containment and control technologies will be employed to isolate the contaminants and prevent migration to potential receptors. The time during which the residual contaminants could pose a threat to human health and the environment may be very long (100s to 1000s of years). The use of engineered barriers for long-term isolation in the United States is complicated by several factors, not the least of which are: (1) design standards are prescriptive and embodied in the Federal environmental regulations, so alternative designs must be shown to be "equivalent"; and (2) our performance experience with the currently favored designs, while good and encouraging for the most part, is still only a few decades at best. The need for effective long-term isolation poses several challenges. The robustness of the containment system itself is, of course, of concern. However, it is unrealistic to expect even the very best approaches to endure for long times without maintenance and, eventually, intervention. Consequently, long-term system performance also requires effective approaches to long-term monitoring, maintenance and institutional controls. A total system approach is needed that integrates monitoring and other requirements with the engineered system itself to ensure long-term performance.

THE EVOLUTION OF CONTAINMENT TECHNOLOGY AND DESIGN APPROACHES

Containment technologies have historically been used to provide in-situ isolation of existing contaminated soils and buried wastes, through the use of surface barriers (covers) and subsurface barriers (walls and floors), and to provide engineered containment (landfills, disposal cells) for the isolation of waste materials and contaminated media that are placed into these systems. In-situ installation of subsurface barriers poses additional challenges over those seen in new facilities that are built with the subsurface barriers in
place before the waste materials are received. Also, engineered landfills and disposal cells can be designed to accommodate leachate collection and monitoring, whereas these features must be retrofitted into in-situ containment systems. Land-based surface contamination containment and control systems have evolved from rather simple systems, employing in many cases a modest soil cover designed to prevent rain water from contacting the waste and transporting the mobile constituents to deeper soils and groundwater, to very sophisticated multi-function systems featuring the use of synthetic as well as natural materials. The major system performance indicator has been, and most probably will continue to be, the effective hydraulic conductivity of the primary barrier, whose function is to prevent the passage of water into the contaminated material that is being isolated. This barrier may be a major component of a cover or cap that is placed over the material, or a subsurface barrier that may be needed to manage the potential contact of groundwater with the contaminated materials and wastes.
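A back-of-envelope Darcy calculation shows why effective hydraulic conductivity dominates performance assessments. The sketch below (ours; the conductivities are typical textbook magnitudes assumed for illustration, and the unit hydraulic gradient is a common simplifying assumption, not a site value) converts barrier conductivity into annual percolation depth via q = K x i:

SECONDS_PER_YEAR = 3.156e7

def annual_percolation_mm(k_m_per_s: float, gradient: float = 1.0) -> float:
    """Darcy flux q = K * i, converted from m/s to mm of water per year."""
    return k_m_per_s * gradient * SECONDS_PER_YEAR * 1000.0

for label, k in [("intact compacted clay", 1e-9),
                 ("degraded barrier (cracked clay)", 1e-7)]:
    print(f"{label:32s} K = {k:.0e} m/s -> {annual_percolation_mm(k):7.1f} mm/yr")

An intact barrier at 1e-9 m/s passes on the order of 30 mm of water per year under these assumptions, while two orders of magnitude of degradation multiplies that a hundred-fold, which is why barrier integrity, not just as-built conductivity, drives long-term performance.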
As experience was gained concerning system performance, and as knowledge increased with respect to the major natural processes affecting performance, cover system designs became increasingly complex. Synthetic materials believed to be durable under expected environmental conditions for at least a few hundred years were added, sometimes in place of, at other times in addition to, and more recently in combination with, the natural soil barrier (e.g. the geosynthetic clay liner). Cover systems were modified to incorporate additional layers above the primary barrier to prevent erosion and biointrusion and to provide drainage of any infiltrating rainwater away from the primary barrier. Further design considerations were also needed to minimize the effects of freeze-thaw cycles, seismic events, and barrier/waste and barrier/groundwater incompatibility. Consequently, the engineered cover that is needed to comply with the prescriptive regulatory design requirements has evolved from a relatively simple cap, consisting of a few centimeters of compacted natural soils, into a very expensive, multi-layer, multi-component system, several meters high, requiring extensive construction quality assurance and quality control measures. Subsurface barriers have evolved in a similar manner as knowledge and experience have been gained.

THE NEED TO FORECAST AND ACCOMMODATE ENVIRONMENTAL CHANGE

Implicit in the historical approach to engineered barrier design has been the belief that we can design and construct an isolation system that is robust and protective over long periods of time with minimal monitoring and maintenance requirements. The systems that have evolved, however, require that natural processes and environmental changes be resisted in order for the system to continue to perform effectively. While this is certainly manageable over relatively short time horizons (10s of years), longer time horizons (100s to 1000s of years) pose major challenges from the standpoint not only of available resources but of our commitment to ensure that the needed monitoring and maintenance continue. Recently there has been a great deal of interest in the use of alternative designs that work with (as opposed to against) natural processes and that incorporate our best forecasts of future environmental conditions, e.g. climate change and ecological succession. Ideally, we could construct a containment system that would require no maintenance and that would perform well even after being totally overtaken by its surroundings (i.e. design for the equilibrium state). Alternative cover systems that show great promise, especially in arid and semi-arid environments, feature the use of vegetation together with natural soils in a system that removes potentially infiltrating rain water through evapo-transpiration during the growing season and stores rain water above the waste (for future removal) during the non-growing seasons. A capillary barrier design can also be combined with the evapo-transpiration approach. Performance data for both of these approaches (and a third as well, the anisotropic design) are very promising, and these systems can be provided at significantly reduced costs compared to the prescriptive design required by the regulations. More humid environments continue to pose a challenge to alternative designs.
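The logic of an evapo-transpiration cover can be captured in a toy monthly water balance. In this sketch (ours, with invented climate numbers for a generic semi-arid site; the 300 mm storage capacity is likewise an assumed placeholder), the soil layer stores infiltrating water up to a fixed capacity, plants withdraw it during the growing season, and only water in excess of storage percolates toward the waste:

STORAGE_CAPACITY_MM = 300.0  # assumed water-holding capacity of the soil layer

precip = [60, 55, 50, 40, 25, 10, 5, 5, 15, 35, 50, 60]        # mm/month (invented)
et     = [10, 15, 40, 80, 120, 150, 160, 140, 90, 40, 15, 10]  # mm/month (invented)

storage, percolation = 150.0, 0.0  # start the year with the store half full
for p, e in zip(precip, et):
    storage = max(storage + p - e, 0.0)   # evapo-transpiration cannot overdraw the store
    if storage > STORAGE_CAPACITY_MM:     # overflow percolates toward the waste
        percolation += storage - STORAGE_CAPACITY_MM
        storage = STORAGE_CAPACITY_MM

print(f"annual percolation: {percolation:.0f} mm")

With these semi-arid numbers the store never overflows and annual percolation is zero; rerunning the sketch with precipitation exceeding evapo-transpiration for much of the year shows why more humid environments remain difficult for alternative designs.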
CONTAINMENT SYSTEM MONITORING APPROACHES

Historically, containment system performance verification approaches have also been regulatory-driven and have focused almost exclusively on groundwater monitoring. While this approach might be sufficiently protective for relatively small containment systems close to the water table, groundwater monitoring alone cannot provide an early warning of a potential release; rather, information is gained only after system failure has occurred. At this time there are no federal environmental requirements for in-system monitoring of the cover, nor for monitoring of the vadose zone below the containment system. Both in-system and vadose zone monitoring may have a great deal of merit, however, and should be considered. While federal regulations are nonexistent, some states have extended monitoring requirements beyond groundwater monitoring. Several approaches and technologies are available to provide monitoring of both the cover (moisture content, subsidence) and the underlying vadose zone (various types of lysimeters). In-system monitors for covers are being evaluated in a few cases and have recently been retrofitted into one existing system at the DOE facility in Fernald, Ohio. The lack of a regulatory driver, however, has clearly been an important deterrent to the use of these other monitoring approaches, especially at private sector sites.

A SCIENCE AND TECHNOLOGY ROADMAP FOR LONG-TERM STEWARDSHIP OF LEGACY WASTE SITES

Recently, the Idaho National Engineering and Environmental Laboratory, with support from the Department of Energy Office of Environmental Management, developed a "Science and Technology Roadmap for Long-Term Stewardship (S&T Roadmap)". Participants in the project included scientists and engineers with a diverse array of expertise and experience in contaminated site investigation and remediation, representing government, industry and academia. An overarching conclusion was that long-term stewardship (LTS) is best provided through a system approach. The major system components are the engineered containment subsystem employed to provide contamination isolation and control, monitoring of the containment subsystem and the environment, communication within and beyond the LTS system, and management of the LTS system. From a containment and monitoring perspective, this approach identified the following needed capabilities and capability enhancements:

SYSTEM FUNCTION - CONTAINMENT OF RESIDUAL HAZARDS

Key Capability 1. Site Conceptualization and Modeling Tools: improve geologic-hydrologic-biological-chemical-thermal (GHBCT) conceptual modeling for long-term forecasting; provide tools for long-term forecasting of environmental conditions relevant to predicted end-states; provide tools for modeling the community at risk; conceptualize and predict containment/control system performance, including potential failure modes and levels of failure.

Key Capability 2. Improved Contamination Containment and Control Systems: engineer the GHBCT environment to limit contaminant toxicity and mobility; design, build and operate alternative (next generation) containment and control systems.

SYSTEM FUNCTION - MONITOR THE SITE AND THE LTS SYSTEM

Key Capability 3. Sensors and Sensor Systems for Site Monitoring: identify contaminant monitoring needs for all media of potential transport or exposure and fill sensor technology gaps where monitoring solutions are needed; establish site-specific parameters for environmental exposure routes and for both occupational (on-site) and non-occupational (community at risk) human routes of exposure; improve sensors and sensor systems for monitoring active and passive safety systems.

The roadmap project identified capabilities and capability enhancements that are necessary for the communication and management system functions as well.

SUMMARY

Our current challenges are:
- To build upon the work that has been done to contain residual contamination that must be isolated and left in place (at least for now), recognizing the need to balance the desire for a permanent system with low maintenance needs against the realization that this may not be possible over very long time horizons; and
- To integrate monitoring capabilities with containment system design in a site-specific environment, so that we can decrease our reliance on groundwater monitoring as the means of containment system performance verification.
We are encouraged by the short-term performance that we are seeing with the alternative cover systems that are being demonstrated and installed. Ideally, we will move increasingly to engineered barrier systems that benefit from improved forecasting of those future environmental conditions critical to long-term performance and that better accommodate natural processes and environmental change.

REFERENCES
1. Long-Term Institutional Management of U.S. Department of Energy Legacy Waste Sites, National Research Council, National Academy Press, 2000.
2. Long-Term Stewardship Science and Technology Roadmap (Draft), Idaho National Engineering and Environmental Laboratory, DOE/ID-10926, October 2002.
3. Performance and Verification of Barriers Through Prediction and Monitoring, C. Chien, A. Gatchett and G. Chamberlain (eds), in press.
4. Handbook of Vadose Zone Characterization & Monitoring, L.G. Wilson, L.G. Everett and S.J. Cullen, Lewis Publishers, 1995.
PUBLIC INVOLVEMENT AND COMMUNICATION IN THE LONG-TERM MANAGEMENT OF U.S. NUCLEAR WASTE SITES

WILLIAM R. FREUDENBURG
Dehlsen Professor of Environment and Society, Environmental Studies Program, University of California, Santa Barbara, USA

It is now clear to most observers that energetic public involvement efforts will need to be an integral component of the long-term management of the many sites in the U.S. that are now expected to require extremely long periods of institutional management, often extending centuries or millennia into the future. Not only are such time periods almost unprecedented in the history of fallible human institutions, but the difficulties of such efforts in the U.S. will be further amplified by the number, size and diversity of the contaminated sites, as well as by the enduring suspicions created by past failures. Although it is not clear whether or not these difficulties can be overcome, it is clear that they must be. Fortunately, recent findings in the scientific literature suggest that, if the challenges are taken quite seriously, well-designed and active collaboration with citizens who live near the contaminated sites may improve not just the sites' relationships with so-called "stakeholders," but also their near- and long-term management of risks to human health and the environment. In a remark that a recent report from the U.S. National Academy of Sciences/National Research Council (2000) called "one of the most prescient comments of the nuclear age," Alvin Weinberg (1972) noted, "We nuclear people have made a Faustian bargain with society. On the one hand, we offer, in the catalytic nuclear burner, an inexhaustible source of energy... but the price that we demand of society for this magical energy source is both a vigilance and a longevity of our social institutions that we are quite unaccustomed to." As is now clear, Weinberg's "vigilance and longevity" will be needed not in a vacuum, but in the real world, and a real world where there is much less deference to institutions than there was at the time of his original remarks. Perhaps partly for this reason, it is now relatively common to hear that one of the requirements for long-term institutional management of nuclear wastes and contaminated nuclear sites will be ongoing and active programs of public involvement. Such observations are certainly accurate, as far as they go, but experience already shows that the reality of institutional management is likely to prove more complex than is being foreseen by many, even today. Rather than attempting to summarize all of these complexities, this paper will focus on some of the key challenges that are particularly noteworthy in the U.S. context, emphasizing the main lessons and opportunities that have been identified in the independent analyses done to date.

THE U.S. CONTEXT

Perhaps the most important issue to be understood when considering the management of nuclear materials and sites in the U.S. is the issue of magnitude. In
comparison with most nations, the U.S. is a huge and diverse nation, and it has a huge and diverse range of sites. In terms of geographic area, just the 48 contiguous U.S. states stretch across a distance of more than 3,000 miles, roughly 5,000 kilometers, from coast to coast, and the 49th state of Alaska is itself large enough to double that distance. In terms of the existing magnitude of the problem, it needs to be recognized not only that nuclear weapons research and production activities were carried out at an urgent pace, largely during an era when environmental safeguards were not as strong as they are today and when the relevant officials were far more casual or optimistic about their ability to keep environmental problems within "acceptable" levels, but also that the activities were funded by Congress. Congressional funding wound up meaning that, inevitably, the political calculus of Senators and members of Congress was superimposed on top of technical and military considerations, leading to a proliferation of sites and to patterns of control and sequencing that future archeologists are likely to find utterly baffling. With one Senator after another insisting that "logic" (or at least political logic) required the next major facility to be built in a new state or Congressional district, the U.S. ultimately found itself with a number, range and diversity of contaminated sites that sometimes proves baffling even for present-day observers. To be more specific, according to one recent official count, used by the National Academy of Sciences/National Research Council (NAS/NRC) and taken from the U.S. Department of Energy, or DOE (the agency that actually held responsibility for most nuclear weapons research/production activities), there are 144 "waste sites" in the DOE weapons complex, and of these 144 sites, at least 109 have contamination problems that simply cannot be "cleaned up" to levels that permit normal public access with currently known technologies, even with relatively high levels of funding. In the U.S., accordingly, there are over 100 sites in the weapons complex alone that will require precisely the kind of extremely long-term "vigilance and longevity" noted by Weinberg. Each of those sites is to some degree unique, and this is all the more the case for the humans who live nearby. One of my own precepts, developed from having studied any number of human communities, is that when you have met one community, you have only met one community. In this context, there is thus a need to emphasize one of the most commonly offered warnings about public involvement programs, namely, that even though we often refer to "involving the public" (singular), there is a need to recognize that we will be dealing with "publics" (multiple), even at any one site, and all the more so across sites. More broadly, it also needs to be recognized that any overall observations offered in this short paper are necessarily simplified ones. With these warnings in mind, however, it is possible to offer two main observations about the types of pressure that are likely to be represented by "public involvement" at many, if not most, of the sites in the United States. The first overall observation is that it is actually quite rare to encounter the specific form of pressure that is most often feared by engineers of my acquaintance, namely the pressure to make a site "perfectly safe."
Most of the exceptions to this statement, moreover, come from the sites where existing contamination problems are sufficiently modest, and sufficiently contained, so that it might be at least in principle feasible to dig
up virtually all of the contaminated materials and ship them "someplace else". At most of the other contaminated sites in the U.S. that I have visited, including all of the seriously contaminated ones, most of the nearby citizens who have been paying attention to the contamination problems are acutely aware of the magnitude of the problem (and of the likely costs of remediation), and many of them are just as bothered by the extremely high costs involved as are the best engineers. The second overall observation is that, as a useful simplification, public involvement at any given site is likely to include (at least) two main groups, and two main forms of pressure. One group will be focused largely on the ways in which they, and their region, can profit from ongoing activities at the site, particularly over the relatively short term (i.e., the next 2-20 years); the other will have greater concerns about how the site can be managed safely, minimizing risks to human health and the environment, over the medium to long term (i.e., the next 10-10,000 years). To continue with the pattern of useful simplifications, the first group tends to be relatively small at most sites, but to have had a long history of working profitably with DOE and the federal government, and to be relatively well connected politically, sometimes including connections to the very Senators and members of Congress who are still given credit for having brought the site to that area and/or for assuring that the site would not be overlooked in Congressional budget bills. Although this is by no means always the case, members of this first group are often key actors in lobbying for "reindustrialization" or the reuse of former weapons-making facilities for other purposes, particularly at certain key sites, such as Oak Ridge in Tennessee. The second group, on the other hand, tends to be far larger but, perhaps understandably, also far more suspicious of DOE; in another pattern that may also be understandable, the members of the two groups have often learned to be highly suspicious of one another as well. This fact leads directly back to the earlier point about involving "the publics" (plural). As DOE and other U.S. agencies are likely to continue to learn in the future, it is quite unwise to assume that responding to the interests of any one group (especially one that is relatively small and unrepresentative) can safely be confused with responding to the broader public (publics), even at any given site.

MAIN "LESSONS" TO DATE

Based on independent analyses by groups such as the National Academy of Sciences/National Research Council (see for example NAS/NRC 2000; Dunlap et al. 1993), there are four main lessons worthy of attention in the context of this paper. The first is that maintaining effective public involvement with the second and larger group - those persons who do not have a long history of having worked profitably with DOE and with contaminated sites - is likely to be the larger and more important challenge for public involvement programs. The second is that success is likely to require more than simply "good manners," or what an observer in another controversy called "sending a gorilla to charm school." Even if a better-behaved "gorilla" made it more pleasant for him "to be in the same room," as he noted, "the past behavior of the beast" made him wonder whether the improved manners reflected a genuine change or merely something more cosmetic (see Freudenburg and Gramling 1994). The third lesson, by
contrast, is that the best way to maintain effective (and constructive) public involvement with the second and larger group is by dealing seriously with their concerns. The fourth and final lesson, fortunately, is that this very effort - the effort to deal seriously with the concerns of the broader public(s) - may well offer not just important challenges, but also important opportunities. Those opportunities are likely to prove important, in part because experience to date has been more than a little humbling. Despite the millions of pages of official documents now in existence, a surprisingly high fraction of what we have learned about contamination at sites now undergoing remediation has come from sources that are neither official nor documents, but rather from oral histories - often from low-level workers who simply happen to remember just what was buried where. This fact is scarcely comforting; human memories are fallible even under the best circumstances, and experience also shows that some of the people who might have been able to remember at one time may no longer be able to remember, and may no longer even be alive today. At the same time, however, this fact is helpful in another respect: it alerts us to the importance of emphasizing new ways of combining "institutional instruments" - including humans as well as hardware - that are not so sensitive to changing and sometimes politically derived incentives, such as the faithful reporting of the party line. Thus, just as it is important to develop new forms of (physical) instruments that can reliably indicate conditions and problems over very long periods of time, it is also important to search for institutional arrangements that offer greater promise of reliable and robust performance over the very long term. Based on the limited experience available to date (see for example the discussions in Chess et al. 1992; Clarke 1992; Shrader-Frechette 1993; NAS/NRC 2000), some of the most promising possibilities for improving organizational performance - not just so-called "risk communication," but actual risk management - are those that derive from what the technical literature sometimes calls increased "institutional permeability." In simpler terms, there appears to be great potential in taking greater steps to incorporate into management institutions some of the kinds of people who have often been excluded from those institutions in the past, specifically including those having strong interests in increased safety. (One of the prototypical examples involves local mothers of small children who live in the area, both now and in the future.) Just as it is important to do better at "designing with nature," in short, it is important to do better at "designing with human nature." Developing smarter ways of working with those who have often been critics of past management practices may well prove to be one of the most promising ways of improving our own performance in the future.

REFERENCES
1. Chess, Caron, A. Saville, Michal Tamuz et al. 1992. "The Organizational Links Between Risk Communication and Risk Management: The Case of Sybron Chemicals Inc." Risk Analysis 12 (3): 431-38.
2. Clarke, Lee. 1992. "The Disqualification Heuristic: When do Organizations Misperceive Risk?" Presented at the annual meeting of the American Sociological Association, Pittsburgh, August.
3. Dunlap, Riley E., Michael E. Kraft and Eugene A. Rosa. 1993. Public Reactions to Nuclear Waste: Citizens' Views of Repository Siting. Durham, NC: Duke Univ. Press.
4. Freudenburg, William R. and Robert Gramling. 1994. Oil in Troubled Waters: Perceptions, Politics, and the Battle over Offshore Oil. Albany: State University of New York (SUNY) Press.
5. National Research Council. 2000. Long-Term Institutional Management of U.S. Department of Energy Legacy Waste Sites. Washington, D.C.: National Academy Press, National Academy of Sciences.
6. Shrader-Frechette, Kristin. 1993. "Risk Methodology and Institutional Bias." Research in Social Problems and Public Policy 5: 207-223.
A EUROPEAN PERSPECTIVE ON STAKEHOLDER INVOLVEMENT IN NUCLEAR WASTE MANAGEMENT

ALLAN G. DUNCAN
NIREX Waste Management Advisory Committee, Oxon, UK

THE NUCLEAR LEGACY IN EUROPE

The countries of Europe include those that were involved in the earliest developments of nuclear technology in the 1940s and 1950s. Those countries now have a wide range of redundant plants and equipment associated originally with the development of nuclear power and, in some cases, with the development of nuclear weapons. They also have accumulations of the radioactive waste arising from the operation and decommissioning of such plants and equipment. Originally, much of this waste was simply stored, in situations that paid little heed to environmental issues or, indeed, to the need for its eventual disposal. In the case of nuclear weapons development in the immediate post-World War II years, of course, the contemporary priority was national defence and not environmental protection, which has been a major consideration for many years now. The ownership of this particular element of the legacy usually lies with national governments. These countries and some of the other countries in Europe have, since then, introduced commercial nuclear power programmes. These programmes have been subject to tight environmental regulation from the start but, nevertheless, have also resulted in the creation of wastes for which there is no current disposal route, other than for relatively low-level wastes. This element of the nuclear legacy generally belongs to the relevant utilities but is progressively, throughout Europe, being transferred to national organisations created for its safe, long-term management. There are also countries that have no nuclear power programmes and whose radioactive wastes arise only from medical, industrial and research activities. It will be understood, therefore, that the extent and perception of the nuclear legacy issue is not uniform throughout Europe and, indeed, that different perceptions across national borders have become a most important issue in dealing with this legacy.

EARLY ATTEMPTS TO DEAL WITH THE LEGACY

With the progressive introduction of commercial nuclear power through the 1960s and 1970s, the safe management and disposal of nuclear waste became a major issue, to the extent that in 1976 in the UK, for example, the Royal Commission on Environmental Pollution recommended that no major new nuclear power programme should be undertaken before the matter of waste management and disposal was solved. Similar issues were being raised throughout the nuclear countries in Europe and, in the mid-1970s, the European Commission initiated a major R&D programme on nuclear waste treatment and disposal, which continues to this day. At about the same time, individual countries were also planning national programmes for nuclear waste disposal.
Typically, these programmes were technically driven, with little involvement of those outside the nuclear industry, government or the regulatory authorities. By 1991 the international technical community was confident enough to formulate an opinion, published by the Nuclear Energy Agency of the OECD, which: "Confirmed that safety assessment methods are available today to evaluate adequately the potential long-term radiological impacts of a carefully designed radioactive waste disposal system on humans and the environment" and, "Considered that appropriate use of safety assessment methods, coupled with sufficient information from proposed sites, can provide the technical basis to decide whether specific disposal systems would offer society a satisfactory level of safety for both current and future generations." However, by about the same time, it was obvious that this technical confidence was not even close to bringing about the siting, construction or operation of disposal facilities for nuclear wastes other than low-level wastes. National programmes were facing opposition similar to that which had effectively blocked disposal of certain European radioactive wastes to abyssal trenches in the Atlantic Ocean some ten years earlier. Not only was there public opposition to proposals for the development of solid waste disposal facilities; the international community, by way of the OSPAR Commission, was also exerting an influence on waste management at source, through decisions about the treatment of liquid discharges to sea, for example. Thus, it eventually became clear that no effective progress towards waste disposal was going to be possible except by way of meaningful involvement of the non-technical stakeholders and the consensus of a substantial majority.

THE LEGAL BACKGROUND

All countries in Europe have some form of control over land-use planning. Member States of the European Union (EU), specifically, have been bound since 1988 by a Directive on Environmental Impact Assessment that requires assessment of the consequences of specific projects likely to have significant environmental effects. These include "installations solely designed for the permanent storage or final disposal of radioactive waste". Assessment entails examination of factors such as impact on amenities, landscape, noise, transport, general nuisance and the effects of accidents, as well as the more specific issues of waste management and the effects of pollution. Also, and most importantly, it requires involvement of the public. Notably, however, it addresses specific project proposals after they have been formulated. In this context, it is an example of the "decide, announce and defend" philosophy that has been so demonstrably unsuccessful in the siting and construction of nuclear waste disposal facilities, large conventional waste incinerators and other such controversial projects. It is significant, now, that another Directive, on Strategic Environmental Assessment, will come into force in 2004. This Directive, on the other hand, addresses the need for environmental assessment of plans and programmes, that is to say, before any specific project proposal is formulated. It also
delivers EU obligations under the Espoo and Aarhus conventions by requiring meaningful involvement of neighbouring countries as well as the domestic public. But what is "meaningful involvement"? And who are "the public"?

STAKEHOLDER INVOLVEMENT AND DEVELOPMENT OF CONSENSUS

With some admirable exceptions, such as Finland and Sweden, where good agreement and progress on waste disposal have been achieved, it is only in recent years that most European countries have begun to come to terms, in the nuclear waste field, with the implications of the need for meaningful involvement of all interested parties, the so-called "stakeholders", including the public. Substantial effort has been devoted to developing an understanding of how to involve stakeholders at an early stage and how to achieve consensus before major decisions or commitments are made. This work has been carried out at both national and international levels. At the national level, the utilities and regulatory bodies in Europe have developed arrangements for consulting stakeholders. Some utilities have formal stakeholder groups that are invited to advise on waste management issues before the formulation of long-term waste management plans. Similar or equivalent arrangements have been made by some of the national organisations created specifically to assume responsibility for nuclear waste management. More significantly perhaps, some national governments have paused and taken a step back from the previously technically driven approach in order to consider how best to involve all interested parties and to arrive at a consensus on the way forward. France, for example, passed a law in 1991 that requires a period of information gathering and analysis before national decisions are made in 2006. The Federal Ministry of Environment in Germany has created a working group to develop a new procedure for selecting a waste repository site before restarting its waste disposal programme. In the UK, this step is even more fundamental. The Government is appointing a new independent body to review all the options for long-term management of waste, including long-term storage as well as various options for disposal. It is committed to engaging the public and stakeholder groups in debate and to advising Government on a way forward that is technically sound and that will command substantial support. Its first major task is likely to be establishing a framework for the debate and identifying the relevant stakeholders. In the international context, the Nuclear Energy Agency of the OECD has provided a particularly helpful forum for debate and has published the results of various workshops on the subject, often using the well-developed and apparently successful arrangements in Finland and Sweden as examples of good practice. Without going into details of the arrangements in these countries, the discussions around them, together with lessons from experience in other countries, have resulted in the identification of some key features of the process of "meaningful involvement" and establishing a stakeholder consensus. These are broadly as follows:
- The process must be open, transparent, fair and truly participatory.
- It should involve step-wise decision making, with clear definition of the steps or stages, and the steps should be reversible in the light of new knowledge, so far as practicable.
- It should be clearly understood what is expected at each step, and how facts, expert opinions and value judgements will interact in decision making.
- The responsibilities of each stakeholder for each step should be defined and accepted by all.
- The procedures should ensure that all stakeholders, including the public, can participate effectively in regard to validating claims for trust, legitimacy and authenticity.

Against this background, it was also noted that important factors in achieving success in Finland and Sweden include:
- Support and effective involvement from national governments.
- Commitment to national self-sufficiency in waste management and no importation of foreign waste.
- Early involvement of local communities having a veto power.
- Early involvement of competent, unbiased and respected regulatory bodies.
- Well-structured dialogue between operators, regulators, political decision-makers and the general public.
- Absence of military waste from the inventory to be managed.
- Decoupling of consideration of waste management from consideration of policy on the future use of nuclear power, or nuclear weapons.

These features are now widely recognised in Europe as self-evident, and the focus is moving towards identification of the relevant stakeholders and towards understanding their various points of view and how to reflect them in a consensus.

THE STAKEHOLDERS AND THEIR POINTS OF VIEW

The relevant stakeholders may vary from country to country but, in general, they include operators, regulators, politicians and government policy-makers, the general public, and neighbouring countries that might be affected by waste management arrangements. The roles and points of view of policy-makers, operators and regulators are relatively clear and straightforward, given their respective responsibilities for converting political intent into policy and legislation, for implementing it and for securing compliance with it, together with individual and collective responsibility for providing accurate, unbiased information to the public. The "public" or, more specifically, "the public concerned" is effectively defined in the Aarhus Convention as "the public affected, or likely to be affected by, or having an interest in the environmental decision making. For the purpose of this definition, Non-Governmental Organisations (NGOs) promoting environmental protection and meeting any requirements under national law shall be deemed to have an interest." This is helpful
in focusing arrangements for stakeholder involvement and debate and for understanding their points of view. The public "affected" will certainly include the people in whose communities the waste is currently being processed or stored and who, as a result of recent events, will be increasingly aware of the hazards of having waste accumulations on or above ground in their neighbourhood. The public "likely to be affected" will include those whose communities may become the location of future waste processing, storage or disposal facilities. In both cases, the points of view are likely to be a relatively transparent mix of legitimate concerns about safety, environmental impact, employment and educational opportunities, property values, infrastructure development, etc. The public "having an interest" is harder to identify, and it is not clear what points of view may emerge here. The NGOs, whether meeting requirements under national law or not, are likely to have substantial influence. Their campaigning material is generally convenient and attractive to the press and broadcast media. Hence, it has a substantial influence on the voting public and, therefore, on political attitudes. Their involvement in attempts to solve issues of waste management may, in some cases, be more complex. Those NGOs committed to phasing out nuclear power, or nuclear weapons, are well aware that failure to resolve the waste issue will further their cause, and this point of view will need to be recognised and understood. It is probably naïve, therefore, to think that waste management can be decoupled from consideration of future nuclear policies. In this context, it may be of interest to note a recent survey of public opinion in Europe, the Eurobarometer survey of energy issues: with respect to the features of energy sources, 72% of respondents considered protection of the environment to be the top priority; as to which sources would be best for the environment, 67% thought that the new renewable sources would be best, while 3% thought that nuclear fission would be best. Equally, neighbouring countries may have complex points of view that go beyond simply ensuring protection of their populations to within internationally accepted radiological standards. These may relate quite legitimately to the question of "spatial equity": why should non-nuclear, neighbouring countries be affected in any way by radioactivity from nuclear programmes from which they derive no benefit, regardless of the level of radiological impact? This is a very important political question and must be addressed in the context of the overall decision-making process. For most practical purposes it may be assumed that politicians will reflect the views of the voting public and the need to maintain cordial relations with neighbouring countries, but their commitment to the process and to implementing the outcome must be perceived as being whole-hearted and sincere.
CONCLUSION

With some notable exceptions where good progress has already been made, European countries are only now learning how to design and implement processes for achieving consensus on how to deal with the nuclear legacy. The real challenge for the future would appear to be one of framing the debate between stakeholders with a wide variety of points of view and motives, and of managing it to a successful outcome.
HAZARDOUS WASTE MANAGEMENT IN SOUTHEAST ASIA
DR. BALAMURUGAN GURUSAMY The Institution of Engineers Malaysia, Petaling Jaya, Malaysia
INTRODUCTION

One of the greatest environmental challenges facing countries in Southeast Asia (SEA) is the problem of burgeoning hazardous wastes brought about by the fast pace of industrial expansion. Over the past two decades, the main target of industrial output has changed from domestic consumption to export markets. While an industry-driven economy creates higher income opportunities for some people, it also has an undeniable impact on the region's environment and its natural resources. Industrialization has introduced into the region, as it has elsewhere, the use of hazardous substances as raw materials and the production of hazardous wastes. Hazardous wastes in many SEA countries are accumulating at a frightening pace, and many of those wastes are compounds that biospheric systems cannot absorb and recycle. This paper summarizes the state of affairs regarding hazardous waste in SEA. It describes the background to the circumstances underlying the waste management problems and what has been done in terms of control and management, and suggests what can be done to accomplish the goals of sustainable development.

SOUTHEAST ASIA

Southeast Asia (SEA) consists of ten countries, namely Brunei, Cambodia, Indonesia, Laos, Malaysia, Myanmar, the Philippines, Singapore, Thailand and Vietnam. The combined population of the SEA countries was over 522 million in the year 2000 (Table 1) and is projected to reach 800 million in the year 2050 [1]. The 10 countries have a diversity of cultures, geographies, economies and lifestyles. Industrialization has, over the past few years, helped to alleviate poverty and improve the quality of life in the region. The engineering of water supply networks, dams, roads, houses and townships and the expansion of the manufacturing sector have all increased the level of comfort and economic affluence in most countries. Most SEA countries have, without doubt, benefited immensely from the industrialization process, which has transformed sluggish rural economies into bustling ones.

Table 1: Profile of Southeast Asian Countries

Country | Land area (km²) | Population (million) | Per capita GDP 2001 (USD)
Brunei | 5,765 | 0.34 | 12,245
Cambodia | 181,035 | 11 | 270
Indonesia | 1,919,317 | 212 | 691
Laos | 236,800 | 5.3 | 330
Malaysia | 329,758 | 23.3 | 3,696
Myanmar | 676,575 | 49 | 151
Philippines | 300,000 | 78.4 | 914
Singapore | 648 | 4.0 | 20,659
Thailand | 513,115 | 62.3 | 1,831
Vietnam | 331,700 | 77.5 | 416
At the same time, industrial programs and projects have also brought about social and environmental problems. The environmental situation throughout SEA, and Asia in general, is deteriorating. We are witnessing an accelerating loss of biodiversity, land degradation, floods and water shortages. Many rivers in the SEA region are polluted. In Malaysia, for example, of the 120 river basins monitored by the Department of Environment in 2000, over 55% were categorized as polluted or slightly polluted [2]. The marine water quality in Malaysia has also suffered - more than half of the 993 samples collected in the year 2000 exceeded the national standards in terms of total suspended solids, oil and grease, and E. coli [2].
Table 2: Environmental Trends in the Asia Pacific Region 1990-1995 (Source: modified from [1])

Management of wastes, particularly hazardous wastes, is among the greatest environmental challenges in SEA. The lack of resources and technology, limited know-how, and insufficient regulation and enforcement have all contributed to the problem.

HAZARDOUS WASTE SITUATION IN SOUTHEAST ASIA

It is estimated that SEA countries currently generate about 5 million tonnes of hazardous wastes per year, with Thailand and Indonesia leading the production. Data on waste generation are generally inaccurate, and most governments are hard pressed to keep track of all industries and the types and quantities of wastes that they generate. In some countries, it is thought that for every hazardous waste generator registered with the Government, there could be ten others not registered - thus the official waste numbers and quantities could be gross underestimates. Countries like Singapore and Malaysia have made great strides in containing the hazardous wastes problem. Singapore has the most comprehensive hazardous waste management program in the region, while Malaysia has achieved considerable progress. Despite the progress, incidences of illegal dumping are common. Countries such as Cambodia, Laos and Myanmar generate little hazardous waste, as the pace of industrialization is relatively slow. There are no hazardous waste management programs in these
countries, and the main environmental challenges are the management of municipal solid waste, the supply of clean water and controlling the loss of biodiversity. In Indonesia, according to preliminary estimates by BAPEDAL (the Environmental Impact Management Agency), the total amount of dangerous, hazardous and toxic wastes was approximately 450,000 tonnes in 1990 and was expected to reach 1 million tonnes by the year 2000. Thailand is estimated to have generated some 2.8 million tonnes of hazardous waste in 2000. Malaysia generated some 400,000 tonnes of hazardous wastes in 2000. It was estimated that about 280,000 tonnes of hazardous wastes were generated in the Philippines in 2000 [5]. The main source of hazardous wastes is the industrial sector. Industries such as electroplating, textile, tanning, chemical and metal processing often generate the most hazardous wastes. In Malaysia, for example, 33% of the hazardous wastes were from the metal industry, 24% from the chemical industry, 10% from the electronics industry and the rest from a variety of industries. In Vietnam, it is estimated that 89% of all hazardous waste comes from the industrial sector and the rest from hospitals and other sources. Most medium and small-scale industries generally do not treat hazardous wastes but instead discharge wastes directly into drainage systems and nearby water bodies, discard waste together with municipal solid waste, or simply bury hazardous wastes on site. Even wastes from hospitals are collected along with municipal solid wastes and transported to dumpsites where they are buried [3]. In Myanmar, the main sources of hazardous wastes include the dyeing, printing and finishing processes of the textile and photoengraving industries. With the construction of waste treatment facilities in some countries and the introduction of tighter regulations in others, there is now a more concerted effort by industries to minimize and recover wastes. In Malaysia, for example, since 1998, when the central facility for hazardous waste treatment became operational, industries have found themselves with no more excuses not to send in their wastes. This has actually resulted in many industries beginning waste minimization and waste recovery, and the number of waste recovery facilities has increased significantly.

LEGISLATION

Most SEA countries have, in the last two decades, promulgated legislation to control and manage hazardous wastes (Table 3). The major goal of the legislation is to enable the government to control and manage the generation, transport, reuse, recovery and disposal of hazardous wastes.
Table 3: Hazardous Waste Regulations in SEA Countries

Country | Basic legislation | Responsible authority
Malaysia | Environmental Quality Act 1974; Environmental Quality (Scheduled Wastes) Regulations 1989 | Department of Environment
Vietnam | Law on Environmental Protection / Decree 175; Regulation on Hazardous Waste Management 1999 | Ministry of Natural Resources and Environment
Philippines | Toxic Substances and Hazardous and Nuclear Wastes Control Act of 1990 | Department of Environment and Natural Resources
Thailand | Enhancement and Conservation of the National Environmental Quality Act 1992; Hazardous Substances Act 1992; Treatment of Waste or Disused Substances 1997 | Ministry of Natural Resources and Environment
Indonesia | Environmental Management Act 1997; Government Regulation Concerning Hazardous and Toxic Waste Management 1994 | Office of the State Minister for the Environment
Singapore | Environmental Public Health (Toxic Industrial Wastes) Regulations | National Environment Agency
The Vietnamese Government, for example, promulgated Regulation on Hazardous Waste Management in 1999, specifying treatment and disposal methods for hazardous wastes. The Regulation includes a definition of hazardous waste, responsibilities of relevant ministries and agencies, responsibilities of its generator, a certification system for entities hauling, treating and disposing of it, a manifest system under which to haul it, and emergency measures. It specifies detailed classifications of hazardous wastes, treatment standards, and treatment and disposal methods for waste in each classification. In Vietnam, these wastes are defined on the basis of the concentration of a hazardous component in waste, the place where waste is generated (e.g., metal pickling facilities), or the property specific to waste (e.g., explosive substances). In Thailand, the Hazardous Substances Act 1992 stipulates that hazardous waste must be stored in sealed and safe containers, and must be strictly separated from other types of waste. It is prescribed that the actual requirements regarding treatment methods and treatment standards for hazardous waste are to be notified under the Act by the Ministry of Industry. In 1997, however, new hazardous waste regulations were issued as Notification of the Ministry of Industry No. 6 under the provisions of the 1992 Factory Act. Current hazardous waste regulations are therefore based on this Notification of the MOI No. 6, 1997. The new notification does not introduce any major changes to the categories of hazardous waste, but it substantially increases the range of substances subject to regulation. Entitled “Treatment of Waste or Disused Substances,” Notification of the MOI No. 6, 1997 first of all prohibits any factory owner who possesses solid waste or unusable materials, in the form and with the characteristics described in the
notification, from moving that waste out of the factory site except for the purpose of detoxification, treatment, disposal, or landfill in the prescribed manner. A detailed list of substances and treatment methods is laid down. Under these provisions, factory owners are obliged either to treat hazardous waste themselves, following the methods prescribed in the notification, or to contract out the treatment in compliance with the regulations. Notification No. 6, 1997 also sets out the particulars of hazardous waste treatments and the standard forms of the required reports. In total, nearly 1,000 different substances are classified as hazardous waste. Legislation regarding scheduled wastes in Malaysia is basically set forth in three regulations and orders: the Environmental Quality (Scheduled Wastes) Regulations 1989, the Environmental Quality (Scheduled Wastes Treatment and Disposal Facilities) Order 1989, and the Environmental Quality (Scheduled Wastes Treatment and Disposal Facilities) Regulations 1989. The term "scheduled wastes", as used in Malaysia, refers to categories of wastes ranging from hazardous wastes to toxic substances. There are currently 107 categories of industrial wastes listed as scheduled wastes, including 28 types defined by their structure and composition rather than by their source, and 30 types that can be identified by source, such as sludge generated by wastewater treatment. The regulations on scheduled wastes do not prescribe any permissible limits in terms of discharge volume or concentration of contaminants. This means that even if a factory generates only a very slight amount of scheduled waste, final disposal in accordance with the laws and regulations is still required. The regulations stipulate that scheduled wastes can only be finally disposed of at "prescribed premises" approved by the Director General of the DOE, and the waste generator is required to store the waste if no prescribed premises exist. Indonesia, in response to its ratification of the Basel Convention, enacted the Regulation Concerning Hazardous and Toxic Waste Management (No. 19, 1994). This marked the first implementation of regulations on hazardous and toxic waste in Indonesia. Together with this, five Decrees of the Head of BAPEDAL (the Environmental Impact Management Agency; Decrees No. 1 to 5, 1995) were prepared, setting out the details of the storage, collection, treatment and disposal procedures. The Regulation prescribes the management duties of companies which discharge hazardous and toxic waste, the procedures for the collection, storage, transport and treatment of hazardous and toxic waste, and the disciplinary measures for violators. It also provides details of the specific substances that come under the term hazardous and toxic substances. For transporting hazardous and toxic waste from discharging companies to treatment companies, a Hazardous and Toxic Waste Manifest must be prepared in a given format. Furthermore, companies that treat hazardous and toxic waste must set up treatment facilities that satisfy given conditions, and implement environmental impact assessment and environmental monitoring. The import of hazardous and toxic waste is prohibited. For the export of such waste, approval is required from both the Indonesian government and the government of the receiving country. Singapore probably has the most comprehensive hazardous waste management program in the region.
The collection, recycling, treatment and disposal of hazardous wastes are controlled under the Environmental Public Health Act and the Environmental Public Health (Toxic Industrial Wastes) Regulations (TIWR). Industrial wastes controlled under the TIWR are listed in the Schedule of the Regulations as waste streams from specific industrial activities, as wastes with specified toxic components and as specific categories of wastes. Singapore acceded to the Basel Convention in 1996 and, in 1998, enacted the Hazardous Waste (Control of Export, Import and Transit) Act to strengthen the control of the export, import and transit of hazardous wastes in accordance with the principles and provisions of the Basel Convention.
HAZARDOUS WASTE PROBLEMS

The main problems pertaining to the safe treatment and disposal of hazardous wastes in SEA can be categorized as follows:
- the lack of adequate treatment/disposal facilities;
- the high costs of treatment;
- inadequate enforcement capacity;
- the lack of incentives to recover/minimize wastes.

The lack of adequate treatment/disposal facilities
Most SEA countries have limited or no facilities to treat and dispose of hazardous wastes in a safe and environment-friendly manner. Only Singapore, Malaysia, Thailand and Indonesia have centralized facilities to treat hazardous wastes. Other countries have planned for these facilities but, due to lack of funds, these facilities have not yet been built. In Malaysia, the Environmental Quality (Scheduled Wastes) Regulations 1989 prescribe that hazardous wastes can only be finally disposed of at prescribed facilities. However, until 1997, or for around a decade after the regulations came into force, no prescribed final disposal facilities existed in Malaysia. Throughout this time, most industries were forced to store scheduled wastes on-site, and the majority of companies were faced with ever-growing stacks of wastes. The final disposal plant run by Kualiti Alam, a private company, became partially operational at the end of 1997 and started full operation in June 1998. The plant is relatively modern, with facilities for physical and chemical treatment, incineration and landfilling. The main problem now is transportation, because wastes from all over the country have to be brought to this one plant, involving distances of up to 500 km. Thailand, at present, has five central facilities that can properly treat hazardous waste. The first two were constructed by the Ministry of Industry and are operated and managed by GENCO, a joint public-private sector company with partial equity investment from the MOI. One of these facilities is the Bang Khun Thian Hazardous Waste Treatment Plant, located in the southwest of Bangkok. It began operating in 1988 and has a processing capacity of 1,000 cubic meters per day of wastewater containing hazardous substances from textile and electroplating factories, plus 50 tons of solid hazardous waste per day. The other treatment facility, located in the Map Ta Phut Industrial Estate in Rayong Province, began operation in 1997. This facility has stabilizing equipment, equipment for converting waste into fuel, and a landfill. It has the capacity to treat 70,000 tons of hazardous waste annually. All these facilities are still inadequate to cope with the approximately 1.6 million tons of hazardous waste generated in Thailand every year. To solve the problem, the government has proposed that several more facilities be built nationwide, but all these projects were met with opposition from people living near the proposed sites, and some of the projects have already been shelved [4]. Indonesia, a country of 220 million people, has only one central hazardous waste treatment facility, located near Bogor in West Java and built in 1994. The plant processes most of the hazardous wastes from industrialized West Java, and the main sources of wastes are the chemical, textile and metal-finishing industries.
Facilities include waste stabilization and a secure landfill. One treatment facility is grossly inadequate even for the island of Java. Wastes from all other islands in Indonesia remain stored on-site or are illegally disposed of. Although the law in Vietnam requires hazardous wastes to be treated prior to disposal, there is neither a treatment facility nor a final disposal site for hazardous wastes. A site has already been secured for a waste incinerator and hazardous waste disposal site within the premises of the municipal landfill site at Nam Son, 50 km north of the city center of Hanoi. Construction has not started, as funding is not yet available. In Vietnam, hazardous wastes can be disposed of through a waste disposal contractor for a fee. However, these wastes seem to be dumped at a landfill disposal site together with general wastes. In order to prevent these wastes from causing any problem in the future, some companies store hazardous wastes within their own premises. They intend to store these wastes that way until the Vietnamese Government provides appropriate systems of legislation and treatment facilities. In the Philippines, the Government is looking for private investors to build the country's first integrated hazardous waste treatment facility. However, the perceived poor compliance with the existing legislation is holding back potential investors. A quote from the Trade Union Congress states: "how would an investor put his money into these facilities knowing that he has to compete with rivers, creeks, ravines and secret dumpsites as disposal sites for hazardous wastes".

High cost of treatment
The high cost of treatment is seen as a major barrier to the proper disposal of wastes in almost all SEA countries. Long accustomed to simply dumping their wastes along with municipal wastes, most industries are reluctant to pay to have their industrial waste treated. In Malaysia, Kualiti Alam is the only company carrying out integrated treatment of scheduled wastes. In 1995 the government awarded the company a 15-year exclusive right to conduct the scheduled wastes final disposal operation in Malaysia. Since it started operations and until today, there have been grouses amongst industry about the high treatment charges - often accusing Kualiti Alam of using its monopoly to charge high rates. The rates that Kualiti Alam charges range from US$100/tonne for landfilling to US$750/tonne for incineration. These rates are higher than those charged even in some developed countries. In Indonesia, the charges for hazardous waste treatment are US$150 per tonne for landfilling and US$200 for stabilization prior to landfilling - these rates are deemed too expensive by most Indonesian companies. Due to these perceived high costs of sending their wastes to the central waste treatment facility, industries would rather keep storing their wastes on site or seek to dispose of them illegally.

Inadequate enforcement capacity
Inadequate enforcement of regulations remains a major barrier to the proper disposal of hazardous wastes in most SEA countries. The relatively youthful environmental consciousness in this region means that the government agencies dedicated to environmental protection and management are also new. Consequently, these agencies face challenges in asserting their competence and enforcing legislation, and are very often inadequately funded. Almost all the enforcement agencies are understaffed.
For example, the Environmental Management Bureau in the Philippines has about 800 personnel for a country with a population of close to 80 million. In a study carried out by JICA [5], it was noted that from the passage of the hazardous
waste regulations in 1990 until 1999, the hazardous waste management section within the Bureau received no budget allocation. In this respect Malaysia fares slightly better. For a country of 23 million people, the Department of Environment has over 600 personnel, and recently the Government approved an additional 600 posts for the department. Even then, 1,200 personnel for the entire country are too few, and the department is hard pressed to visit each factory even once a year. With an increase in penalties for hazardous waste offences (fines of up to US$120,000 are possible), there has been an overall improvement in the compliance level amongst industries. Similarly, in Vietnam, environmental enforcement is weak. The Ministry of Science, Technology and the Environment has limited capacity to enforce the law and, even when the law is enforced, the penalties are too low to act as a deterrent, with the result that very few companies comply with the law. For instance, under the Regulation on Fines for Violation of Standards on Transportation and Treatment of Waste and Wastewater, establishments that fail to treat waste or wastewater can be fined between US$9 and US$45 per incident, an amount deemed too low to be of consequence [6]. On the whole, enforcement capacity is poor in all SEA countries (Singapore is a probable exception). The enforcement personnel are often stretched thin, with small teams having to cover the entire country. While some countries have in recent years significantly increased penalties for violations, the penalties in most other countries remain puny and do not act as a deterrent, since it is cheaper to pay the fine than to treat the wastes.

Lack of incentives to recover/minimize wastes
There are a number of barriers that hinder the recovery and minimization of wastes in SEA. Industry and government have typically addressed these and other industrial pollution concerns with the "end-of-pipe" approach, which involves the construction of waste treatment facilities. However, this is an expensive operation that does not completely eliminate the waste. Furthermore, the generation of waste implies a loss of resources and, therefore, a loss of production opportunity and profitability. A significant barrier is the cost of purchasing, maintaining and operating waste minimization equipment. Other economic barriers include the lack of a market for recycled or reusable materials and the lack of pollution control regulations and their enforcement. Lack of awareness about waste recovery potential and technologies also hinders active waste recovery and minimization. There are two types of physical barrier to the implementation of waste recovery and minimization. The first is the problem of having insufficient quantities of waste to justify internal use or external collection. This barrier is particularly significant for small industrial firms that generate low volumes of wastes - firms that dominate the industrial scene in SEA. Another physical barrier can arise from the lack of sufficient storage space to accumulate wastes for collection. Again, this tends to be a more significant problem for small firms. In recent years, there has been a steady rise in the number of privately run, offsite waste recovery facilities in SEA.
Given the high costs of waste treatment and increasingly stringent regulations, more and more waste generators are turning to offsite waste recovery facilities as a better means of solving their waste problems and, in many cases, actually deriving some additional income by selling their wastes.
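To see concretely why weak penalties fail as a deterrent, consider a back-of-envelope comparison built only from the figures quoted above (Vietnamese fines of US$9-45 per incident; treatment charges of roughly US$100-200 per tonne). The short Python sketch below is purely illustrative; the generator size and detection rate are hypothetical assumptions, not data from this paper:

    # Illustrative comparison of compliance cost vs. expected fines,
    # using figures quoted in the text (all amounts in US$).
    treatment_cost_per_tonne = 150   # Indonesian landfilling charge, US$/tonne
    fine_per_incident = 45           # upper end of the Vietnamese fine range
    tonnes_per_year = 100            # hypothetical mid-sized waste generator
    detected_incidents_per_year = 1  # hypothetical; optimistic given understaffed agencies

    compliance_cost = tonnes_per_year * treatment_cost_per_tonne
    expected_fines = detected_incidents_per_year * fine_per_incident
    print(f"Annual cost of treating wastes:   US${compliance_cost:,}")   # US$15,000
    print(f"Expected annual fines if dumping: US${expected_fines:,}")    # US$45
    # Compliance here costs over 300 times the expected penalty, so dumping
    # remains the rational choice until fines or detection rates rise sharply.

On these assumptions, only a drastic increase in either the fine schedule or the inspection frequency changes the calculus, which is consistent with the improved compliance observed in Malaysia after penalties were raised.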
THE WAY FORWARD

Many of the problems regarding hazardous waste management in SEA are similar to those experienced in many other parts of the world, especially in developing economies. While countries like Singapore have achieved a very high standard of hazardous waste management, the others are still struggling, albeit to various degrees. Malaysia has done relatively well, having stringent regulations and good treatment and waste recovery facilities. Thailand, Indonesia, the Philippines and Vietnam have taken steps in the right direction by enacting regulations and initiating the establishment of integrated waste treatment facilities. Laos, Cambodia and Myanmar have the more fundamental challenges of providing food and clean water to their populations, and hazardous waste management is not likely to be high on those countries' agendas. As Southeast Asia strives towards achieving sustainable development, initiatives that could be taken with regard to hazardous waste management include:
- Capacity building. The technical capability and level of skills of professionals and semi-professionals involved in the waste management sector must be enhanced considerably. Training is needed in various aspects such as waste treatment technologies, waste minimization, cleaner production, health and safety issues, and enforcement and prosecution procedures. Sharing of experience amongst countries in the region is also vital.
- The institutional and legal frameworks for the management of hazardous waste need to be strengthened. Regulations need to be made more robust, and better coordination amongst regulatory agencies is needed to optimize the scarce resources.
- There is a need for more innovative economic instruments to cajole industries into minimizing, recovering and treating their hazardous wastes adequately. Regulations alone do not seem to be doing the job very well.
- SEA countries need to provide greater support for the growth of waste recovery facilities (as opposed to waste treatment), as these facilities utilize waste materials and reduce the overall costs of the country's waste management.
- Data collection must be improved. One very important resource in a hazardous waste management program is information - about who is generating wastes, what quantities and types are being generated, and where they are going. Again, sharing of experience and data amongst the countries in the region is important.

REFERENCES
[1] United Nations Environment Programme. Second ASEAN State of the Environment Report 2000. Bangkok, 2001.
[2] Department of Environment Malaysia. Malaysia Environmental Quality Report 2000. Kuala Lumpur, 2001.
[3] IUCN (1998) Environmental Management Issues and Concerns in Vietnam: An Appraisal. The World Conservation Union, Hanoi.
[4] Eamsakulrat, P., Patmasiriwat, D. & Huidobro, P. (1994) Hazardous waste management in Thailand, TDRI Quarterly Review, 9(3), 7-14.
[5] JICA (2002) The study of hazardous waste management in the Philippines. Japan International Cooperation Agency.
[6] Vui, P. (1998) Prevention of Environmental Pollution in Industrial Activities. National Environment Agency, Hanoi.
RESPONDING TO FERMI'S WARNING: JAPANESE APPROACH TO DEALING WITH RADIOACTIVE WASTE PROBLEMS

TOMIO KAWATA
Chief Senior Scientist, Japan Nuclear Cycle Development Institute (JNC), Ibaraki, Japan

ABSTRACT
In spite of the early warning given by Enrico Fermi, the problems of the growing accumulation of radioactivity produced by the utilization of nuclear energy have remained unsolved. This paper briefly describes the historical background and the present status of Japan's nuclear energy program. After providing a bird's eye view of global-scale radioactivity problems, the paper describes the Japanese approach to taming radioactive waste problems, including that of high-level radioactive waste.

FERMI'S WARNING
In one of the meetings of the New Piles Committee during the Manhattan Project years, Enrico Fermi speculated about the future of nuclear energy and expressed his concerns as follows¹: "It is not clear that the public will accept an energy source that produces this much radioactivity and that can be subject to diversion of material for bombs." Sixty years later, nuclear energy produces some 16% of the world's electricity supply, and yet we have not been quite successful in giving final solutions to these two fundamental warnings given by Fermi. The recent withdrawal of North Korea from the Non-Proliferation Treaty (NPT) vividly indicated that Fermi's first concern is still hard to resolve. Solving the radioactive waste problems, especially that of high-level radioactive waste, is a prerequisite for continued utilization of nuclear energy but, so far, none of the countries operating nuclear power plants has succeeded in completely settling these problems, despite all their efforts over many decades. In addition, some countries that committed to the development of nuclear arms are now facing a very heavy load in dealing with the resultant large volume of waste and considerable environmental contamination.

NAKASONE LAUNCHED JAPAN'S NUCLEAR ENERGY PROGRAM WITH A BUDGET OF 235 MILLION YEN

For the Japanese, the first encounter with 'atomic energy' was, without doubt, the two atomic bombs dropped over Hiroshima and Nagasaki in August 1945. It was natural that these two overwhelming events caused an "atomic trauma" for post-war Japanese, and all matters related to the "atom", except in pure physics, became a kind of taboo even in academia. Eisenhower's "Atoms for Peace" speech in December 1953, followed by the first Geneva Conference in August/September 1955, ignited a surge of worldwide euphoria for nuclear energy. Even though preliminary discussion on the necessity of nuclear energy research had begun to emerge in a small scholars' group within the Science Council of Japan (SCJ), many were still hesitant or cautious about being involved in a matter whose independence from military application was thought to be so difficult to achieve. It was a group of young politicians that triggered the actual start of the nuclear energy program in Japan. A 35-year-old congressman, Yasuhiro Nakasone, a future prime minister in the mid-1980s, suddenly proposed a starter budget for nuclear energy development in the amount of 235 million yen to the Diet in March 1954. The number 235 coincided with the mass number of a fissionable
uranium isotope and served well to symbolize the inauguration of Japan's nuclear energy program. Soon, Nakasone was paid a visit by the eminent Professor Kaya, chairman of the SCJ at that time, who wished to protest. After an hour of discussion, Nakasone managed to persuade Kaya to agree with him, still with a certain reluctance, on the necessity for the nation to commit to a nuclear energy program as soon as possible. It was in such a climate that the Atomic Energy Basic Law was enacted in December 1955. Article 2 of the Basic Law prescribes Japan's fundamental policy as follows: "The research, development and utilization of atomic energy shall be limited to peaceful purposes, aimed at ensuring safety and performed independently under democratic management, and the results therefrom shall be made public to contribute to international cooperation." With this legal renunciation of nuclear armament, the Japanese nuclear energy program has been, and continues to be, strictly limited to civil applications.

JAPANESE NUCLEAR ENERGY PROGRAM TODAY

Lacking any meaningful indigenous uranium resources, Japan has pursued a so-called closed fuel cycle policy from the early days, and has committed to every stage of the entire cycle at either the developmental or the industrial level. At present, 52 nuclear power plants (NPPs) are in operation throughout the country and produce more than one third of the nation's electricity. Uranium to feed these NPPs is purchased from foreign vendors and enriched by both foreign and domestic enrichment service companies. Three domestic companies undertake fuel fabrication. From the operation of the 52 NPPs, slightly more than 900 tons of spent fuel arise annually. Up to now, about 1,000 tons have been reprocessed in the pilot-scale Tokai Reprocessing Plant (TRP) since the commencement of operations in 1977. More than 7,000 tons have been transported to, and are being reprocessed at, La Hague in France and Sellafield in the UK. Japan Nuclear Fuel Limited (JNFL) is constructing the Rokkasho Reprocessing Plant (RRP) with a design throughput of 800 t-U/y. The construction is approaching its final stage, aimed at commissioning in July 2005, and the cold chemical check-out test is now in progress. High-level liquid waste generated by reprocessing is solidified into a stable glass form, sealed in stainless steel canisters, and stored until final disposal. More than 600 canisters shipped back from European reprocessors are now stored in the facility built adjacent to the RRP. About 130 domestically produced canisters are stored at the facility attached to the TRP. The Japan Nuclear Cycle Development Institute (JNC) is continuing research and development on a fast breeder reactor and the associated fuel cycle system, aiming at ultimately improving uranium utilization efficiency and minimizing proliferation risks and the waste burden. The Japan Atomic Energy Research Institute (JAERI) is leading, among various other basic research programs, a fusion technology development program. Because of the wide scope of Japan's nuclear activities, a broad spectrum of radioactive waste has been arising in spite of the absence of military-related activities. So far, there has been no notable environmental deterioration caused by man-made radioactivity in Japan, partly because of the very cautious approach to radioactive waste management, and partly because of the somewhat delayed start of ground disposal of low-level radioactive waste, which fortunately avoided disposal under premature safety standards.
GLOBAL PICTURE OF MAN-MADE RADIOACTIVITY PROBLEMS

"If you know the enemy and know yourself, your victory will not stand in doubt; if you know Heaven and know Earth, you may make your victory complete." - Sun Tzu, The Art of War

Before describing the Japanese approach to dealing with radioactive waste problems, it is worthwhile to look at a global picture of man-made radioactivity problems. Concerns about the global risks of artificial radioactivity from continued utilization of nuclear energy can be categorized into the following two fundamental questions:
1. What will be the effect of the global dispersion of long-lived radionuclides released from the continued operation of NPPs and fuel cycle facilities?
2. How can we deal with the large and growing inventory of radioactive waste?

In regard to the first question, the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR) estimated the maximum annual radiation dose from the global dispersion of major long-lived radionuclides under the assumption that the current level of nuclear energy utilization will continue far into the future². A major fraction of the radionuclide release into the environment is attributed to the operation of reprocessing plants. Table 1 shows the estimated doses compared with the effects from other sources. This comparison suggests that, so long as proper safety measures at individual facilities or sites are maintained, the world average exposure due to the continued release of long-lived radionuclides from nuclear energy utilization is negligibly small. In contrast, the world average annual exposure from medical examination and treatment is now as high as 0.4 mSv. Regarding the second question, the comparison shown in Table 2 helps us to picture this problem from a global-scale view. In Table 2, the total inventory of radioactivity in the waste produced by nuclear power generation in the world is compared with the total amount of natural radioactivity contained in the continental portion of the earth's crust³. In 2001, world nuclear electricity generation was 2544 TWh, or 290 GWa. If we assume 8.8 × 10¹² Bq/GWa as the production rate of low-level radioactive waste (LLW) in nuclear power reactors, the global production of LLW in 2001 amounts to 2.6 × 10¹⁵ Bq, which exactly matches the amount of natural radioactivity contained in a 1 km³ block of soil.
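As a quick check of this arithmetic, the short Python script below (illustrative only; the production rate of 8.8 × 10¹² Bq/GWa is the assumption stated above) reproduces the quoted figures:

    # Worked check of the global LLW estimate quoted in the text.
    HOURS_PER_YEAR = 8760
    nuclear_generation_twh = 2544   # world nuclear electricity output in 2001, TWh
    gwa = nuclear_generation_twh * 1000 / HOURS_PER_YEAR  # TWh -> gigawatt-years
    llw_rate_bq_per_gwa = 8.8e12    # assumed LLW production rate, Bq per GWa
    llw_2001_bq = gwa * llw_rate_bq_per_gwa
    soil_block_bq = 2.6e15          # natural activity in 1 km^3 of soil (Table 2)
    print(f"World nuclear output in 2001: {gwa:.0f} GWa")         # ~290 GWa
    print(f"Global LLW activity in 2001:  {llw_2001_bq:.1e} Bq")  # ~2.6e15 Bq
    print(f"Ratio to 1 km^3 of soil:      {llw_2001_bq / soil_block_bq:.2f}")  # ~1.0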
Table 1: Population exposure from worldwide releases of artificial radioactivity.

Source | Annual dose (mSv)
Maximum estimate due to continued utilization of nuclear energy:
  ¹⁴C (half-life: 5,730 y) | 0.0001
  ⁸⁵Kr (10.8 y), ³H (12.3 y) and ¹²⁹I (1.6 × 10⁷ y) | 0.000005
Atmospheric nuclear explosion tests:
  Peak radiation dose (1963) | 0.15
  Radiation dose in 2000 | 0.005
Chernobyl accident:
  Peak radiation dose (1986) | 0.4
  Radiation dose in 2000 | 0.002
World average radiation dose from natural background | 2.4
Nuclear power generation worldwide had, by the end of 2000, produced about 230,000 tons of spent nuclear fuel (SNF). If reprocessed, the activity of the fission products and residual actinides corresponding to this amount of spent fuel is about 10²¹ Bq after 30 years of cooling, and will decrease to 1.3 × 10¹⁸ Bq after 1000 years. One can see in Table 2 that the activity after 30 years of cooling is of the order of 0.01% of the natural radioactivity present in the continental crust, and this fraction will decrease by three orders of magnitude in 1000 years. It is widely accepted that the disposal of high-level radioactive waste (HLW), either in the form of vitrified waste after reprocessing or in the form of spent fuel itself, would be best achieved by burial in deep underground repositories. In most cases, it is anticipated that the radionuclides in HLW would not begin to migrate into the biosphere until after 1000 years, owing to the protection afforded by the so-called multi-barrier system. Assuming that an area of 100 m² is necessary for the disposal of the HLW corresponding to 1 ton of spent fuel, the area required for the disposal of the whole HLW accumulated by 2000 is calculated to be 23 km². From Table 2, the natural activity contained in a 1 km thick soil layer with an area of 23 km² is easily calculated to be 6 × 10¹⁶ Bq. The activity of 1.3 × 10¹⁸ Bq for the whole HLW after 1000 years is, therefore, about 20 times larger than the natural radioactivity contained in a soil block that covers the repository area. As a crude comparison, one can recall that, in such places as Kerala in India, Guarapari in Brazil and Ramsar in Iran, the natural background radiation is significantly higher than the world average, typically by a factor of 10 to more than 100, and yet people have lived there for many generations with no deterioration in health and no genetic effects⁴. The increase of the radioactivity inventory by a factor of 20 in the deep underground chambers of a HLW repository area would by no means lead to any noticeable increase of background radiation in the vicinity.

Table 2: Activity of natural radionuclides in the continental crust and total activity of wastes from nuclear power generation in the world (Bq).
Source                                   K-40          Th-232 chain   U-238 chain    Total
Content in continental crust
(1.7 × 10¹⁹ tons)                        7.3 × 10²⁴    8.4 × 10²⁴     7.8 × 10²⁴     2.4 × 10²⁵
1 km³ block of soil                      8.6 × 10¹⁴    8.1 × 10¹⁴     9.2 × 10¹⁴     2.6 × 10¹⁵
LLW from NPPs in 2001                    -             -              -              2.6 × 10¹⁵
HLW accumulated by the end of 2000       -             -              -              1 × 10²¹ (after 30 years)
                                                                                     1.3 × 10¹⁸ (after 1000 years)
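The repository-area comparison developed above can likewise be checked in a few lines. This is a sketch of the stated arithmetic only; all inputs are the figures quoted in the text and in Table 2, and the variable names are mine.

```python
# Rough check of the repository-area comparison in the text:
# HLW activity after 1000 years vs. natural activity in the soil
# block covering the repository footprint.

snf_tons = 230_000            # spent fuel accumulated worldwide by 2000 (t)
area_per_ton_m2 = 100         # assumed disposal area per ton of SNF (m^2)
soil_bq_per_km3 = 2.6e15      # natural activity of 1 km^3 of soil (Table 2)
hlw_bq_after_1000y = 1.3e18   # HLW activity after 1000 years (Table 2)

repository_km2 = snf_tons * area_per_ton_m2 / 1e6    # -> 23 km^2
# Natural activity in a 1 km thick soil layer over that area:
natural_bq = soil_bq_per_km3 * repository_km2        # -> ~6e16 Bq

print(f"Repository footprint: {repository_km2:.0f} km^2")
print(f"Natural activity under footprint: {natural_bq:.0e} Bq")
print(f"HLW / natural ratio: {hlw_bq_after_1000y / natural_bq:.0f}")  # ~20
```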
JAPAN'S APPROACH TO TAMING RADIOACTIVE WASTE PROBLEMS

In Japan, radioactive waste is categorized into two basic types: HLW and LLW. HLW is the high-level liquid waste generated from spent fuel reprocessing, or its vitrified form. All others are generally regarded as LLW and are further categorized into several subcategories, primarily based on their origin and nature. At present, shallow-land disposal of LLW is practised only for "reactor waste", which is generated from the normal operation and maintenance of NPPs. An aerial view of the Rokkasho LLW Disposal Center is shown in Figure 1. Reactor waste is generally composed of activated materials, and most of the radioisotopes involved tend to decay in relatively
short periods of time. Therefore, institutional control is applied to this type of disposal facility, with three decreasing control levels and times which extend over 300 to 400 years. Beyond this control period, the population dose is assured to be lower than 10 µSv/y in any conceivable event, and the site is allowed to be released without any restrictions.
Figure 1. Aerial view of the Rokkasho LLW Disposal Center.
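The logic of a 300-400 year control period can be illustrated with the decay law A(t) = A₀ · 2^(−t/T½). The following sketch is not from the source; the choice of Co-60, Sr-90 and Cs-137 as representative reactor-waste nuclides is an illustrative assumption.

```python
# Illustrative decay calculation: how far typical short- and
# medium-lived reactor-waste nuclides decay over a 300-400 year
# institutional control period. Nuclide choice is illustrative only.

half_lives_y = {"Co-60": 5.27, "Sr-90": 28.8, "Cs-137": 30.1}

def decay_factor(t_years: float, half_life_y: float) -> float:
    """Fraction of initial activity remaining after t_years."""
    return 2.0 ** (-t_years / half_life_y)

for nuclide, t_half in half_lives_y.items():
    remaining = decay_factor(300.0, t_half)
    print(f"{nuclide}: {remaining:.1e} of initial activity after 300 y")
# Sr-90 and Cs-137 fall by roughly a factor of 1000;
# Co-60 is gone entirely for practical purposes.
```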
A similar concept has been applied to the disposal of LLW with relatively higher activity, which is expected to be produced in the course of decommissioning retired NPPs. In this case, however, the burial pit or silo is designated to be placed deeper than 50 m underground, so that the possibility of intrusion through usual human practice is practically eliminated. For other types of LLW, such as transuranic low-level waste, efforts to establish regulations and standards for disposal are still under way.

A SOCIETAL EXPERIMENT: A VOLUNTARY AND STEPWISE APPROACH TO SITING A HLW REPOSITORY

"...... For even the very wise cannot see all ends" - Gandalf, in J. R. R. Tolkien, The Lord of the Rings
Solving the problems of radioactive waste, especially HLW, is not merely a technical or industrial task but also a deeply societal and political one, which has to deal with widespread public fear of radioactivity and with uncertainties associated with a time range that extends well beyond our life span, and even beyond the span of recorded human history.

Following scientific research on the geological disposal of HLW led by JNC for over a quarter of a century, the Specified Radioactive Waste Final Disposal Act was legislated in June 2000. This act institutionalized the fundamental scheme of HLW geological disposal in Japan, together with a designated funding system. Based on this act, the Nuclear Waste Management Organization of Japan (NUMO) was established as the entity to implement final disposal of HLW. NUMO is responsible for the site selection, construction, operation, and closure of the underground repository, where some 40,000 vitrified HLW canisters are expected to be disposed of⁵.

The concept of geological disposal in Japan is similar to that in many other countries, being based on a multi-barrier system which combines the natural geological environment with engineered barriers, as shown in Figure 2. Considering the complexity of Japan's geology, an engineered barrier system (EBS) was developed with sufficient margins in its isolation functions to cope with a wide range of geological environments. The major role in the overall barrier function of the disposal system is borne by the near-field, i.e., the EBS and a limited volume of the surrounding host rock, while the remainder of the geosphere serves to reinforce and complement the performance of the EBS. Though Japan is in fact located in a
tectonically active zone, historical studies of geological phenomena in the Quaternary have revealed that both volcanic activity and active fault movements have occurred repeatedly in distinctly limited regions, and that there has been little change in these locations for more than a million years. Such studies indicate the existence of geological environments which are stable enough to host an underground repository, and which can provide favorable conditions for EBS performance and for the retardation of radionuclide migration in the surrounding rock over a time period on the order of hundreds of thousands of years. Both crystalline and sedimentary rocks are considered to be candidate media for hosting an underground repository.

The HLW disposal program is a challenging enterprise in both its technical and its societal aspects, as mentioned earlier, and a three-stage site selection process is employed to ensure that the decision process is flexible and transparent to the public. The first step of this process is the selection of several Preliminary Investigation Areas (PIAs), where surface-based geological investigations, including borehole drillings, are to be conducted to evaluate site suitability and to select the areas for detailed investigation in the next step. In order to attract the attention of municipalities and invite them to participate voluntarily in the PIA selection process, NUMO has prepared an educational information package and distributed it to all of the more than 3,200 municipalities in Japan. As part of the public outreach program, open symposia on the HLW disposal program were held by the Government in 11 major cities, and by NUMO in 31 smaller cities located throughout the country, during a period of 18 months beginning in mid-2001.
Figure 2. HLW disposal concept: a multi-barrier system of engineered barriers within a stable geological environment.
LEARNING FROM THE ROSETTA STONE...
In parallel with the preliminary site selection effort by NUMO, supporting R&D is in progress at JNC and other related organizations to enhance the reliability of repository technology and to improve the safety assessment methodology and database. A project to build two underground laboratories, one in a crystalline rock formation and the other in a sedimentary one, is also being undertaken by JNC. On the regulatory side, the Nuclear Safety Commission published its first report, "The Basis for Safety Standards of HLW Disposal", in July 2000; it defined the fundamental process and preliminary guidelines for staged licensing in the course from repository construction to its final closure. Discussions on such issues as retrievability/reversibility and the need for institutional control are continuing.
Geological disposal is based on passive safety, and its long-term safety should not rely on active management. It is generally anticipated that some type of institutional control, which may include such active management as monitoring, will be applied up to the time of repository closure. There is a growing opinion that some kind of post-closure institutional control (PCIC) may be necessary, or can be of value, in order to enhance societal confidence in long-term safety and to minimize the possibility of unintentional intrusion into the repository by future generations. It is logical to assume that PCIC has to be passive. Such measures as the restriction of land use by laws or regulations, the preservation of relevant records and information, and the placing of markers or monuments at or in the vicinity of the site of a closed repository are considered candidate measures for PCIC. The first two measures cannot be guaranteed to survive if a country collapses as a result of either gradual decline or a sudden drastic change in the societal system, while the third may have a better chance of surviving such societal degradation or catastrophe.

When it comes to markers or monuments, one can find an example of historical success in the Rosetta Stone displayed in the British Museum. The last part of its inscription reads as follows⁶: "This decree shall be inscribed on a stela of hard stone in sacred and native and Greek characters and set up in each of the first, second and third temples beside the image of the ever-living king". The role of markers or monuments is obviously to transfer some message to generations in the far future. In that respect, the Rosetta Stone turned out to be a historical success in that it utilized a stela of hard stone (longevity as a recording medium), was inscribed in three scripts (hieroglyphic, demotic and Greek: language redundancy) and was placed in three separate locations (physical redundancy). Geological disposal of HLW is a human practice on a historical time scale, and some of this ancient wisdom will help us to better conduct it.

"The future of man will continue to be inescapably radioactive. We must approach it wisely and safely for the betterment of all people." - C. R. Richmond, ORNL⁷
REFERENCES
1. Weinberg, A. M., The First Nuclear Era: The Life and Times of a Technological Fixer, AIP Press, 1994.
2. UNSCEAR, Sources and Effects of Ionizing Radiation, UNSCEAR 2000 Report to the General Assembly, with Scientific Annexes, United Nations, 2000.
3. Jaworowski, Z., "Ionizing Radiation in the 20th Century and Beyond", atw 47. Jg. (2002) Heft 1, pp. 22-27, January 2002.
4. Jaworowski, Z., "Radiation Risk and Ethics", Physics Today 52 (9), pp. 24-29, 1999.
5. Masuda, S. and Kawata, T., "The Japanese High-Level Radioactive Waste Disposal Program", Geological Challenges in Radioactive Waste Isolation, Third Worldwide Review, LBNL-49767, December 2001.
6. Andrews, C., The Rosetta Stone, The British Museum Press, 1981.
7. Richmond, C. R., "Population Exposure from the Nuclear Fuel Cycle: Review and Future Direction", Population Exposure from the Nuclear Fuel Cycle, Gordon and Breach Science Publishers, 1988.
THE U.S. APPROACH TO THE SCIENCE AND TECHNOLOGY OF LEGACY WASTE MANAGEMENT

STEPHEN J. KOWALL
Idaho National Engineering and Environmental Laboratory, Idaho Falls, USA

ABSTRACT

Over 50% of the U.S. population depends on groundwater aquifers for their drinking water. The subsurface is also where we dispose of almost all our municipal and industrial solid wastes, and where we intend to dispose of residual wastes from our cleanup of nuclear weapons program sites. Stewardship of residual contamination remaining after cleanup of legacy disposal and industrial sites is necessary in nearly every state to guard against potential groundwater pollution.

The U.S. Department of Energy (DOE) recently completed two documents designed to coordinate science and technology activities to improve our understanding and management of the subsurface and our stewardship of waste sites. In August 2001, DOE published A National Roadmap for Vadose Zone Science and Technology. This roadmap addresses the knowledge and tools needed to describe and accurately forecast the processes controlling contaminant movement and the consequences of subsurface contaminants for our groundwater. In October 2002, DOE released for comment a complementary report, the Draft Long-Term Stewardship Science and Technology Roadmap. This report describes the technical and social capabilities necessary for containment and monitoring of contaminants at subsurface disposal and legacy cleanup sites specific to the nuclear weapons program, and identifies needed enhancements to these capabilities.

On October 1, 2003 the U.S. DOE will launch the Office of Legacy Management to play a key role in a new program designed to manage, document and monitor the dismantling of the agency's infrastructure from the Cold War nuclear weapons program. The mission of the office will be to provide for the long-term stewardship of legacy sites and for the improved health and safety of DOE workers. The office will ensure that as the Environmental Management program completes its clean-up mission, the legacy activities will remain a visible priority responsibility of the federal government. The functions of the office include: land management, environmental surveillance and maintenance, record keeping, and maintaining benefits for former workers.

THE SCALE OF THE PROBLEM

Over 50% of the U.S. population depends on groundwater aquifers for their drinking water. The subsurface is also where we dispose of almost all our municipal and industrial solid wastes, and where we intend to dispose of residual wastes from our cleanup of nuclear weapons program sites. Stewardship of residual contamination remaining after cleanup of legacy disposal and industrial sites is necessary in nearly every state to guard against potential groundwater pollution.
Residual contamination remaining in the subsurface after remediation is potentially available for environmental transport and will need to be carefully monitored over the long term. Monitoring is distinct from characterization, in that it typically begins after a site has been characterized. Long-term monitoring is necessary to confirm engineered barrier system performance and to serve as a sentinel for potential failures. The state-of-practice for contaminant monitoring systems at DOE closure sites has recently been described as being as much as 25 years behind the state-of-the-art.

Using today's knowledge of subsurface conditions and flow, complete removal or destruction of contaminants is not always possible. In some cases, quantities of pollutants will remain as part of planned remediation. Conceptual and numerical models, a key tool for making environmental management decisions, reflect our incomplete understanding of subsurface processes. In its 2000 report, the National Research Council reported that past predictions of contaminant flow through the unsaturated subsurface have been grossly in error. The uncertainty associated with conceptual and numerical models used to support remediation decisions has resulted in a lack of trust by the public and in excessive conservatism in the design of remediation approaches. A primary cause of the uncertainty in conceptual and numerical models is incomplete understanding of the behavior of materials (contaminants, nutrients, and microorganisms) in the subsurface: where and how the materials move and change over time.
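To make the point about model sensitivity concrete: subsurface transport is commonly idealized with a one-dimensional advection-dispersion equation, and even a factor-two uncertainty in pore velocity, well within the range of field variability, changes what a monitoring point sees within a 50-year window. The sketch below is a generic illustration with invented parameters, not DOE's model or any site-specific code.

```python
# Generic 1-D advection-dispersion sketch (explicit finite differences)
# illustrating how sensitive contaminant arrival is to pore velocity.
# All parameter values are purely illustrative.

import numpy as np

def transport(v, D=0.5, L=150.0, T=50.0, nx=300):
    """Explicit scheme for dc/dt = -v*dc/dx + D*d2c/dx2 on a uniform grid."""
    dx = L / nx
    dt = 0.2 * min(dx / v, dx * dx / D)   # stay well inside stability limits
    c = np.zeros(nx + 1)
    for _ in range(int(T / dt)):
        c[0] = 1.0                        # constant-concentration source
        adv = -v * (c[1:-1] - c[:-2]) / dx                   # upwind advection
        disp = D * (c[2:] - 2.0 * c[1:-1] + c[:-2]) / dx**2  # central dispersion
        c[1:-1] += dt * (adv + disp)
    return c

# A factor-2 uncertainty in pore velocity moves the plume front completely
# past an 80 m observation point within the same 50-year window.
for v in (1.0, 2.0):                      # pore velocity in m/yr
    c = transport(v)
    print(f"v = {v} m/yr -> relative concentration at 80 m: {c[160]:.2f}")
```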
WORKING TO DEFINE A SOLUTION

The nation has a critical need to reach a sound and defensible scientific understanding of how contaminants move in the subsurface through multiple geological environments. This understanding is needed to reduce the present uncertainties in predicting contaminant movement, which in turn will reduce or quantify uncertainties in water resource protection, remediation, long-term stewardship decisions, and future sustainable development.

The U.S. Department of Energy (DOE) recently completed two documents designed to coordinate science and technology activities to improve our understanding and management of the subsurface and our stewardship of waste sites. In August 2001, DOE published A National Roadmap for Vadose Zone Science and Technology¹. This roadmap addresses the knowledge and tools needed to describe and accurately forecast the processes controlling contaminant movement and the consequences of subsurface contaminants for our groundwater. In October 2002, DOE released for comment a complementary report, the Draft Long-Term Stewardship Science and Technology Roadmap². This report describes the capabilities necessary for containment and monitoring of contaminants at subsurface disposal and legacy cleanup sites specific to the nuclear weapons program, and identifies needed enhancements to these capabilities.

The product of these initiatives will be a dramatic improvement in our fundamental understanding of properties and processes. The new knowledge and understanding will be sufficiently developed, tested, and verified that issues of scientific uncertainty will exert far less influence on public debates over interventions or on regulatory procedures to implement public policy. The scientific basis for monitoring and remediation will be advanced to provide long-term protection of groundwater resources.
MEMORANDUM OF UNDERSTANDING

Government agencies with responsibility for ensuring compliance with environmental laws and protection of public health and the environment, members of the Environmental Council of the States (ECOS) and the participating federal agencies have developed a Memorandum of Understanding (MOU) to address LTS needs and activities at residual contamination sites. The purpose of the MOU is to provide a common understanding and basis for discussion and coordination between ECOS and relevant federal agencies regarding LTS. Given that multiple federal agencies conduct both cleanup and stewardship activities, a coordinated effort is needed to address LTS at these sites. Such a forum provides an opportunity for the parties to discuss LTS issues, policies, procedures, coordination mechanisms and generally applicable tools for LTS sites. This dialogue will help promote a greater level of consistency, effectiveness and public health and environmental protection at contaminated properties associated with federal government activities throughout the country, and should help foster a stewardship ethic in remediation and post-remediation activities. The parties to this MOU are: the Environmental Council of the States (ECOS); the U.S. Department of Defense (DOD); the U.S. Department of the Interior (DOI); the U.S. Department of Energy (DOE); and the U.S. Environmental Protection Agency (EPA).

THE PATH FORWARD

On October 1, 2003 the U.S. DOE will launch the Office of Legacy Management to play a key role in a new program designed to manage, document and monitor the dismantling of the agency's infrastructure from the Cold War nuclear weapons program. The mission of the office will be to provide for the long-term stewardship of legacy sites and for the improved health and safety of DOE workers. The office will ensure that as the Environmental Management program completes its clean-up mission, the legacy activities will remain a visible priority responsibility of the federal government. The functions of the office include: land management, environmental surveillance and maintenance, record keeping, and maintaining benefits for former workers.

The U.S. Federal government has a commitment to protect human health and the environment and to manage the long-term costs of winning the Cold War. A separate Legacy Management program will allow the Environmental Management program to focus on clean-up at DOE's sites. This focuses the Department on long-term legacy activities and increases the visibility of stewardship functions. The new office provides a direct line to the Secretary of Energy, and a budget and management separate from EM. Long-term stewardship activities are a visible, priority responsibility. The current DOE Vision on Long-Term Stewardship is as follows.
Vision: DOE will avoid, delay, or reduce the frequency or impact of harmful exposures to hazardous substances remaining after DOE cleanup projects and other operations are completed. DOE will ensure that the design, construction and operation of new facilities avoid creating waste and contamination problems that would require long-term stewardship. DOE will use improved technologies and institutional structures that improve the reliability and reduce the costs of long-term stewardship. The current process for developing science and technology solutions is conceptualized below.
REFERENCES
1. A National Roadmap for Vadose Zone Science & Technology: Understanding, Monitoring, and Predicting Contaminant Fate and Transport in the Unsaturated Zone, U.S. Department of Energy, DOE/ID-10871, August 2001, http://www.inel.gov/vadosezone/.
2. Draft Long-Term Stewardship Science and Technology Roadmap, U.S. Department of Energy, DOE/ID-10926, August 2002.
5. THE CULTURAL PLANETARY EMERGENCY: ROLE OF THE MEDIA
SPIN IN WAR AND PEACE
MICHAEL STUERMER
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany

When Henry Kissinger, on one of his many trips around the Middle East, crossed the Jordan River, he told his retainers, "There you see what public relations can do for a river". Henry himself, whether in or out of office, is a master in building his own legend. He knew the secret of spin long before the word was exported from tennis to politics.

In the modern media society anything can become the object of spin, especially matters of war and peace, power and legitimacy. They are too important to speak for themselves. Indeed, it was an ancient Greek philosopher, Heraclitus, who observed that it is not the deeds that make people tremble, but the words about the deeds.

The Iraq war, before and after, is no exception. It has highlighted a new state of U.S. hegemony - and, indeed, imperial overstretch - and the worst crisis in U.S.-European relations for over a generation. In the helter-skelter course of events, statecraft and the long term were sacrificed to spin and the short term by all sides. Besides, most players have lost sight of the fact that Asia, from the shores of the Mediterranean to the Taiwan straits, will be the 21st Century's cauldron of crisis. Iraq is only a beginning. More is to come, whether we like it or not.

In the modern media society of the west, reaching out via al Jazeera, Arabia TV and other mass media to the Greater Middle East, TV creates its own reality, in fact its own moral and political universe, with responsibilities at best unclear, at worst sinister. In and around Iraq, the media war was a sequence of events with their own momentum, their own laws, their own victims - not the least being Mr. Kelly, the BBC informer who seems to have cracked under pressure and committed suicide near his home in Oxfordshire. The media war was not so much about straightforward military facts - which both sides, whether winning or losing, had good and irrefutable reasons to twist, to hide and to manipulate - but much more about the politics of war and, above all, the centuries-old question of a just war: in fact the battle raged over who would conquer, maintain and, in the end, control the moral high ground.

TV of course won the day. Who can escape the suggestive power of pictures of life and death? But it should be noted that TV, by and large, has come to rely on pictures, not on ideas. It is strong on impressions and short on analysis, let alone concepts. It is the "biblia pauperum" of the modern age.

Let me give you one telling episode. A few days into the war, the London Economist, pro-war, came out with a front page showing a U.S. Marine walking from left to right in a sandstorm. Four days later, the same soldier was on the front page of the anti-war Der Spiegel, this time walking in the opposite direction. The Economist carried the caption "The fog of war", alluding to Clausewitz and the usual uncertainties surrounding what is happening on the battlefield. Der Spiegel was more specific: "World power in the sand", alluding to a German idiom that refers to a hopeless failure.

What happened meanwhile was a very different story. The "embeds" with the U.S. forward columns reported time and again that the U.S. troops were unable to see and move, let alone fight a battle. Saddam was led to believe that he could outflank the American advance.
The Medina Division, one of the three reliable divisions in his portfolio, was dispatched and practically annihilated from the air, a ready victim of what has been called network-centric warfare. Meanwhile, neither
Comical Ali back in Baghdad nor General Tommy Franks in Kuwait could be expected to tell the truth - as the truth, or any distortion of it, is invariably part of the conduct of warfare. One side effect of the war was the perfection of neologisms, from "embeds" to "neutralising" to "friendly fire". The bloody and deadly nature of war, any war, was to be kept out of sight of the general public. The real grisly pictures did not appear on TV before the death of the two Saddam scions, long after the official end of hostilities, had to be documented to Iraqis of all denominations and beliefs.

In the run-up to the war the immediate threat of WMD was "jazzed up", to use a now familiar phrase, and a link between Al Qaida and Saddam was alleged. But in reality, neither was very likely. To use biological and chemical weapons against armoured columns dashing forward at high speed would have been useless, indeed folly - as every student of those weapons and their uses must know, including Iraqi generals. Similarly, the link between Saddam, a secular ruler and a butcher of clerics, and the ultra-pious followers of Al Qaida would have been totally unlikely. Add to this the wild story of yellowcake being brought from Niger to Iraq for the production of nuclear weapons - a story discarded long before, not believed in the State Department and not supported by the CIA - and you have a picture that is worrying at best, and dangerous at worst. Because a government's credibility goes first, and what follows is trust and legitimacy. Next time, when the danger is real and immediate, intelligence services and governments will have a hard time convincing the doubters - perhaps until it is too late. Terror and WMD are indeed a hellish mix, one which requires some rethinking of holy beliefs and firm assumptions from a past that was very different.

So the Iraq war was, in technology and strategy, a war totally different from any Cold War scenario. But it was not yet one of the wars of the future, when proliferators not only possess WMD but are also hell-bent on using them. Moreover, it had nothing to do, except in the White House mindset, with global terror of the 9/11 variety.

There are indeed some serious lessons to be learnt from the events before, during and after the Iraq war, and the final verdict of history will not be formulated according to the quantity and quality of WMD but by whether the war and its aftermath brought about a more stable Middle East framework, with beneficial effects on the entire area, including the Israeli-Palestinian negotiating process, or whether war and civil war will ensue, leaving more WMD in more illegitimate hands, and the world's oil in close proximity to the world's most uncontrollable crisis.

The most serious consequence, however, is the fact that from now on, when governments pronounce a clear and present danger and the need to pre-empt, the general public will be extremely sceptical. The first victim of any war is always the truth. The second, and probably more lasting one, is legitimacy. Britain's PM, even more than the U.S. president, finds himself in a profound crisis of credibility. Spin can do a lot for public policy - including immersing it in a long-lasting crisis.

The UN played many roles in all of this - but certainly not that of a world government. Indeed, the chief weakness of the UN is that it has, through its charter, the supreme legitimising role - but none of the wherewithal.
Indeed, the crisis started as a crisis of UN sanctions: 16 resolutions that Iraq should open all its arsenals, and very little pressure to enforce them. The U.S., instead of assuming for itself the role of attorney of world order, was unsure whether to go with the UN or against it. With UNSC resolution 1441 it seemed that a basis for joint action had been found. However, it turned out that the serious consequences threatened by 1441 were, after all, less serious for France and Russia, with their vast oil and debt interests inside Iraq, than for Britain and the U.S. The route via human rights and regional stability, an
alternative option presented by the UN charter and its contemporary interpretation, was never tried. So the authority of the Security Council, not very strong anyway, has also to be counted among the victims of the war. The Secretary General is now trying to repair the damage. He points out that, whatever rifts there were in the recent past, the overriding task is now to reconstruct Iraq after Saddam. Most Europeans tend to follow that argument, but not all of them. France and Germany, unwisely, insist on their pound of flesh.

What has happened has serious repercussions on world order, on the Western alliance, on U.S. leadership, on Europe's strategic cohesion - but above all on the future management of terrorism and WMD and any combination of the two. Both WMD and intelligence, by their very nature, require anticipation, imagination, conceptual thinking and, if worst comes to worst, pre-emption - the latter with or without cover under Art. 51 of the UN Charter, which allows action in self-defence but requires that the defender report to the UNSC as soon as possible. Our concepts of international law lag far behind the reality of threats in our time from WMD, the privatisation of war, failing states, terror and the relative ease with which all of the above can be accomplished. The laws and procedures of the UN are largely outdated, and the world organisation lacks the executive power to bring a miscreant to justice.

If, at some time, the U.S. should withdraw from the evils of the world behind the security of two oceans, contrary to the views and illusions of many critics of the U.S., the world would not be a better place but a playground for chaos. Trust is the ultimate glue that keeps modern states together, and no less modern alliances. The Iraq war has shattered much of this: some governments have jazzed up intelligence findings and justified pre-emptive war with unlikely assumptions; others have looked for domestic profit irrespective of the cost to international security and trust.

What remains is a contradictory picture: on one side the rise of WMD, the failure of non-proliferation, failing states and private wars; on the other side the weakness of the UN, the divisions within the Western alliance, the resentment of the general public and the loss of trust. The frolicking about the end of history is over. The world is still a dangerous place, even more so than during most of the Cold War. World order, if it were not for the U.S., is nothing but an empty concept, fraught with illusion. WMD are great equalizers, promising invulnerability to the uninitiated, and, along with global terror, they are the weapons of choice of the weak against the strong.

Iraq is only the beginning - and its outcome is still very much open. The real test, very soon, will be over Iran and North Korea. Mercifully, the more reasonable countries seem to have learnt a few lessons from the past, above all that the Iraq disaster must not be repeated. War, in the famous words of General de Gaulle, brings things to light that otherwise remain obscured.
6. THE CULTURAL PLANETARY EMERGENCY
CULTURAL INTOLERANCE

AHMAD KAMAL
Senior Fellow, United Nations Institute of Training and Research, New York, USA

"It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us..." - Charles Dickens

Never in the history of the world has mankind had such powerful tools at its disposal, in communications, in medicine, in armaments, in access to knowledge, in everything. The speed of communications has made space and time curve around itself as we move closer and closer towards cheaper travel, instant emails, and awareness of distant events in real time. Advances in medicine have tapped the remotest corners of isolated rain-forests to unfold the mysteries of natural chemicals and brought them to our doorstep in our local pharmacies, while at the same time unravelling the innermost secrets of our being in an effort to clone us into our bionic alter-egos. The destructive capacity of armaments has reached proportions that give us a glimpse into Armageddon and hell. And for the first time in history, access to information and knowledge has become a universal phenomenon, with no differentiation of race or colour or sex or belief. With such tools at our disposal, this should be about as close to paradise as we can get.

Alas, at the same time, never in the history of the world has there been a greater gap between the potential of the available tools and the actual delivery of results in an ever-shrinking world. The gap between the rich and the poor continues to grow, and has in fact widened faster in the past decade. In a world where much is touted about perfect markets and economic opportunity, one third of the world has no access to safe drinking water, one fourth of the world has seen its per capita income actually decline in just the last decade, one fifth of the world lives on less than a dollar a day. Conflicts and tensions abound; the death toll of the local, regional and ethnic wars of the last fifty years is greater than the total losses of life in World Wars I and II combined. Surrounded by poverty and pestilence and endemic disease, shunned or ignored by the rich and the powerful, for many in the world today life is a living hell.

The question then is, why this paradox in a co-existence of contradictory realities? How does it happen that with all the tools and the knowledge in our hands, we fall so short of the desired results? What prevents us from unleashing the pent-up potential in vast populations for the betterment of mankind as a whole?

There may be many reasons for the gap between capacity and delivery, but a shortage of resources and facilities is not one of them. We have an enormous surplus of wealth in the world, and to the extent that wealth is a fair indicator of the availability of resources, there is obviously no problem in the latter. So, what prevents us then from moving towards a more equitable world?
The main problem appears to lie in cultural intolerance, and in an artificial division of the world into "we" and "they". Only our own problems and security and incomes and quality of life are important; the rest of the world becomes largely secondary or invisible on our radar screens. In many cases, even our basic knowledge of others, and of their problems and cultures, is sadly deficient. Slogans and sound bites become the basis of our assumptions about others, and with our minds thus made up, we do not then want to be confused by facts.

For all of us, knowledge is after all an intensely personal experience. Whether we like it or not, all that we know is limited and contained inside each of our respective brains. We tap into the unlimited external databases of knowledge, but what we retain is only what each one of us soaks up and imbibes from that database. So all knowledge becomes subjective in the ultimate analysis. There are no truths, only our own perceptions of the truth.

From there, the jump into "self-centrism" is automatic. Each one of us has no choice but to place ourselves at the centre of the world, since all our knowledge is so completely centred in each individual brain. "I alone am at the centre of the universe; everybody else is outside". Once we begin thinking from this arrogant perspective, it is easy to extrapolate this self-centred view of the world to one in which the focal point or centre lies not just in each of us, but also in each of the cultures to which we belong, with each culture seeing itself as central or superior, and all others as subordinate and possibly inferior.

How easy it then becomes to forget the commonality of our origins and the true history of human evolution. By all scientific evidence, humankind started somewhere in the Rift Valley of East Africa, and it is from there that the great historic migrations took us across into the Middle East and Central Asia first, and then into the different branches which spread into South Asia, into Europe, and finally across the Bering Straits into North and South America. That is the scientific origin of the human species, and the history of human migrations. Thus we are all Africans in our origins and ancestry. The fact that our geographical migrations out of Africa have subsequently developed into different cultures and ethnicities does not belie our common origins.

If there are differences among us, in colour, in language, in food patterns, in thinking processes, these are no more than environmental variations on a central theme that remains common to all of us. Neither our common origins nor our outward differences should cause apprehension or concern. The latter particularly should be a source of wonder, and should coax us into an effort to learn from each other as we follow the impact of geography on commonalities. Only an intelligent and open-minded study of differences can lead to dialectical analysis, and only dialectics can lead us into the critical thinking that is of the essence in our search for truth.

That is unfortunately not how most of us see the problem. The self-centricity and ethno-centricity from which all of us suffer has made each of us feel superior to all others. An effort was made by all religions to bring us back to reality with lessons in humility, but these have remained largely unsuccessful. All are guilty of transgression.
In the Far East, we saw China consider itself the self-contained centre of the world and for many centuries refuse all contacts with all other countries. Next door, the Japanese convinced themselves that they were, in any case, descended directly from heaven, and were thus a class apart, far from the maddening crowd of all other populations.

In the Middle East, which saw the extraordinary birth of the three great monotheistic religions of the world, each of these three religions saw itself as the final truth, and looked down with relative scorn on the preceding ones. This is quite surprising, considering that each of these three monotheistic religions believes in the same one God, and each believes in all the Prophets of the past. Be that as it may, these three religions, with all their continuity and commonalities, have been responsible for some of the greatest tragedies of human history, and have brought a bad name to faith and religion itself. That is even truer in current times, as religious extremists and cult members start disrespecting human life and innocent bystanders, and embark on warped terrorist acts.

Meanwhile, Western civilisation has distinguished itself through recorded history with perhaps the worst excesses of arrogance and intolerance. From the initial mobs of the Crusaders, to the dark tortures of the Inquisition, to the inhuman treatment and pursuit of the Huguenots, to repeated Pogroms, to outright Slavery and Segregation, to the permanent blot of Colonialism, to the abhorrent racism of the Nazis, to continued Neo-Imperialism, the list is endless, as is the injustice and heartache that it generated.

If only this cultural intolerance were a thing of the past. We could then let historians analyse the where, and the what, and the why. But it continues to fester and remains formative in our thinking processes even today. Consider some of the following examples.

One of the important elements of contemporary history lies in the multilateral experience, as exemplified in the United Nations and in other international organizations, both governmental and non-governmental. The basic idea was to replace the rampant self-centred nationalism that had given rise to endless wars over previous centuries, and to two World Wars in the last century, with a system under which decisions of common import would be reached by debate and compromise and consensus, and not on the battlefield. That inevitably required a commitment to peace and non-violence as a "common objective", and to the Rule of Law. The subsequent results are far from reassuring.

Take the balance sheet of the most important Member State of the United Nations, one which initiated the very idea of the Organisation, and was responsible not just for preparing the original draft of its Charter, but also for its very establishment, and much of its initial financing. A partial list of the internationally negotiated treaties and conventions that it has not signed or ratified is illuminating: the Comprehensive Test Ban Treaty, the Statute of the International Criminal Court, the Convention on the Elimination of Discrimination Against Women, the Convention on the Rights of the Child, the Treaty on Anti-Personnel Landmines, the Kyoto Protocol on Global Warming, etc.

It is perhaps unfair to single out one country. Contradictions abound here between theory and practice, not just in one, but in many countries. The basic argument in all these cases is one of "selectivity", namely that laws apply only to others.
Each one of us is a king, and since the king is above the law, he
should be able to pick and choose the rules that he wants to follow. Such exceptionalism flies in the face of the whole concept of the Rule of Law, just as it negates the very fundamentals of Democracy. Some of this may be because of the enormous power that some Western countries have built up, either by the ruthless exploitation of other populations or lands, or by the measures taken to protect and expand their own economic dominance. Some of it may however be due to the mercantilist feeling that only one's own interests are relevant or important, thus excluding or denying the whole interconnected nature of the world.

Consider other examples. The Non-Proliferation Treaty, which is considered the cornerstone of the international effort against nuclear weaponry, clearly makes a discriminatory definition of the threat when it allows five countries to retain nuclear weapons while it asks all others to give up the option to have them. The Security Council of the United Nations gives the same five countries the right to occupy their seats without going through the process of elections, and further gives each one of them a veto which can fly in the face of the political will of a full 190 other Member States. And we have all been numbed into believing that this is normal and acceptable. Alas, the crimes that are committed in the name of democracy...

Or consider the continued problems in trade flows. Despite all the talk of free markets and open economies, despite the existence of agreements negotiated and agreed after years of arduous debate in the GATT and the WTO, trade flows remain seriously constricted by protectionism and subsidies and artificially low commodity prices. Vast sections of the world's population, in fact its majority, are thus denied the ability to feed themselves adequately, while cats and dogs in developed countries dine on caviar and foie gras. Note that each cow in Europe is subsidized at a rate greater than the total per capita income of around two billion people in the world. "If they do not have bread, let them eat cake".

Or take the whole issue of migration. Human migrations are, as we well know, part of the fundamental forces of history. They have transported each one of us to where we are located today. And yet, new immigration laws and restrictions, developed only in the past hundred years or so, now place insurmountable impediments against human movements. In the process, such restrictions constitute a basic denial of the most fundamental and historic human right, to move freely in search of a decent life. And we talk glibly of human rights. Surprisingly, the most stringent restrictions exist in those countries which tout themselves as champions of human rights and democracy, or pride themselves as "melting pots" where pioneers and immigrants have built up their own achievements in recent centuries.

For thousands of years, human history has attempted to move away from the divine right of kings to do no wrong, towards concepts of responsibility to fellow man, and to democracy and the rule of law. That is not a dream; it is the inevitable march of history, of the unfolding of social rights and social responsibility, as human beings move constantly upwards from their lower caveman instincts to higher reasoning. It is also part of the spiritual underpinnings of society, and of all faith and belief.

The question then is, is this situation of cultural intolerance and intellectual violence one which is inherent in our self-centred human metabolism, one that we
will all have to just accept and live with, or is it permitted to us to question, and hope, and dream, and to propose solutions? The question is more important than it sounds. This is, after all, not an academic argument, but one that is determining much of the actions and situations in which we find ourselves hopelessly enmeshed.

We live in a world of enormous cultural diversity, rich in the warp and weft of its texture, nuanced in its tones and colours. Surely those cultural differences need to be studied and examined with due humility, so that the learning curve becomes enriching, rather than starting from the conviction that all others are inferior, that we alone carry the heavy burden of truth and knowledge, and that all others are no more than incarnations of the devil. There is a continuing danger that such arrogance will not only engender the type of frustrations and hostile reactions that we have witnessed in the recent past, but will also erode the very principles of civilized behaviour that led to the glorious revolutions of the past, and which were then further refined over time into universal precepts and principles.

Our modern age has all these principles laid out before it: democracy, or the respect for the majority; human rights, or the respect for the minority; empathy, or the relative importance of the speck and the mote; dialectics, or the constant search for truth; dialogue, or debate with words and ideas, as opposed to swords and violence. Let history not judge us harshly for the gap between our precepts and our practice. Let wisdom and tolerance not be sacrificed at the altar of arrogance and instant self-gratification. That is the true planetary emergency today.

With dialogue and understanding, and empathy and humility, all problems can be resolved. With cultural arrogance and superiority, none of them will.
THE IMPACT OF THE PLANETARY EMERGENCIES ON WORLDWIDE PRODUCTIVITY AND COOPERATION WITH THE INTERNATIONAL SCIENTIFIC COMMUNITY
H.E. PROFESSOR ANTONIO MARZANO
Minister of Productive Activities, Rome, Italy

1 - In the last two centuries, under the influence of the Enlightenment and of industrial transformation, there has been an evolution of the prevalent culture. Sensitivity towards the events of nature has declined. This has happened due, on the one hand, to scientific progress and, on the other, to the reduced impact that natural disasters have had on the economy (with the passage from agriculture to industry). For almost two hundred years, the conviction that mankind's control over nature would become ever more complete has distanced the attention of public opinion and governments from the growing imbalances in nature.

Peasant culture was much more attentive to, and almost dominated by, external natural events. It is no accident that industry was labelled 'manu'-facturing, i.e., dependent on the hand (mano) of man and not on what happens in the fields. Not that there was a lack of the occasional upsetting event. Such external events, however unfortunate they might be, were not considered very relevant from an economic point of view. The velocity with which wealth was growing, thanks to industrialization, was more than adequate compensation for the earthly, material damages being incurred. These are remembered not as economic damage; instead they remain in our collective memory as part of the natural drama of mankind.

Sensitivity towards environmental issues remained marginal for centuries. One can say that this theme surfaced at the level of public opinion only in the last thirty years. One can also say that it is within that span of time that one finds the birth of political organizations dedicated, in their statutes, to addressing (not always in a rational manner) the environmental question.

Today, sensitivity towards environmental catastrophes is much more acute. It seems to me that the turning point came with the awareness that the stock data on accumulated wealth had become more significant than the data on the annual production of wealth. The accumulated wealth is enormous, while the rate of growth of annual income has weakened, passing for example from 7-8% in the 1950s to 2% annually. To repair the material damage caused by destructive events that disrupt the huge amount of existing wealth would cost a much greater number of years of income flow. Another stock variable has come to assume greater importance: environmental damage also accumulates. When an industry produces environmental damage that cannot be repaired as rapidly as it is created, this damage continues to accumulate. A typical example is the greenhouse effect. The stock of environmental damage grows, and it soon reaches the point at which it becomes a collective problem.

I would like to add that the urban concentration of the population has also contributed to highlighting this problem. Concentration itself sharpens the environmental problem, as contrasted with the case of a lesser density, i.e., of a territorial diffusion of the population and of the productive structure. And so this brings me to explain, from an economic and political point of view, the cultural evolution that we have noted in times that are relatively recent with respect to the greater past.
2 - What response could one rationally give to the worsening of these problems, and to the fears of public opinion? It does not seem to me that one could propose a return to the past, that is to say, the pre-industrial past. That past, contrary to what many think, was not at all an Arcadia. Arcadia, in fact, never actually existed. The pre-industrial conditions of life were, from any point of view, conditions of great poverty, of great inequity, of great sanitary backwardness. It was a past in which the life of man was short and exhausting. The cities were filthy and dark. The hours of work were intolerable. The average life span was dramatically lower than today. A return to an epoch like this would itself be a catastrophe. It is enough to think about the manner in which the populations of entire continents that have never known industrial development still live today. Many of us visit these places in order to rediscover things that we no longer have: uncontaminated nature, the untainted traditions of life, more time spent on contemplation than on productivity. But we would not choose to live this life. Our curiosity lasts the length of a tourist trip, except in a minority of cases, the noblest of people who choose to dedicate their lives to alleviating the suffering of the local inhabitants. No, the return to the past is not the right choice. This must be said because whoever preaches a return to the past can easily find consensus among those, above all young people, who are generically unsatisfied and would like a different world. Yes, they want a world that is different, as their saying goes. Different, but not worse.

3 - That being said, we certainly do not undervalue the problems that we have before us, but they must be analysed rationally. The theme is broad and multifaceted; at this point I can only offer a few examples.

The first example is the problem of water. It is especially difficult to ignore this problem on an island like Sicily, which is often gripped by water shortages. Furthermore, the year 2003 was declared by the United Nations to be the international year of water resources, thus bringing a great deal of attention to the planetary problem of assuring greater availability and better usage of water resources for the world.

First, only about 1% of the available water on our planet is suitable for human consumption. In addition, the growth of pollution resulting from agricultural as well as industrial activity tends to reduce the level of effective availability. At first sight one would have to deduce the existence of a structural insufficiency of this vital resource with respect to the expansion of human needs. But this resource is also renewable at a very high frequency, although a large part of it is wasted. It is estimated that about half of the water introduced into the water system is lost owing to inadequate infrastructure. We need instead to consider the prospective growth of needs; these depend largely on population dynamics, the rising standard of living and the expansion of industrial and agricultural usage. Agriculture, specifically, absorbs about 75% of the water drawn from rivers and underground strata; not only is water used intensively for cultivation, but very few techniques to save water and to control its dispersion are used. In light of these factors one could presume that in the future this will help to create a rising, demand-based pressure on a quantity of resources that tends to be stable.
All else aside, the statistics relating to the American economy, which represents one of the world’s highest levels of consumption per inhabitant, indicate
that the intensity of need in relation to population dynamics and to income levels has been in significant decline since the 1980s. The fact is that the American economy is now making better use of water, both in industry and in the family. Therefore, one cannot sustain the position that it faces a structural tendency towards scarcity of this vital resource in relation to national needs.

In reality we find that, in many parts of the world, water resources are poorly managed and poorly utilized in relation to their different destinations of use. Being an article of vital consumption, in different areas water tends not to be treated as an item that is subject to the laws of supply and demand. Making it freely available to everyone has, in many countries of lower per capita income, produced results that are the opposite of those considered desirable. It has been observed, in fact, that if a country invests very little in the development of the resource, it tends to make very poor use of what little is available. As a result, the poorer classes end up having to sustain the relatively higher costs of buying water from private vendors. The consequences are seen in precarious living conditions, lack of sanitation services and a high incidence of illnesses due to lack of hygiene (according to some estimates, 60% of illnesses are caused by deficiencies in the water system), all because these countries remain embroiled in the trap of economic underdevelopment as well as in traps created by human beings themselves.

Among the types of intervention that may help to confront this problem, the first in line is tariff policy, on both the price and the use of water. The determination of prices cannot act as a means of selecting consumers on the basis of income, water being a vital necessity for everyone. Nevertheless, it should function as a social sharing of the considerable costs of investment and of resource management. It should also function as an incentive towards more efficient use, in order to help adjust practices that may be deep-rooted in the population (e.g., certain irrigation techniques in agriculture) but that are not very efficient. There is, in particular, an ample margin for reducing the extent of water use in agriculture, and price mechanisms are suitable in this context.

The private sector has an important role to play in improving access to water resources. This sector can participate in financing investment projects, in infrastructure construction and in improving the management of supply services and the treatment of water. Considering the risk that, water supply being a natural monopoly, private interests may prevail over those of the community, it is indispensable to establish a public regulatory body with a well-defined regulatory mandate and entrusted with sufficient power. It must be publicly accountable for its work, being subject to judiciary intervention when the community is dissatisfied with its work.

The second example is that of energy. Energy is a fundamental factor of development, an essential key to continually improving the quality of life. The availability of energy is an indispensable condition for the growth of both developing and emerging countries, whose actual per capita consumption averages only 9% of that of North America and 16% of that of Europe.
Energy, on the other hand, in all its forms of transformation and utilization, is responsible for a significant proportion of polluting emissions into the environment; so here we find that we also have to confront one of the planetary emergencies, climate change, with the simultaneous awareness that we will not be able to reduce the use of energy.
177
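As a rough check of the arithmetic behind the WEO projection just cited (my own illustrative calculation, not a figure from the report), a 50% increase over the thirty years from 2000 to 2030 corresponds to a compound annual growth rate of

\[ g = 1.5^{1/30} - 1 \approx 0.0136, \]

that is, about 1.4% per year: a modest-sounding rate whose cumulative effect on demand, and hence on emissions, is large.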
However, to ensure the stabilization of the concentration of CO2 at safe levels before the end of the century, from 2020 onwards the response to energy demand will have to be based on the ever wider use of renewable sources, of hydrogen technologies and fuel cells, of "clean" and highly efficient technologies for the use of fossil fuels, and of technologies for the "confinement and sequestration" of carbon dioxide. This is the scenario of the so-called "decarbonisation" of the economy.

Hydrogen merits a discussion of its own, for it could be an "ideal" vector as regards local environmental impact. In the scientific sector there has been talk of hydrogen for over 25 years, but it has now passed into the sphere of serious research, development and demonstration, and the recent initiatives of the European Commission and the US Administration have given credibility to this prospect. The success of hydrogen, which does not exist naturally in its free state, will obviously be linked to its production from renewable sources, from nuclear power, or from coal with the associated "confinement and sequestration" of carbon dioxide: only in this way can the virtuous cycle towards which all countries must tend be primed. During the (not short) transition period towards this objective, which calls for a great investment in research, all activities linked to hydrogen technology will constitute an essential part of the know-how to be acquired for the final goal. At this stage a fully coherent programme emerges, linking the development of fuel cells, hybrid motors, the production of hydrogen from coal with the capture and confinement of carbon dioxide, and new hydrogen turbines. I would remind you that Italy recently took part in an internationally organized effort launched by the United States, the "Carbon Sequestration Leadership Forum", and joined its first project, "Power Gen" (a research and development project for the production of electricity and hydrogen from coal with almost no emissions).

But to return to the complex energy picture: in addition to intervening in production, one can also act on the front of energy demand. Here priorities converge on the efficiency of final use, just as they did for supply services.
The opportunities for intervention differ greatly from country to country. Italy, for example, is considered an efficient country because of its contained "energy intensity" (energy consumed per unit of gross domestic product).
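In symbols, using the standard definition (the numbers below are illustrative, not official Italian statistics), energy intensity is

\[ I_E = \frac{E}{\mathrm{GDP}}, \]

so a country consuming, say, 180 Mtoe of primary energy with a GDP of 1,500 billion dollars has an intensity of 0.12 toe per thousand dollars of output; efficiency policy aims to push this ratio down over time.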
There remain, however, many opportunities to be seized: sectors of consumption in which it is possible and necessary to increase efficiency, and measures already in use that need to be pushed harder. To these ends we are launching an interesting instrument called "white certificates", negotiable on the market like securities and rewarding ever more virtuous consumers; the initiative parallels that of the "green certificates" already launched to promote renewable sources.

In the end, however, it is important to keep in mind that diversification of supply away from fossil fuels demands an extraordinary effort of research and innovation, a true technological "shock", in order to create new and economically practical sources of "clean" and safe energy. I would like to underline the decisive role of research and technological development in confronting the planetary emergencies we address here. Until now, in fact, I have spoken mostly about how best to manage the problem of climate change due to greenhouse-gas emissions "rebus sic stantibus", with only a hint at the technological innovations that would allow us to confront the essence of the problem and resolve it at its roots. So that science, technology and the availability of qualified human resources can constitute a decisive tool for truly overcoming these planetary emergencies, it is worth stressing the importance of research on both supply and demand; this too is homework for governments.

The third example I would like to refer to is that of world hunger. It is a humanitarian catastrophe that strikes millions of people, 50% of whom are children under the age of 15. The West does not do enough to eliminate, or even significantly to attenuate, this catastrophe. The problem is aggravated by the fact that our aid is often wasted by corrupt political regimes and by their propensity to use resources only for military purposes. In this context, technological research has proposed Genetically Modified Organisms (GMOs). But GMOs are typically debated in an ideological manner, without practical indicators. It is a debate driven by currents of opinion that consider the advancement of technology in general to be a negative fact or that, for ethical motives, oppose the modification of the DNA of plants. Today, GMOs are being attacked in the same manner the atom once was.

Economics suggests, in reality, that we evaluate this and similar problems by the method of costs and benefits. If the benefit is freedom from hunger (and, from the ecological viewpoint, the reduced use of pesticides), it seems difficult to me to find arguments of greater ethical value. There are costs, naturally, of two kinds. The first concerns the protection of consumers, but I maintain that this can be overcome with clear information indicating the presence of GMOs and with scientific guarantees of safety. The other is the high cost of producing GMOs, especially for cultivation in Africa: here there is hope that the latest technological advances can reduce some costs, beginning with those related to irrigation. These are real problems. But a prejudiced attitude towards this prospect must be rejected, both from an economic and from a moral point of view.
The President of the "Pontifical Council for Justice and Peace" has expressed this in exactly these terms. The Italian government proposes to raise the problem during its Semester of Presidency of the European Union.
In conclusion, the problem of Planetary Emergencies, in all its manifestations of discontinuity in the balance of nature, probably cannot be removed completely from the human condition. It can, however, be confronted more or less efficiently, depending on a few conditions.

The first two are political. Culture, before everything: I mean a sensitivity shared in public opinion and, over time, a propensity to discuss these issues with a sense of realism and without ideological prejudices. This cultural transformation is the first objective for politicians. The second condition is Democracy: tyrannies pursue other objectives, and they are not sensitive to planetary problems.

The second group of conditions is technical in nature, and mostly economic. It is necessary to use the economic method of cost-benefit analysis. On the cost side, one must first evaluate and then deploy compensations that are acceptable and attractive to the part of the population most directly affected, making use of the best current technologies. This holds in general for the energy sector, for example, just as it does for toxic or nuclear waste or GMOs. If the inhabitants of a certain area accept to host toxic waste, it is right that those of all the other areas thus freed from the problem confer on the former the benefit of compensatory resources.

The last condition is the development of scientific research and technology. Only this path can provide us with significant practical responses. This holds more generally too, beginning with the monitoring of astronomical, volcanic, marine, flood and climatic phenomena, and extending again to energy, water, poverty, hunger and epidemics. The humanitarian and economic damage is enormous: 35 billion dollars in 2001; 55 billion dollars in 2002. The financing of research in these sectors should be proportional, with a coefficient of a probabilistic type.

Allow me to say at this point that the true Planetary Emergency is opposition to scientific and technological progress; and, in second position, but not far behind, the far too short time horizon of the actions of governments.
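One way to read that "coefficient of a probabilistic type" is as expected-loss weighting: fund research on each hazard in proportion to the probability of a damaging event times the damage it would cause. The short Python sketch below is my own illustration of this idea, with entirely hypothetical probabilities, damages and budget; it is not a method specified in the text.

    # Hypothetical figures only: weight research funding for each hazard by
    # expected annual damage = probability of a major event x damage if it occurs.
    hazards = {
        "flooding": (0.30, 60.0),  # (annual probability, damage in billions of USD)
        "volcanic": (0.05, 20.0),
        "epidemic": (0.10, 30.0),
    }
    budget = 10.0  # total research budget in billions of USD (assumed)

    expected_loss = {name: p * d for name, (p, d) in hazards.items()}
    total = sum(expected_loss.values())
    for name, loss in expected_loss.items():
        # each hazard's share of the budget is proportional to its expected loss
        print(f"{name}: expected loss {loss:.1f} bn/yr, "
              f"research share {budget * loss / total:.2f} bn")

On these made-up numbers, flooding absorbs about four fifths of the budget, which is the intended behaviour: resources concentrate where probability and damage are jointly largest.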
WAR ON TERRORISM: A SEARCH FOR FOCUS

VICTOR KREMENYUK
Institute of USA Studies, Russian Academy of Sciences, Moscow, Russia

ABSTRACT

The war on terrorism has become an obsession for governments and the public at large. A tremendous amount of time and money is spent on what seems, from the point of view of conventional wisdom, to be a relevant and urgent policy task. But in reality what is done is rather far from the desired goal and may even contribute to the further growth of terrorism.

INTRODUCTION

What is terrorism? Which type of terrorism is currently under discussion? The current literature is rich in descriptions and explanations of terrorism. From it we may get the general notion that terrorism is a policy or strategy that uses intimidation, threat and blackmail as tools, as a method of achieving success. A more specific definition is so far unavailable; what we have instead is an operational description. But even so, terrorism may and should be regarded in two main aspects.

The first is "legitimate" terrorism, so called because it is used by formal structures (governments, police, armed forces) to force their opponents to adhere to certain international or domestic rules. The best and most demonstrative case in international relations is nuclear deterrence, which played the central role in the days of the Cold War and continues to do so today in some dyads: Russia-USA, USA-China, Russia-China, India-Pakistan. The policy is abhorred and rejected by many, but exists officially as a national security strategy. In domestic affairs the same "terrorism" is used as a method of intimidating criminals and would-be criminals. The method is sanctified by millennia of existence.

The other dimension of terrorism is the illegal use of intimidation and threats, their use by outcasts, by criminals or by revolutionary groups as a tool in their struggle against oppression. This type of terrorism has the same long history as the first type and counts among its most spectacular cases events as ancient as the assassination of Julius Caesar by conspirators and as consequential as the assassination of Archduke Franz Ferdinand.

Evidently, in all current discussions of "terrorism", the second notion is the focus of attention. But in discussing it we must always remember that "terrorism" as a policy, as a tool, as a mechanism, was invented by the authorities and, to a large extent, continues to exist precisely because it is a "legitimate" method of forcing people to comply with existing rules and laws. This aspect is important because it completely contradicts the idea of the "contrat social", regarded by many as the essence of social organization, while in reality it is often the brutal threat of force that forms the consensus in society. Understandably, this should in no case be regarded as a justification of the "second" terrorism. Every act of violence provokes a feeling of disgust among psychologically sane people, including the use of force by those who claim that their violence is only a "response" to official violence. But for the sake of fairness, as well as for the sake of the validity of the analysis, we must understand that the disgusting world of terrorism consists of two pillars, and the first, official terrorism, is not only of the same origin as the second: it also creates the conditions which give birth to "illegal" terrorism.

WHAT IS "ILLEGAL" TERRORISM? IS IT WHAT WE STUDY?

"Illegal" terrorism, the use of violence by criminal or other opposition forces in response to coercion by governments, or as a means of promoting their goals irrespective of the government's actions, is a rather ancient problem. There have always been people, and conditions, in which violence was legitimized and regarded as appropriate and even necessary by extremists, whether political, religious or traditionalist. Sects of assassins existed in Islamic countries; groups of carbonari and other revolutionaries, as well as Mafiosi, existed in the enlightened nations of Europe. Russian nihilists and narodniki turned the life of Russian society in the late 19th century into a nightmare, killing even the Tsar, Alexander II.

This type of terrorism appears to have been born of the conditions of struggle between the upper and lower groups of society, when social Darwinism was regarded as a norm for relations between classes. Actually, Marxism was, in a way, one of the offspring of this major social trend, because it proclaimed revolution and dictatorship as means in the struggle for justice and equality. Terrorism, from this point of view, is a blend of certain specific psychological attitudes with the feelings of despair and hopelessness which so often accompany the lives of millions. Simple poverty leads to the growth of crime, when people hope to make their lives better through the illegal acquisition of other people's lives or property. What leads to terrorism is poverty magnified by injustice, by the demonstrative unfairness of life, which not only humiliates the oppressed but leaves them without any hope for justice and a better life.

So, what we discuss as a major issue in global emergencies is the product of a certain policy and social structure, magnified by psychological deviations caused by religion, ideology and human feeling. This is a complicated phenomenon that has no simple solution and in no case should be treated as a police operation, as it is in the official strategies of governments.
WHY IS THIS SO IMPORTANT?

Besides the threat to the lives of important people or to important objects (palaces, monuments, infrastructure), terrorism has always played an important role in challenging the basic grounds of government. It has always been aimed at challenging the authorities and demonstrating their weakness: their inability to defend themselves and their people, their vulnerability. This was evidently the terrorists' major "sin" in the eyes of the authorities, though very often the main declared motive of the reaction to terrorist attacks was genuine or false sympathy for the victims, for the innocent people who suffered from the terrorists. For the terrorists it was more than important to demonstrate not only their rejection of the common rules and laws, not only their challenge to the established norms of respect for human life, but their ability to challenge the authorities, to demonstrate how weak and unreliable these were. It was always a mindless competition between official power and its enemies, the terrorists, in which human lives and human feelings were the major currency.

The difference between terrorist activity today and traditional terrorism lies mainly in the fact that current society has become too vulnerable and too fragile. The traditional medieval structure of society was extremely robust because it was based on the independence and autonomy of its elements. Each village, manor or city was essentially independent, able to satisfy its basic needs, and could survive even if almost all its ties with the outside world were severed. The best example is the Boers in South Africa, who emigrated in the 17th century and until the 19th century had almost no contact with the outside world, but still survived.

Current society is built on another principle. Instead of the autonomy and independence of its units, it is based on the division of labor, market communication and interdependence. This makes the links between different geographical and functional parts of society both extremely important for the life of society as a whole and extremely vulnerable, because they cannot be completely protected from terrorists: air communications and airports, railways, roads, sea and river ports and others. A blow to communications, including transportation, electric lines, gas and oil pipelines, always has a tremendous resonance and is extremely painful for governments and societies. The other side of current terrorism - hostage-taking, threats to ordinary people (poison gas, radiation, epidemics, etc.) - also has a special effect because of changes in values, because of the growing price of human life in the eyes of society, and its importance for economic life. Today's terrorism deals a double blow to society: victorious barbarians make people feel desperate and question the basics of their lives.

There can be no doubt that terrorism is a significant threat to humankind, and it may acquire even more dangerous proportions in the future if it acquires weapons of mass destruction, if it hits major communications or major cities, or if it gains the possibility of using outer space. The task of fighting terrorism becomes as important as was the task of putting an end to Cold War brinkmanship.

WHAT SHOULD AND CAN BE DONE? WHAT SHOULD NOT BE DONE?

Terrorism is a sign of a serious disease in society. It should be treated as such and, for the purposes of curing this disease, a certain strategy must be worked out. As a beginning for this strategy, two different dimensions should be outlined.

WHAT ARE THE SOCIAL AND PSYCHOLOGICAL MECHANISMS THAT PRODUCE TERRORISM?

And what can be done to correct these mechanisms or, perhaps, to destroy them: inequality, poverty, the isolation of separate groups (ethnic, religious), humiliation as a policy, denial of human rights and all the other elements that exist, which are very often detected and described by human rights watch-groups but are ignored by governments and the public. The international system is only at the beginning of the quest for human rights and the rights of minorities. The necessary documents exist, but there are almost no adequate means of forcing governments and other nations to comply with these documents. The notion of "national sovereignty" is very often used as a shield against legitimate and well-founded inspection of the situation in problem areas: Chechnya, the Balkans, Cyprus, Israel and Palestine, Libya, and many other cases where there are problems with justice and the fair treatment of people. However, this is not simply a response to our natural feeling of sympathy towards the oppressed; it is a strict demand of the whole human system, which wants stability and protection from destructive variables.

The purpose of this part of an anti-terrorist strategy would be to reduce, as far as possible, the favorable environment for terrorists: the sympathy of the people, the inflow of volunteers, the supply of information and protection. This can be done through a combination of social-psychological campaigns with strict and clear legislation that differentiates between terrorists and the people who help them (voluntarily or involuntarily), with the general purpose of stripping the terrorists of the image of "fighters for justice" and at the same time controlling those government mechanisms which sometimes multiply terrorists through excessive brutality.

The other part of this strategy is the struggle against terrorism as such. It is a crime. And we should remember that for millennia the authorities have fought crime without definite success. Crime existed 5000 years ago; it exists today. There is something in human nature, as well as in the nature of human society, which makes crime an inalienable element of life. The same may be said of terrorism. There should be special anti-terrorist units in the law enforcement agencies. There should be a global network to observe terrorist activities and collect information on them. There should be a well-grounded, selective strategy that sees the difference between the terrorists themselves and their followers. Punishment for terrorist activities should be equal to that for the gravest abuses of law: homicide, hostage-taking, intimidation, damage to public and private property.

But one point should be strongly made: terrorism should not be turned into a propagandistic campaign and a political tool. In Russia, in Israel and in the USA, the governments have decided actively to use terrorism, or what they label as "terrorism", as a means of seeking public support for their activities. For these purposes they exaggerate one side of terrorism, the military side, and present the whole case as a demonstration of their "will", "decision" and "power". Chechnya, Afghanistan, Iraq and the Palestinian territory were indiscriminately bombed because of "terrorism". Iran, Georgia and Libya are being threatened with bombing because of their "softness" on terrorism. In all these cases it is no more than an attempt to use legitimate concern about terrorism as a means of gaining political success by fostering people's anxieties (very often with the help of government propaganda). A legitimate war against terrorism has been turned into an illegitimate political campaign that has led to wars and domestic conflicts in which governments act as terrorists.
IRAQ AFTER SADDAM: AN IRAQI PERSPECTIVE

PROFESSOR HUSSAIN AL-SHAHRISTANI
Iraqi Refugee Aid Council, London
University of Guildford, Guildford, UK

In 1979, I had to make a choice: either work for Saddam on his nuclear weapon programme, or pay the price. The choice was simple, and the price turned out to be reasonable: 11 years and 3 months in prison.

I returned to Iraq on 7 April 2003, two days before the fall of Saddam's regime, on a humanitarian mission. First I had to go to the Abu Ghraib prison, where I had been imprisoned, to look for fellow political prisoners. None were found. I then went to look for them in mass graves. Tens of thousands of mass graves were uncovered; some held a few hundred remains and others many thousands. Only a few people could be identified.

On our humanitarian mission we visited many towns and villages and talked to common people about their hopes, expectations and dreams. Despite the diversity of Iraqi society, one common theme emerged. An Iraqi woman told us: "These three decades (under Saddam's rule) were very hard. The first decade melted away our fat. The second ate the flesh. The third crushed the bones. But we are determined to keep our heads up." An Iraqi man said: "Saddam tried to destroy the goodness of the Iraqi people. We must prove that he has failed." A common commitment was: "Never again another dictator."

We discussed the current situation and the aspirations of the Iraqi people with community leaders, religious leaders and intellectuals. It soon became very clear that the Iraqi people themselves must be engaged in a political process in which they can take part in deciding their future and choosing their government. A simple road map is to hold elections for a founding assembly in which all Iraqis, irrespective of race, religion, sex or ethnic origin, can choose their representatives. The assembly would then appoint a committee of experts in constitutional law to draft a constitution. This draft would be discussed, amended and adopted by the assembly. The draft constitution should then be put to a referendum in which all eligible Iraqis would vote. Without such a constitutional process, Iraqis cannot be assured that their basic human and political rights will be respected. They have a deep fear that another dictatorial regime might emerge in their country.

Election of the founding assembly and the referendum on the constitution must be held under UN auspices to ensure fair elections. The food ration registry currently being used for food distribution to every Iraqi family has complete data on the people residing in every household and can be used as an electoral registry. Failing to engage the people in a political process will further destabilize the country and provide fertile ground for Saddam's intelligence apparatus to recruit zealous youths to carry out terrorist acts such as we have just witnessed in the tragic attack on the UN compound in Baghdad.
HOW THE INTERNATIONAL SCIENTIFIC COMMUNITY CAN HELP

The international scientific community can play a very important role in helping Iraqis to overcome their tragedy and rebuild their lives and their country. Rebuilding institutions of higher education, universities and research centres is an important and urgent task. World universities, research centres and institutions can help Iraqi scientists through exchange visits to update their knowledge in a range of scientific fields. Iraqi universities have been cut off from the rest of the world for thirteen years. Most of the universities and research centres in Iraq were looted during the war and need to be resupplied with laboratory and research equipment and instruments. They also lack scientific journals. If international scientific journals cannot be provided directly, then Internet access to them would be very helpful.

Two generations of Iraqi youth have been deprived of university education and forcibly recruited into military service. Rehabilitating hundreds of thousands of these young people by training them in different skills is absolutely crucial. Many of them would like to pursue a university education, but it is impossible to place them all physically in classrooms. Distance learning over the Internet is perhaps the most feasible and economical way of providing them with an opportunity to pursue a university career. Collaboration of various universities with Iraqi counterparts could bridge this gap and provide such an opportunity.

Another area in which the scientific community can help Iraq is in providing sound scientific advice on cleaning areas contaminated with chemical, biological and radioactive waste. Saddam's regime produced large quantities of chemical and biological warfare agents and used them over the years in Iraq. Depleted uranium was also used near Basra during the Desert Storm operations. Radioactive waste has also leaked out of nuclear laboratories near Baghdad. A coordinated world effort is required to examine the different contaminated areas in the country and assess the dangers to the population. This is an area in which Iraq definitely needs the help of the international scientific community.

SOUTHERN IRAQI MARSHES

Restoration of the southern Iraqi marshes is another area in which international collaboration is required and the contribution of the scientific community is essential. The marshes were a cradle of human civilization and supported a unique culture and way of life that had continued almost unchanged since Sumerian times (3000 BC). Then these marshes were destroyed by Saddam's regime: they were drained during 1992-95 and turned into a salty wasteland.
[Satellite images of the marshes, yesterday (1972) and today (2002), before and after drainage.]
The southern Iraqi Marshes are of global importance:
- for wildlife and biodiversity (81 species of waterfowl, which are rare or endemic);
- as one of the most important wintering grounds for wildfowl in southwest Asia;
- because they support almost the entire world population of two species, the Basra Reed Warbler and the Iraq Babbler;
- as important natural resources for Iraq and for people beyond Iraq's frontiers.

Restoration of these marshes is a human, cultural and environmental concern, not only for the Iraqi people but for humanity at large.

THE IRAQI ACADEMY OF SCIENCE

The Iraqi academy was established in 1947. Saddam politicised the academy as he did all other public institutions in the country. Under that regime scientists had to flee the country to avoid working on programmes to develop weapons of mass destruction. A group of recognised Iraqi scientists have set up a committee to revive the Iraqi academy. The mission of the new academy is to promote natural and applied sciences for the service of the people and country, and to revive Iraqi creative talents for the good of humanity. Science is to be interpreted in its widest sense, to include all natural sciences as well as engineering, technology and medicine. It is not to be a research establishment but rather a body of distinguished scientists dedicated to employing their talents for the advancement of science in the service of the people and country.

The aims of the academy, as outlined by the committee pursuing this goal, are to:
- promote and strengthen science in Iraq in the service of the people and the rebuilding of the country;
- highlight science as part of the heritage and culture of Iraq;
- develop an ethical framework for the application of science for the benefit of the people and the country;
- attract and retain the best scientists and provide them with facilities to enhance their scientific contribution;
- ensure that Iraqi scientists engage with the best science around the world;
- promote science as a vital component of education at all levels; and
- provide independent scientific advice to public and private bodies and encourage dialogue with the public.

Support from the world academies, scientific institutions and the world scientific community is needed to help the Iraqi academy through its initial stages.
7. AIDS AND INFECTIOUS DISEASES: ETHICS IN MEDICINE
HEALTH AND SECURITY
SEVERE ACUTE RESPIRATORY SYNDROME (SARS): TAKING A NEW THREAT SERIOUSLY

DR. DIEGO BURIOT
WHO CSR Office, Lyon, France
The emergence of SARS is the second major event of the 21st century to change the perception of the infectious disease threat in the eyes of politicians and the general public as well as public health professionals. The deliberate use of anthrax to incite terror, which quickly followed the events of 11 September 2001 in the U.S., was the first. Prior to this event, the emergence of new diseases, and most especially the devastation caused by AIDS, had sharpened concern about the infectious disease threat as a disruptive and destabilizing force, and given it space in national security debates.

Since the mid-1970s, more than 40 new diseases capable of causing infection in humans have emerged. With the notable exception of AIDS, most of these new diseases, and older diseases that have established endemicity in new areas, have features that limit their capacity to pose a major threat to international public health. Some diseases, such as Escherichia coli O157:H7 and variant Creutzfeldt-Jakob disease, depend on food as a vehicle of transmission. Diseases such as West Nile fever and Rift Valley fever that have spread to new geographical areas require a vector as part of the transmission cycle. Still others, such as Neisseria meningitidis W135, and the Ebola, Marburg, and Crimean-Congo haemorrhagic fevers, have strong geographical foci. Although outbreaks of Ebola haemorrhagic fever have been associated with case fatalities of 53% in Uganda and up to 88% in the Democratic Republic of the Congo, person-to-person transmission requires close physical exposure to infected blood and other body fluids. Moreover, patients infected with Ebola virus are, during the period of high infectivity, visibly very ill and too unwell to travel.

On 12 March, WHO alerted the world to the appearance of a severe respiratory illness of undetermined cause that was rapidly spreading among hospital staff in Viet Nam and Hong Kong. Three days later, on 15 March, a second mode of transmission became clear: the new disease was traveling along major airline routes to reach new areas with great speed. Hospitals in Singapore and as far away as Toronto, Canada, had begun to see cases. Alarmed by these events, WHO issued a second, stronger warning later in the day, and gave the new disease its name: Severe Acute Respiratory Syndrome. The global outbreak of SARS moved into the spotlight of intense international concern, where it would stay for almost four months.

In contrast to other epidemics, SARS has features that make it a particularly ominous new threat. Its initial features, a concentration in hospitals and rapid international spread, were cause enough for alarm. More disturbing facts quickly became apparent. Unlike many new diseases that burn out rapidly, SARS was readily transmitted from person to person, with no weakening of its severity from one generation of cases to the next. At the start of the outbreak, the causative agent was unknown, as were its origins. No vaccine is available and no existing treatment has proved effective. Early diagnosis remains based on signs and symptoms commonly seen in a host of other diseases. Faced with these challenges, the health and medical professions were forced to rely on control tools from the earliest days of empirical microbiology: isolation, contact tracing, quarantine, and travel restrictions. The burden on health systems was enormous, further amplified by the number of hospital staff who became infected and the significant proportion of patients requiring intensive care. Certain life-saving procedures, such as artificial ventilation, considerably intensified the risk of nosocomial transmission.

The first cases of SARS are now known to have emerged in mid-November 2002 in Guangdong Province, China. From November to January 2003, small outbreaks, each linked to an index case, occurred in seven cities of Guangdong Province. Initial studies suggested a link between the index cases and occupational exposure to wild animals, the civet cat in particular, consumed as human food, or to the markets where these animals were sold; this finding has not been substantiated in subsequent studies.

SARS was carried out of southern China on 21 February, when a medical doctor who had treated patients in Guangzhou, and was himself suffering from respiratory symptoms, checked into the Metropole Hotel in Hong Kong. Via mechanisms not yet understood, he transmitted the SARS virus to at least 16 other guests and visitors. They carried the disease with them when they returned home to Toronto and Singapore, traveled on to Hanoi, or entered hospitals in Hong Kong. Doctors and nurses in Toronto, Hong Kong, Hanoi, and Singapore, unaware of the need to isolate patients and protect themselves, became the first victims as they struggled to save lives.

Six months later, on 5 July, when WHO announced that Taiwan, where the world's last known probable case of SARS had been isolated 20 days earlier, had broken the chain of person-to-person transmission, a total of 8439 probable cases and 812 deaths had been reported throughout the world. That achievement marked the end of an unprecedented collaborative effort to halt the global emergency caused by a new disease. It also marked the end of a six-month period in which a disease dominated the news and demonstrated its capacity to cause damage in ways that went far outside the field of public health and far beyond the areas most heavily affected.
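For perspective (a back-of-the-envelope calculation of mine, not a figure given in the text), the cumulative totals just cited imply a crude case-fatality ratio of

\[ \mathrm{CFR} \approx \frac{812}{8439} \approx 9.6\%, \]

with the true risk known to have varied considerably by age and by outbreak site.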
SARS caused a level of social disruption and economic damage rarely linked to a health problem. Stock markets moved up or down according to the latest success or setback in the SARS situation. Major transportation and trade hubs were rendered quiet by a disease whose public face came to be symbolized by a mask. SARS caused the closing of hospitals, schools, businesses, and borders. Public anxiety was expressed in the eruption of riots, mass population movements from affected cities, and unwarranted discrimination.

Although much about the disease, including its future evolution, remains to be elucidated, several important lessons have already emerged. These lessons have been brought into sharp focus by the level of concern about SARS, which has also propelled exceptionally rapid solutions to some long-standing problems. Experience in the containment of SARS will be particularly useful in preparing for the next new disease, the next influenza pandemic, and the possible deliberate use of a biological agent to cause harm.

The first and most compelling lesson concerns the need to report, promptly and openly, cases of any disease with the potential for international spread.
In a globalized, electronically interconnected world, attempts to conceal cases of an infectious disease, for fear of social and economic consequences, must be recognized as a short-term stop-gap measure that carries a very high price: loss of credibility in the eyes of the international community, escalating negative domestic economic impact, damage to the health and economies of neighbouring countries, and a very real risk that outbreaks within the country's own territory can spiral out of control.

The second lesson is closely related: global alerts, especially when widely supported by a responsible press and amplified by electronic communications, work well as a preventive strategy. Following the alerts, all areas experiencing imported cases, with the notable exception of Taiwan, were able either to prevent any further transmission or to keep the number of locally transmitted cases very low. Travel recommendations, including screening measures at airports, also appear to have been effective. Data on in-flight transmission of SARS have implicated four flights in the exposure of 27 probable cases, of which 22 occurred on a single flight from Hong Kong to Beijing on 15 March. Following the implementation of recommended screening measures, as advised on 27 March, no cases associated with in-flight exposure have been reported. Travel advisories have also given areas a benchmark for quickly containing SARS and then regaining world confidence that an area is safe from the risk of SARS transmission.

SARS further demonstrates the decisive role of political commitment at the highest level. Viet Nam, which became the first country to break the chain of transmission in late April, showed how a developing country, affected by an especially severe outbreak, can triumph over a disease when reporting is prompt and open, commitment extends to the highest political level, and WHO assistance is quickly requested and fully supported. This success, subsequently repeated at all the known outbreak sites, demonstrates that SARS can be contained despite the absence of robust diagnostic tests, a vaccine, or any specific treatment. When awareness, commitment, and determination are high, even such comparatively primitive control tools as isolation, contact tracing, and quarantine can be sufficiently powerful to break the chain of transmission and cut off opportunities for further spread.

On the positive side, progress in understanding the science of SARS has been unprecedented. The urgency of SARS challenged WHO to set in motion high-level scientific and medical collaboration. Within a week of the first global alert, WHO established three "virtual" SARS-dedicated networks of virologists, clinicians, and epidemiologists to ensure a continuous research effort equal to the magnitude of the SARS emergency. One month after 11 leading laboratories joined the WHO collaborative effort, participating scientists collectively announced conclusive identification of the SARS virus. Complete sequencing of its RNA followed shortly. This success is an encouraging sign of the willingness of the scientific community to set aside academic competition and collaborate to combat a shared threat.

Of great importance, SARS has exposed serious weaknesses in health systems around the world. The disease places an enormous burden on health services in terms of measures for infection control, facilities for isolation, long periods of intensive care for a significant number of patients, and the demands of contact tracing and follow-up or quarantine. Even in areas with highly developed social services, the burden of coping with SARS, including the number of hospitals with patients and the high number of health workers who became infected, often brought health systems to the verge of collapse. The costs of such care have likewise been enormous.
Monitoring the evolution of SARS has been hindered by the weak capacity of many national surveillance systems to provide detailed information daily. Data on age, sex, date of onset of illness, symptoms and signs, laboratory and clinical findings, and details of treatment and outcome are needed to further understanding of any rapidly evolving infectious disease threat. When surveillance in individual countries is strengthened along these lines, it generates the knowledge needed to support sound control measures and thus enhances prospects for global containment.
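To make this concrete, a daily case report carrying the fields just listed might be structured as in the following Python sketch; the field names and example values are illustrative only, not an actual WHO reporting schema.

    # A minimal line-list record with the case-level fields named above.
    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class CaseReport:
        age: int
        sex: str                         # "M" or "F"
        onset: date                      # date of onset of illness
        symptoms: list[str]              # symptoms and signs
        lab_findings: Optional[str]      # laboratory findings, if available
        clinical_findings: Optional[str]
        treatment: Optional[str]
        outcome: Optional[str]           # e.g. "recovered", "died", "in care"

    # Hypothetical example record:
    case = CaseReport(34, "F", date(2003, 3, 18), ["fever", "dry cough"],
                      "PCR positive", "bilateral infiltrates",
                      "supplemental oxygen", "recovered")

Aggregating such records daily would give exactly the kind of line list that, the text notes, many national systems could not yet provide.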
The magnitude of the response demanded by SARS came at the expense of other diseases. Virtually no country had adequate surge capacity to cope with the SARS caseload, especially since health care workers, the frontline troops at risk, were themselves frequent victims of the disease. Some hospitals were designated solely for SARS. Others were closed. Still others were hastily organized almost overnight. Capacity to manage other medical emergencies shrank to dangerous levels. Diagnoses of serious epidemic-prone diseases, such as dengue, were missed. In China, programmes for the control of TB and AIDS and for childhood immunization were halted as SARS usurped all available staff and resources. The shortage of expert staff to coordinate national and global responses to a rapidly evolving public health emergency is also an issue needing urgent attention.

As a social phenomenon, SARS provides a particularly striking example of the degree of public panic, social disruption, and economic loss caused by a severe and poorly understood new disease that threatens any country with an international airport. In the early weeks of the outbreak, SARS competed with the war in Iraq as the top international news story. Schools and borders were closed and thousands of people were placed in quarantine, often enforced by surveillance cameras and military troops. International travel to affected areas plummeted by 50% to 70%. Hotel occupancy dropped by more than 60%. Many businesses, particularly in tourism-related areas, failed, while some large production facilities were forced to suspend operations when cases appeared among workers. Initial economic losses were estimated at U.S.$30 billion for the Far East alone.

As a highly publicized, visible, and greatly feared disease, SARS has stimulated an emergency response on a scale that has very likely changed public and political perceptions of the risk posed by all emerging and re-emerging infectious diseases. Just as Ebola came to symbolize the fear inspired by a new disease, SARS vividly depicts a truism of the infectious disease situation in a highly mobile, interconnected world: an outbreak anywhere places every country at risk. The containment of SARS, or of any other epidemic-prone disease, requires unprecedented solidarity, and makes such an effort a matter of self-interest for every nation.

Much more research is also needed before scientists can make any confident predictions about the future of SARS and the conditions under which human infection with the virus could recur. Like the Ebola virus, whose origins have never been discovered, the SARS virus could hide in some animal or environmental reservoir, only to resurface once conditions again become ripe for spread to its new human host. SARS might also behave like many other respiratory diseases of viral origin, dying out as heat and humidity rise and then returning when the season turns cooler. As another possibility, person-to-person transmission might still be occurring undetected somewhere in the world, but at a level so low that it defies detection until the disease once again flares up in an outbreak.
The SARS experience also holds lessons about the importance of international collaboration and of strong but politically neutral global leadership. Though exceptional in terms of its impact, severity, rapid international spread, and many puzzling features, SARS is only one of around 50 internationally important outbreaks to which WHO and its partners in the Global Outbreak Alert and Response Network respond in any given year. The high level of medical, scientific, political, and public attention focused on SARS is helping the world to understand the severity of the infectious disease threat and the importance of international solidarity in the face of this threat.

Since the end of the Cold War, when tensions were polarized by the superpowers and kept on edge by the nuclear arms race, attention has increasingly focused on threats to national and global security arising from events that undermine state stability or contribute to state failure. Such events include civil unrest, internal conflicts, mass migration of refugees, localized wars between neighbours, and infectious diseases, most notably emerging and epidemic-prone diseases. With the anthrax cases in the U.S. in late 2001, the reality of bioterrorism raised the infectious disease threat to the level of a high-priority security imperative worthy of attention in defence and intelligence circles. In so doing, it focused attention on several features of the infectious disease situation that make outbreaks, whatever their cause, an especially ominous threat. These include silent incubation periods that allow microbes to cross borders undetected and undeterred, the speed of spread made possible by the volume of air travel, and the potential for public panic amplified by instantaneous electronic communications.

SARS has now demonstrated these consequences in a dramatic way. It is the first new disease to spread along the routes of international air travel, placing any country with an international airport at risk. It demonstrates vividly the new reality of the infectious disease threat: an outbreak anywhere in the world places all countries everywhere at risk. SARS has also shown how, in a closely interconnected and interdependent world, a new and poorly understood disease can adversely affect economic growth, trade, tourism, business and industrial performance, and social stability, as well as public health.

SARS is a serious public health concern and, in addition, can be perceived as a threat to national and international security because of the great social disruption it has caused. Fear of the disease has led to the closing of hospitals, schools, and borders, to travel restrictions, riots, mass population movements from affected cities, and unwarranted discrimination. In China, high-ranking government officials have lost their jobs. In all affected areas and neighbouring countries, the high economic costs of SARS are another potentially destabilizing force. This potential will be greatly aggravated should SARS establish roots in a developing country with a poor health infrastructure.

As a highly publicized, visible, and greatly feared disease, SARS has stimulated an emergency response on a scale that has very likely changed public and political perceptions of the risk posed by all emerging and re-emerging infectious diseases. At the same time, SARS has shown how the containment of any emerging infectious disease depends on unprecedented solidarity within the international community. Given the universal nature of the risk, and the high price to be paid, such solidarity is also in the enlightened self-interest of each individual nation.
PROFESSIONAL RESPONSIBILITIES OF BIOMEDICAL SCIENTISTS IN PUBLIC DISCOURSE

Scientists should be aware of the social harm that can result from the premature proclamation of claims that are weakly founded. Scientists must be particularly careful when their science deals with questions of human import. They have entered the political arena.
Jon Beckwith, Making Genes, Making Waves

UDO SCHUKLENK
Division of Bioethics, Faculty of Health Sciences, University of the Witwatersrand, Johannesburg, South Africa

ABSTRACT

This article describes how a small but vocal group of biomedical scientists propagates the view that either HIV is not the cause of AIDS, or that it does not exist at all. When these views were rejected by mainstream science, this group took its views and arguments into the public domain, actively campaigning for them in newspapers and on radio and television. I describe some of the harmful consequences of their activities and ask two distinct ethical questions: what moral obligations such minority-view scientists have with regard to a scientifically untrained lay audience, and what moral obligations mainstream newspapers and government politicians have when confronted with such views. The latter question arises because the 'dissidents' succeeded, for a number of years, in convincing the South African government of the soundness of their views. The consequences of this stance severely affected millions of HIV-infected South Africans.

INTRODUCTION

Ever since the retrovirus HIV was declared to be the cause of AIDS, a small but vocal group of scientists has argued, in professional journals and publicly, either that HIV is not the cause of AIDS or that there is no evidence that HIV exists at all. The self-declared 'HIV-dissidents' blame other putative causes for AIDS, including the health consequences of highly active sex lives involving drug-taking and multiple partners in developed countries, and/or poverty in developing countries. They allege that essential AIDS drugs are one of the real causes of AIDS. A corollary of this position has been the view that AIDS is not infectious at all.

Jürgen Habermas's insights into our 'erkenntnisleitendes Interesse' (knowledge-guiding interest) encourage me to come clean at this stage and declare that I was one of those vocal HIV-dissident academics. Some years ago I changed sides in this dispute and accepted that mainstream views of HIV and AIDS are correct. Since I changed my views on this matter, I also happen to have changed my employer. I moved from the developed world, Australia, with its rather small number of people with AIDS, to developing-world South Africa, reportedly the country with the largest number of AIDS cases worldwide. Prevalence in the country among persons aged 15-49 years is around 15%. Perhaps surprisingly for the uninitiated observer, the South African government's publicly expressed views on HIV and AIDS, and its policies on the provision of essential AIDS drugs, have for a number of years mimicked, in important ways, the views of the HIV-dissidents. There is some evidence that the government's stance on HIV/AIDS moved closer to mainstream views in 2002, but the question of whether this will translate into realistic HIV/AIDS policies remains unanswered at the time of writing. The publicly expressed views of the South African President, Mr Thabo Mbeki, and of his health minister, the medical doctor Manto Tshabalala-Msimang, are very strongly influenced by the views of the HIV-dissidents.

In this article, I describe in some detail the inner workings of the HIV-dissident group, its impact on high-risk groups in developed countries, and its impact on South African government policies. My attention then turns to two interesting ethical questions that arise in this context. The first concerns the responsibilities biomedical professionals have towards a scientifically unqualified public. In the case under consideration, the HIV-dissidents took their initially scientific dispute out of the arena of biomedical journals, with their standard processes of anonymous peer review, into the public domain, including TV programmes, gay magazines, daily newspapers and similar publications. Indeed, their Internet-based offerings persuaded South African President Thabo Mbeki to take their views seriously. The consequences for the provision of essential AIDS drugs to the HIV-infected among the country's impoverished masses were grave. The question I should like to pose is this: if you are a biomedical scientist who fails to convince your peers of your views on a particular matter of legitimate scientific inquiry, is it acceptable to take your minority views 'to the streets' in order to drum up public and media support for your stance? I will also examine whether one can legitimately blame the proponent of such a minority position for decisions made by members of the public who act on that minority view. The second ethical question concerns the responsibilities of leading politicians and government officials towards their sovereign, the citizens of the country. I shall argue that the South African government has, over many years, neglected its moral obligations towards HIV-infected individuals and people with AIDS, while pursuing AIDS policies strongly influenced by dissident views.

THE DISSIDENTS

Much of the dissidents' claim to fame and public recognition is based on original work and analysis by a German-American biochemist at the University of California at Berkeley, Peter Duesberg. In 1987, he published a major review article in the journal Cancer Research, evaluating the available evidence concerning the pathogenicity of retroviruses. There soon followed another major review article in the US Proceedings of the National Academy of Sciences. Duesberg, a highly decorated and elected member of the Academy, effectively concluded that there is no evidence that HIV causes AIDS. A few years later he complemented his stance on HIV with his own hypothesis as to the causes of AIDS. His views were quickly taken on board by some gay activists in the USA, including the late Michael Callen, a founding member of the People with AIDS Coalition in New York City, and John Lauritsen, a writer with the now-defunct weekly New York Native, as well as many others. I started writing articles in a Berlin-based monthly AIDS magazine called vor-sicht (perhaps best translated as be careful or have foresight).
All these writings occurred quite early on in the epidemic. No life-extending AIDS medication existed at the time. People died in ever-growing numbers of one or another of the opportunistic infections now captured in the definition of AIDS. It certainly seemed prudent to me at the time to give support to critics of a hypothesis whose pursuit had done little to bring those in need closer to life-extending or life-saving AIDS drugs.

Very early on there were already obvious flaws in Duesberg's publicly expressed opinion on HIV and AIDS. He stated categorically that HIV is not the cause of AIDS, while all he could reasonably have claimed on the basis of his analysis was that it had not been proven that HIV causes AIDS. Then there were epistemological problems, such as the never-resolved dispute between Duesberg and mainstream scientists over what constitutes proof of causation in the biomedical sciences. Still, Duesberg is no quack, and such was his personality that he actively sought the limelight, courting journalists as much as they courted him. A sharp and witty character, he quickly succeeded in gathering support. A group was established under the banner of 'Rethinking AIDS'. This group consists of signatories to a petition demanding a thorough reappraisal of the HIV-AIDS hypothesis.

It is worth having a closer look at the members of this group. Among its more influential members are a number of former Duesberg colleagues at the University of California at Berkeley, including Harvey Bialy, formerly Editor at Large of Nature Biotechnology, and Kary Mullis, an iconoclastic Nobel laureate who was awarded the prize for his discovery of the polymerase chain reaction method. Other characters include Berkeley law professor Phil Johnson, best known for his creationist writings, assorted gay activists, and biomedical scientists, many of whom have their own pet theory as to what causes AIDS. The group regularly points to a substantial number of scientists supportive of its agenda of re-evaluating the HIV/AIDS hypothesis. Some of the members still listed have been dead for a number of years. While it is correct that these people supported the objective of a scientific re-evaluation of the HIV/AIDS link when they were alive, it is clearly difficult to ascertain what they would have made of the scientific developments, and the accumulation of evidence for HIV as the crucial causative agent in AIDS, that have occurred in the years since their deaths.

My own conversion to the mainstream view took place roughly when dissident predictions of AIDS deaths resulting from what they believed were highly poisonous drug cocktails (i.e. triple therapy) did not come true, and the opposite in fact occurred. I saw people with AIDS, following years of decline, turn the corner after they began using these drugs. Dissidents would dispute this by various means, including claims of scientific fraud. Having seen with my own eyes the decline of close friends stop and reverse, I was no longer an HIV-dissident, for all practical intents and purposes. This is not to deny that important open questions remain, including some very fundamental ones: by what exact direct or indirect mechanisms does HIV cause AIDS? And why is the epidemiology of AIDS so dramatically different on the African continent from that in the USA, Europe, and most Southeast Asian countries?

A second group of dissidents is led by Eleni Papadopulos-Eleopulos, a medical physicist based at Royal Perth Hospital in Australia.
This group claims that HIV has never been properly isolated and seems to imply that it has not been proven that HIV exists as a distinct entity at all.9 Duesberg, for his part, concurs that HIV does exist, but he believes it is a harmless passenger virus as opposed to the causative agent in AIDS.
My contributions as a bioethicist were mostly of a critical nature. As it happened, some of the group's predictions turned out to be true. For instance, during the years when the developed world was gripped by dire predictions about gigantic HIV/AIDS waves, tip-of-the-iceberg metaphors and so forth, public health promotion campaigns designed to threaten those countries' citizens with death if they refused to take proper precautions were launched. This was a classic example of the health-belief model of health promotion campaigns in action. The dissidents claimed that there was nothing remotely resembling such an epidemic anywhere in the developed world. They predicted, furthermore, that AIDS would remain restricted to the same high-risk groups it was affecting at the time, namely men who have sex with men and IV drug users. I subscribed to those predictions and argued in several publications that it is ethically problematic to scare a whole population indiscriminately into changing its sexual behaviour based on questionable empirical evidence and predictions.10 Most bioethicists' writings on AIDS in the late 1980s and early 1990s were based on those types of predictions.11 I criticised the uncritical adoption of those predictions. While history has proven my analysis to be correct, it is worth noting that without my own ultimately flawed views on the HIV/AIDS connection, it would not have occurred to me to take the stance I took at the time. Alert peer reviewers of professional bioethics journals prodded me into establishing my conclusions without relying on the truth of the dissident position. Thanks to them, I do not find myself in a situation where I would have to withdraw anything I actually published in professional journals on AIDS - another outcome of the so often maligned process of peer review. Arguably, many of the publications that celebrated the 'ethical issues' of a non-existent heterosexual AIDS epidemic in developed countries did a disservice to the profession and to the public.

HIV-DISSIDENTS' HIGH PUBLIC PROFILE

Dissident scientists and their lay support base did not limit their activities to legitimately questioning the mainstream consensus in scientific journals. The most obvious way of testing scientific hypotheses and theories is to present counter-evidence and counter-arguments. However, the dissidents decided also to take their views to a broader audience. Duesberg's views featured prominently in feature-length TV programmes in most developed English-speaking countries, including the USA, UK, Canada and Australia. The Sunday Times in London ran a campaign lasting several months designed to make a mockery of the mainstream consensus on HIV and AIDS.12 I am not entirely innocent with regard to content provided in Berlin's magazine vor-sicht, at the time a widely read monthly non-peer-reviewed AIDS magazine targeting primarily gay men in major metropolitan areas of Germany. Magazines such as Continuum in the UK targeted HIV infected individuals specifically, with issue after issue questioning the link between HIV and AIDS. Its pages were littered with advertisements for alternative concoctions such as homeopathy, complementary medicine, and other more or less absurd approaches to AIDS disease management. Scientists among the dissidents used this non-peer-reviewed magazine to thrash out arguments about content and personality in public.
When eventually two of the magazine's founders died of AIDS-related illnesses, the dissident denial machine went into overdrive, speculating about the men's former drug-taking habits, their number of sex partners, and pretty much any 'cause' other than HIV.
Part of the dissidents' appeal was their claim that the 'establishment' was censoring their views. Evidently, nothing could have been further from the truth. The pros and cons of dissident arguments were evaluated with their active participation in mainstream journals, including Science, Cancer Research, Proceedings of the National Academy of Sciences, AIDS Forschung, Genetica, and many others. In other words, while it may be true that exchanges between mainstream scientists and the dissidents have been acrimonious and at times emotional, there is little evidence that this impacted negatively on mainstream scientists acting as professional peer reviewers for leading medical journals, or else the dissidents would never have seen their work in print. The group's initial reason for existence, a public appeal for a scientific re-evaluation of the HIV-AIDS hypothesis, was published in Science.13

Scientists among the HIV dissidents used their academic credentials and academic affiliations in order to generate interest, sympathy, and allegiances in lay audiences. They were not professionally troubled about recruiting to their cause lay people who were clearly unable to evaluate the scientific validity or otherwise of their views. I shall return to this theme in a moment.

CONSEQUENCES OF HIV-DISSIDENTS' PUBLIC CAMPAIGNING IN THE DEVELOPED WORLD

Inevitably, millions of people subjected to dissident views in TV programmes and their favourite broadsheets would respond in a variety of different ways. To my knowledge, no scientific surveys were undertaken to analyse what impact such views had on people belonging to high-risk groups in developed countries. The anecdotal evidence I gathered from discussions on public and private emailing lists, subscribed to by people with HIV/AIDS, is that indeed (some) HIV positive people stopped practising safer sex, refused to inform their sexual partners about their HIV status, and also changed their treatment regimes. I vividly recall arguing on such a private mailing list with an HIV infected gay man about the issue of safer sex. He argued that there is little reason to use condoms because AIDS clearly is not infectious (as HIV dissident Professor Duesberg had shown), and that therefore he would not allow the 'establishment' to take the fun out of his sex life. Members of the dissident group actively encouraged his stance at the time.

On an infected individual's website, several years ago, I was thanked for providing arguments critical of zidovudine mono-therapy (the early first-line therapeutic response to AIDS targeting the causative agent directly). The individual strongly believed that he owed his life to my publications. Based on arguments I provided, he decided not to follow his doctor's advice to begin taking zidovudine. It turned out that my scepticism about a 'hit HIV early, hit it hard' zidovudine mono-therapy treatment strategy was confirmed years later by more sophisticated clinical research. Again, coincidentally, and luckily for me, my dissident convictions led to the publication of arguments and conclusions that history confirmed as correct. My own anecdotal evidence is supported by similar reports gathered in an article by the San Francisco based AIDS Foundation.14 What is important, however, is that such publications, as well as the high-profile activities of dissident scientists such as Duesberg, influenced a lay public's decisions with regard to treatment regimes and HIV protective behaviours.
I think that while one can legitimately question whether scientists should engage in such public campaigning and posturing at a time when their argument is considered lost
by their peers, there is no good reason to deny such professionals the opportunity to have their say publicly if they so wish, provided at least one condition is met: those infected individuals who make choices such as those described must be aware of the fact that the views expressed by dissidents are those of a very small minority of scientists. They must fully understand that the odds of these views being correct are minuscule at best. This does not contradict my earlier claim that lay people are unable to understand or evaluate the validity or otherwise of dissident views. My point is that on a more basic level they must understand how exceedingly small the number of professionals is who hold dissident views. In that sense, their decision to adopt the dissident stance is very much an autonomous choice.15 It is a choice that is authentically their own. It is a considered choice that results in such individuals rejecting the advice of mainstream health promotion programmes. It is a considered choice to reject mainstream physicians' advice. Arguably, therefore, the responsibility for their decisions, actions and the consequences of those decisions and actions is largely their own.16

THE SOUTH AFRICAN CONTEXT

A number of important empirical facts change the moral evaluation of dissident activities when looked at from a developing-country, and in particular a Southern African, context. Unlike in the developed world, the vast majority of people in developing countries are neither able to evaluate the validity of dissident claims, nor do they understand how small the number of professionals who support such views actually is. I mentioned some salient facts about the scope of the epidemic in South Africa in my introductory remarks. Let me add a few pieces of additional information: in the year 2000, between 4.7 and 6 million of 40 million South Africans were reported to be HIV infected.17 A 1999 study of HIV infections among women seeking assistance in the country's antenatal clinics registered prevalence rates between 5.2% and 32.2%, depending on the province in question.

Some African governments, such as that of Uganda, have responded with mass education campaigns and attempts to increase access to essential AIDS drugs for as many of their infected citizens as possible. Not so the South African government. The main reason is that the country's president, Mbeki, has lent his support to HIV dissident views. He has campaigned both publicly and behind the scenes to justify his government's policy of not providing essential AIDS drugs to the majority of South Africa's infected citizens. While Mbeki has prevaricated on this issue, the following quotes from speeches he gave suggest strongly that HIV dissident views are at the core of his government's response to the AIDS crisis in the country:

Thus it happens that others who consider themselves to be our leaders take to the streets carrying their placards, to demand that because we [black people] are germ carriers, and human beings of a lower order that cannot subject its [sic] passion to reason, we must perforce adopt strange opinions [such as mainstream views on HIV and AIDS], to save the depraved and diseased people from perishing from self-inflicted disease. ... Convinced that we are but natural-born, promiscuous carriers of germs, unique in the world, they proclaim that our continent is doomed to an inevitable mortal end because of our unconquerable devotion to the sin of lust.18
The director of the country's HIV/AIDS Law Project, Mark Heywood, commented that these views "appear to describe those who believe AIDS is a virologically caused, mostly sexually transmitted disease that can be medically contained, as stigmatising and demeaning black people."19 The ANC's national leadership regurgitated these views in a lengthy document, taking a strongly HIV-dissident stance:

For their part, the Africans believe this story, as told by their friends. They too shout the message that - yes, indeed, we are as you say we are! Yes, we are sex-crazy! Yes, we are diseased! Yes, we spread the deadly HI Virus through our uncontrolled heterosexual sex! In this regard, yes, we are different from the US and Western Europe! Yes, we, the men, abuse women and the girl-child with gay abandon! Yes, among us rape is endemic because of our culture! Yes, we do believe that sleeping with young virgins will cure us of Aids! Yes, as a result of all this, we are threatened with destruction by the HIV/Aids epidemic! Yes, what we need, and cannot afford because we are poor, are condoms and anti-retroviral drugs! Help! ... Scare mongering ... is condemning millions of our own people to ill-health, disability and death because of a refusal to recognise the critical importance of the diseases of poverty and other illnesses that afflict our people, including STDs. This is done to sustain a massive political-commercial campaign to promote anti-retroviral drugs. ... Strange as it may seem, given what our friends tell us about the Virus every day, nobody has seen it, including our friends. Nobody knows what it looks like.
South African academic Mandisa Mbali argues, quite convincingly, that while modern AIDS activism is concerned about access to treatment, Mbeki as a post-colonial African leader is fighting a different political battle, trying to assert an 'African Renaissance' in response to racism, apartheid and colonialism.20 He also went on the record in Time magazine stating, "You cannot attribute immune deficiency exclusively to a virus."21 In parliamentary question time, he asked, "How can a virus cause a syndrome?"22 These views are mirrored in statements made by his health minister, who slammed mainstream scientists and pharmaceutical companies for advocating anti-retroviral drugs "because they have a vested interest in doing so."23 There have been some, albeit inconclusive, indications that the author of the ANC document was Mbeki himself, because the embedded electronic signature of the document traces it back to 'Author: Thabo Mbeki' and 'Company: Office of the President', as an investigative report in a local newspaper revealed.24

What these quotes suggest is that Western dissident views have seemingly persuaded the powerful president of the developing country with the world's largest reported number of AIDS cases to support their take on HIV and AIDS. The former president of the country's Medical Research Council called Mbeki's activities and views on this matter a "national scandal."25 Mbeki established a presidential advisory panel stacked with dissidents and accompanied by some mainstream scientists, at great financial cost to the country. The panel's deliberations led nowhere. Its establishment betrayed a gross misunderstanding of the process of scientific inquiry. Mbeki hoped to facilitate some sort of consensus or compromise between dissidents and mainstream scientists, as if the question of scientific truth were a matter of democratic consensus finding.

What is of greater interest for our purposes, however, is how the dissidents used their elevated status (that is, of someone appointed to the presidential advisory panel) to influence public discussions in South Africa. Mbeki, who during 2002 apparently had another conversion and seemed more prepared to accept mainstream views on AIDS, reportedly instructed his health ministry to write to the dissidents and request that they refrain from signing their public statements as members of his advisory panel.26 This is one
of many instances where dissidents used their affiliations (in this case, their membership in Mbeki's expert panel) to boost their credibility.

A few characteristic quotes that dominated the print media for months are as follows. Professor Sam Mhlongo, head of family medicine at the Medical University of South Africa, is a member of the dissident group. Interestingly, at the time of writing he had not published a single peer-reviewed original paper based on empirical research on AIDS.27 His main line of attack, unsurprisingly, is that Nevirapine, a drug proven to work, and recommended by both the WHO and UNAIDS because it reduces the likelihood of mother-to-child transmission of HIV from an infected pregnant woman, "is a notoriously very toxic drug." He insists that HIV has "never in the history of the AIDS era been isolated."28 Dissident presidential panellist David Rasnick, from the USA, declared in interviews with local newspapers that "Africans are suffering and dying from the same things they have been suffering and dying from for generations before AIDS. They are not suffering and dying from something new called AIDS." He is also reportedly convinced that "AIDS was neither contagious, sexually transmitted or caused by HIV and that anti-AIDS drugs accelerated death or made people sick with AIDS." Exploiting the sensitivities of South Africans with regard to the issue of apartheid, he claims that "South Africans were now being ruled by the 'tyranny' of orthodox science in the same way as the country's white minority leaders under apartheid."29 Rasnick stressed his membership in Mbeki's expert panel in his letters to the editors of local newspapers.

ETHICAL ISSUES

Central to my discussion of ethical concerns is a statement made by Art Ammann, head of Global Strategies for HIV Prevention. He said, "after reviewing the volumes of communication having to do with Duesberg disciples, personally listening in court for two days to these individuals, and surveying the damage they are invoking, I am trying to reach some conclusions and think about a rational approach to limiting their future damage and influence."30

There are various ethical issues that should be raised with regard to the dissidents' activities in South Africa, as well as the obvious failure of the country's president to engage in a process of due diligence before he decided on the country's response to what is probably the most serious health disaster in South Africa's history. It seems useful to begin by depicting how democratic developed societies responded to dissident views: dissidents were allowed to have their say in professional journals, and, as we have seen, in the mass media. The latter is a questionable strategy for a professional pursuing academic grievances, but clearly the democratic liberal values at the heart of these societies permitted the dissidents' publicity-seeking activities to go ahead. What were the ethical reasons for this? Undoubtedly, they had much to do with the attitudes so eloquently stated in John Stuart Mill's treatise On Liberty: "if all mankind minus one were of one opinion, mankind would not be more justified in silencing that one person, than he, if he had the power, would be justified in silencing mankind."31 What were Mill's arguments in support of this view?
We have now recognised the necessity to the mental well-being of mankind of freedom of opinion, and freedom of the expression of opinion, on four distinct grounds (...): first, if any opinion is compelled to silence, that opinion may, for all we can certainly know, be true. To deny this is to assume our own infallibility. Secondly, though the silenced opinion may be an error, it may, and very commonly does, contain a portion of truth (...). Thirdly, even if the received opinion be not only true, but the whole truth; unless it is suffered to be, and actually is, vigorously and earnestly contested, it will, by most of those who receive it, be held in the manner of a prejudice, with little comprehension or feeling of its rational grounds. (...) Fourthly, the meaning of the doctrine itself will be in danger of being lost, or enfeebled, and deprived of its vital effect on the character and conduct: the dogma becoming a mere formal profession, inefficacious for good, but cumbering the ground, and preventing the growth of any real and heartfelt conviction, from reason or personal experience.32
While these are powerful reasons to legally permit scientists to espouse minority views, there are good ethical reasons why they ought to voluntarily refrain from campaigning publicly for their views. These reasons have to do with their moral obligations, as professionals, to the public. Before I get to those reasons, however, let us have a closer look at Mill's arguments. He is effectively stating that we should allow freedom of expression because otherwise we would run the risk of suppressing a view that could have been the correct one, or that could have contained a kernel of truth. In addition, those holding the mainstream view could quite possibly hold it without critical reflection, and without knowing why they are holding it. In fact, the mainstream paradigm may merely become a formality repeated by us without good reason. While all these arguments, to my mind, tip the scales in favour of freedom of academic expression in professional journals, according to the standard rules of academic debate, they are too weak to serve as a justification for the type of campaigning described in this article. After all, whether or not the dissident view is erroneous (Mill's first point) will never be settled in the letters' pages of daily newspapers in South Africa, on talk-back radio shows in the US, or on TV programmes in Canada or Australia. Even looked at from a dissident perspective, it seems obvious that the objective of 'winning the day' cannot be achieved by winning over the lay public. As to Mill's last point, I am happy to concede that I trust mainstream science's findings sufficiently to accept them as prima facie correct. It is unreasonable to expect anything else of me. The odds are heavily stacked against minority views of the sort that concentration camps did not exist in Nazi Germany (as proposed frequently by holocaust deniers), that high concentrations of vitamin C cure cancer, or indeed that the staged healing of a quadriplegic by a reverend of a charismatic church ever took place.

PROFESSIONALISM AND THE PUBLIC GOOD

Most professions are characterised, among other things, by the fact that their members pledge, usually publicly during graduation ceremonies, to abide by codes of ethics. Invariably these codes of ethics contain pledges promising to serve the interests of clients or patients and the public good.33,34 Indeed, most modern codes of professional ethics still reflect the religious derivation of the term professionalism, meaning to profess publicly to serve the public good. I think that these professional ethical obligations toward the public may well serve as a factor limiting professionals' claims to freedom of (public) speech. In the case under consideration, the argument in support of this view could be sketched like this: scientists among the HIV dissidents know that among their peers the
number of people sharing their views is minuscule at best. While this is not proof of the wrongness of their views, it provides the dissenters with strong professional ethical reasons to refrain from running high-profile public campaigns designed to sway a lay public after their failure to convince the professional public. While there are examples in history where minority views in scientific matters turned out to be correct, dissidents have good historical reasons to acknowledge that this is not usually the case. If dissident views had indeed been systematically censored by a mainstream HIV science conspiracy, arguably the dissidents would have been entitled to alert the public to this fact and aim to have the professional process of anonymous, unbiased peer review re-established. However, it is important to note that this has not been the case. Dissidents had plenty of opportunities to argue their case in professional journals.

After a process of due diligence, health authorities in most countries have begun health promotion campaigns designed to inform their citizens about the risks of unsafe sex, and the benefits of HIV testing, given the availability of life-extending (if not life-saving) treatments. The public interest is not served by dissidents' attempts to convince the lay public that their views are correct and that AIDS is not contagious, that HIV is not the cause of AIDS, or indeed that HIV does not exist at all. For developed countries there is some anecdotal evidence that members of high-risk groups have taken dissidents' views sufficiently seriously to change their AIDS-related risk assessments, including stopping the use of condoms to prevent an infection. While the responsibility for such choices should be placed mostly on the shoulders of those individuals who made them, clearly the HIV dissidents are not entirely innocent either. Without their publicity-seeking behaviours, it is unlikely that many lay people would have taken notice.

Turning to South Africa, there is more than anecdotal evidence to suggest that the dissident activities had negative health consequences. The following excerpt was published in a local broadsheet:

Nkululeko Nxesi, the national director of the National Association of People Living with HIV and AIDS, said his organisation was experiencing rejection of the message that HIV caused AIDS "on a daily basis". "At the end of the day, I think people will want to hear the news that HIV does not cause AIDS," he said. "It is affecting what we're doing. It's not even two steps back, it's like 10 steps back. And it was such a struggle to encourage people to engage in safer sex in the first place." AIDS Consortium director Morna Cornell said that, while she did not have first-hand experience of rejection of the AIDS message, she had heard anecdotal reports of confusion. Recently a friend had told her how she had been involved in an educational discussion on HIV in KwaZulu-Natal, where people had asked why they needed to use condoms if HIV did not cause AIDS.35
Winstone Zulu, an AIDS patient, recounts the undesirable consequences of the president's support for the dissident stance on AIDS. Zulu initially took antiretrovirals to maintain his health. After stopping the use of antiretroviral medications, his health deteriorated and he eventually went back on life-saving essential AIDS drugs. Zulu explains how Mbeki's tacit support for dissident views convinced him that they were right. "With Mbeki coming in, I believed it was the link I needed to believe this," he explains. "When I saw that Mbeki was doubting, I thought this was [...]."36

A 2002 representative HIV prevalence study, undertaken by the country's Human Sciences Research Council, reveals that condom use and behavioural changes toward safer sex are lower amongst people who either believe HIV is not the cause of AIDS or who admit to being uncertain about the virus's role in the causation of AIDS, when
compared to people who believe HIV is the cause of AIDS. The authors of this study conclude, "correct, unequivocal knowledge that HIV causes AIDS ... is strongly associated with self-reported behaviour change over the past few years as a response to the risk of HIV infection, condom use in the last sexual experience and discussion of HIV prevention with a partner."37 About one in five respondents reported doubts about the question of whether HIV causes AIDS.

My argument thus far suggests that professionals in the biomedical sciences who hold minority views have particular professional ethical obligations to refrain from campaigning publicly among lay audiences for support of their professional views. These reasons have to do with the idea that professionals ought to serve the public good. The public good is not served by scientists whose views have been rejected by their peers, and who are trying to 'win' the scientifically lost case in the lay public's domain. It also seems professionally irresponsible to impress the 'truth' of one's views on a lay audience while knowing full well that this audience is not equipped to evaluate the scientific merits or otherwise of one's arguments. At the same time, of course, nothing should prevent professionals holding minority views in their field of expertise from making their case in professional journals, provided standard procedures of anonymous peer review have been followed. This is also in the public's best interest, because it constitutes a sound procedure for testing and (re-)evaluating scientific hypotheses and theories.

What counter-arguments could be advanced against the position developed so far? Some might be worried that my proposal would lead us down a slippery slope toward general, legally imposed restrictions of free speech placed upon professionals. There is little that can be said with regard to this argument, other than that there is no reason to assume that a society would move easily from voluntary self-restriction to outlawing such freedom of speech. However, even consequentialists, possibly persuaded by my argument that the dissidents caused unacceptable harm to the public good, might be worried about whether it is possible to put my proposal into action on an operational level. After all, while most scientists tend not to seek the media limelight, many professional journals alert the mass media to the latest findings by distributing their tables of contents via email to subscribing journalists. Inevitably, health journalists will contact the authors for media comments. How should scientists respond to such calls?

Perhaps one way of answering this question is to acknowledge that minority views can be held on a range of issues. Some such views may affect only small numbers of people, or, if acted upon, may result in harmful but reversible consequences. Under such circumstances, while scientists still ought to restrain themselves in their interactions with the media, their information-sharing would lead only to limited negative consequences. However, in cases where potentially large numbers of people might make problematic choices influenced by 'dissident' scientists, one would hope to see such professionals acknowledge and stress in their interactions with journalists that the vast majority of their colleagues do not share their views and that there is a fair chance that they might have got this one wrong.
Certainly, dissident scientists should not actively go about spreading their views in the mass media. This undoubtedly would have made a difference in the case I have described. The public good would have been better served if the protagonists had acted along the lines suggested. The troublesome issue of how scientists should respond to the mass media's exploitation of their work is, of course, not
limited to the case of minority-view scientists. Jon Beckwith demonstrates this nicely in his discussion of geneticists' work on a putative 'criminal chromosome' and on the general public's misconceptions generated by the mass media's reports on published research.38

POLITICIANS AND THE PUBLIC GOOD

The consequences of dissident scientists' public campaigns were particularly grave in South Africa. The country's president, after stumbling across dissident views on one of his travels on the Internet, seems to have made their views more or less his own. The consequences were disastrous by any interpretation of what the public good would have required of the government of the country with the largest reported number of HIV infections and AIDS cases. The South African government did everything possible to delay the provision of essential AIDS drugs to the impoverished masses relying on public sector healthcare delivery. Initially, the government insisted that essential AIDS drugs were too expensive. When prices came down, the true dissident colours of the government became more obvious. It insisted, among other things, that AIDS drugs were poisoning South African Blacks,39 that the drugs were not proven to be effective,40 and that it is doubtful that viruses cause disease syndromes.

These views were mirrored by at least one provincial ANC health minister, who reportedly fired a hospital director in Mpumalanga province for providing a hospital room to a non-governmental organisation supporting rape survivors with counselling, clean clothes, and post-exposure medication. The politician justified her decision with this statement: "The health and lives of our poor black people were placed under serious threat by this organisation which claimed to have their interests at heart."41

In court cases brought by treatment access activist groups, the government went so far as to misrepresent a scientist's research findings in order to prevent a court finding requiring it to provide medication designed to drastically reduce the mother-to-child transmission of HIV. The author of the findings went out of his way to inform the court in an affidavit that the government's summary of his work, and the conclusions drawn from it, did not reflect his views.42

The argument supporting my contention that the South African government failed in its moral obligations towards the citizens of the country is easy to make, and similar to the case brought against the HIV dissidents. While it is true that politicians are not professionals and therefore have no professional ethical obligations toward the public, they have other ethical obligations stemming from their role as elected representatives of the people. Democratically elected governments are morally obliged to serve the needs and interests of their sovereign, that is, the citizens of the country.43 The South African government clearly failed on this count.

THE ROLE OF THE MEDIA

The South African news media behaved responsibly in this affair, by and large. The activities of the presidential advisory panel nonetheless dominated the newspapers for some time. However, with the exception of the letters' pages of the Johannesburg daily The Star, the dissidents were provided with comparatively little space to propagate their views.
Indeed, Mbeki and his government were subjected to a continuous barrage of scathing criticism. The Star provided a great deal of space for a public spat between a local academic and a US-based dissident. The dissident was challenged to have himself injected with HIV if he was so convinced that HIV does not cause the immunodeficiency. The publicity-seeking HIV dissidents succeeded yet again in getting into the world media limelight, because the challenge was reported widely, even in newspapers in places as far away as Singapore and Australia. Arguably, if the country's president had not lent his support to the dissident cause by inviting many dissidents onto his advisory panel on AIDS, the US-based scientists involved in the challenge would probably not have received any publicity. The letters editor of that same newspaper influenced the public debate by publishing, over several months, letters written by dissidents. This misled the reading public into believing that a major debate was taking place, when really the local dissidents had merely mobilised their few supporters in various countries. The question is whether daily newspapers with large, scientifically illiterate audiences have a responsibility to their readership not to publish internationally discredited scientific views. It seems clear that the letters editor of the paper in question failed basic rules of journalism ethics in this regard.

THE ROLE OF THE MEDICAL PROFESSION

Various responses to the challenges posed by the HIV dissidents and their high-placed supporters in South Africa's national and provincial governments emanated from the organised medical profession. The South African Medical Association intervened at the height of presidential denialism with a consensus statement, which reads in part:44

Whilst SAMA welcomes any debate on health it is obliged to point out that the view that HIV may not cause AIDS has been thoroughly discredited by several recent scientific studies. This view is dangerous and its propagation may lead to cases of AIDS that may have otherwise been prevented.
The country's two leading health sciences faculties, of the universities of the Witwatersrand and Cape Town, issued public statements criticising the government for its flawed response to the AIDS crisis and its prevarications on the issue of what causes AIDS. The chairman of the South African Medical Association harshly criticised the government in a speech and press release, and went so far as to accuse it of committing genocide by acts of omission (failing to provide access to essential AIDS drugs to poor South Africans). Bioethicists in the country acted responsibly. Some of the leading bioethicists actively criticised the government's stance, even organising a high-profile petition and press release demanding the reinstatement of the hospital director who was fired in Mpumalanga province for offering facilities to the rape survivor group. Some doctors in public sector hospitals worked hard to find ways to subvert government policies and regulations, and to provide as many needy patients as possible with access to essential AIDS drugs.

The Medical Research Council published a report debunking Mr. Mbeki's public questioning of AIDS statistics, declaring that AIDS is the number one cause of death in the country. Its president subsequently resigned, only to describe to a journalist from The Economist how he and other scientists were subjected to government threats and bullying. The magazine reported: "The scientist accuses the minister of threatening that he will be fired and 'forgotten by history' for opposing the government policy and statements on AIDS." The scientist comments that the minister "is trying to overrule
science with politics. It is very frightening."45 Jonathan Glover, in his book Humanity: A Moral History of the 20th Century, provides an excellent analysis of the psychological underpinnings of such government activities.46

It is fair to say that the South African organised medical profession's response to the dissident saga, unlike its response to challenges such as the Biko death some 25 years ago,47 has been vigorous. In that sense, many public sector healthcare professionals resolved a typical dual-loyalties conflict not in favour of the government's demands, but in favour of their patients. They took seriously their professional codes of ethics and their public pledges to put their patients' interests first.

CONCLUSION

I have argued in this article that scientists holding minority views on particular issues affecting the general public should voluntarily refrain from campaigning for their views among the lay public. The case described in some detail in this article demonstrates convincingly the great harm done by HIV dissident scientists to the well-being of people with HIV and AIDS all over South Africa. News media with large, scientifically illiterate audiences have particular professional responsibilities toward their readers not to propagate such minority views. Governments should be particularly diligent in their evaluation of such minority scientists' views and should, as a matter of principle, be very cautious in adopting such minority views when developing public policy.48

A possible way forward would seem to be the development of ethical guidelines, similar to the World Medical Association's Declaration of Helsinki, by organisations such as the American Association for the Advancement of Science, the British Association for the Advancement of Science and others. Quite possibly this could be a task for international umbrella organisations such as CIOMS to take on. These guidelines could develop frameworks for scientists' interactions with the wider (lay) public. If scientists deviated from such guidelines, journalists as well as politicians would have good reason to be concerned about their activities, and good prima facie reasons to reject the advice rendered by such individuals. Science organisations could also consider publicly censuring such publicity-seeking scientists.
REFERENCES
1. Duesberg PH. 1992. AIDS: acquired by drug consumption and other non-contagious risk factors. Pharmacology and Therapeutics 55:201-277.
2. Shisana O, Simbayi L. 2002. South African National HIV Prevalence, Behavioural Risks and Mass Media - Household Survey 2002. Human Sciences Research Council: Cape Town.
3. Accessed online at http://www.duesberg.com and http://www.virusmyth.com on November 27, 2002.
4. Duesberg PH. 1987. Retroviruses as carcinogens and pathogens: expectations and reality. Cancer Research 47:1199-1220.
5. Duesberg PH. 1989. HIV and AIDS: Correlation but not Causation. Proceedings of the National Academy of Sciences 86:755-764.
6. Duesberg PH. 1992. Op. cit.
7. Duesberg PH, Rasnick D. 1998. The AIDS Dilemma: drug diseases blamed on a passenger virus. Genetica 104:85-132.
8. Duesberg PH. 1988. HIV is not the Cause of AIDS. Science 241:514, 517. Indeed, one of the dissidents, physiologist Robert Root-Bernstein, published a book under this title: Root-Bernstein R. 1993. Rethinking AIDS: the tragic cost of premature consensus. Free Press: New York.
9. Papadopulos-Eleopulos E, Turner VF, Papadimitriou JM. 1993. Is a positive Western Blot Proof of HIV Infection? Bio/Technology 11:696-707.
10. Schuklenk U, Mertz D, Richters J. 1995. The bioethics tabloids: How professional ethicists have fallen for the myth of tertiary transmitted heterosexual AIDS. Health Care Analysis 3:27-36.
11. E.g. Overall C, Zion WP. 1991. Perspectives on AIDS: Ethical and Social Issues. Oxford UP: Ontario; Hayry H, Hayry M. 1987. AIDS now. Bioethics 1:339-356.
12. Page B. 2003. The Murdoch Archipelago. Simon & Schuster: London.
13. Baumann E, Bethell T, Bialy H, Duesberg PH, Farber C, Geshekter CL, Johnson PE, Maver RW, Schoch R, Stewart GT. 1995. AIDS Proposal: Group for the Scientific Reappraisal of the HIV/AIDS hypothesis. Science 267:1080.
14. SF AIDS Foundation. HIV causes AIDS. Accessed online at http://www.sfaf.org/aboutsfa~outreach/il on November 15, 2002 at 12:32pm.
15. Feinberg J. 1986. The moral limits of the criminal law: harm to self. Oxford UP: New York.
16. Schuklenk U. 1998. Access to experimental drugs in terminal illness: ethical issues. Haworth: New York.
17. Rensburg Dv, Friedman I, Ngwena C, Pelser A, Stein F, Booysen F, Adendorff E. 2002. Strengthening local government and civic response to the HIV/AIDS epidemic in South Africa. Centre for Health Systems Research and Development, University of the Free State: Bloemfontein.
18. Forrest D, Streek B. Mbeki bumbles into another AIDS debate. Mail & Guardian October 28, 2001.
19. Forrest D, Streek B. Op. cit.
20. Mbali M. 2002. Mbeki's Denialism and the Ghosts of Apartheid and Colonialism for Post-apartheid Policy Making. http://www.nu.ac.za/ccs/files/mbeki.pdf Accessed online on June 02, 2003.
21. Karon T. 'You Cannot Attribute Immune Deficiency Exclusively to a Virus'. Time September 11, 2000.
22. South African Parliament Question Time. September 20, 2000.
23. Staff Writer. Minister slams AIDS drug propaganda. Mail & Guardian November 08, 2000.
24. Barrell H. Would the real AIDS dissident please reveal himself. Mail & Guardian April 19, 2002.
25. Sulcas A. Mbeki's AIDS call alarms scientists. Sunday Independent March 18, 2000.
26. Underhill G. Mbeki's AIDS torch shines on in cyberspace. Cape Argus August 15, 2002.
27. MedLine search undertaken on November 14, 2002.
28. Staff writer. Medunsa AIDS dissident 'advises' health minister. Mail & Guardian May 09, 2002.
29. Reuters news agency report. Africans aren't dying of AIDS, dissident says. April 06, 2001. Accessed online at http://www.iol.co.za/index.php?set_id=1&click_id=13&art_id=qw986571664810B243 on November 14, 2002, 14:25pm.
30. Sulcas A. Op. cit.
31. Mill JS. 1859 (1960). On Liberty. JM Dent: London, 79.
32. Mill JS. Op. cit., 111-112.
33. Koehn D. 1994. The Ground of Professional Ethics. Routledge: London.
34. Chadwick RF (ed). 1994. Ethics and the Professions. Ashgate: Aldershot.
35. Altenroxel L. AIDS debate may undermine youth sex habits. The Mercury September 24, 2000.
36. Altenroxel L. Wish you were right says Mbeki's AIDS man. The Star July 19, 2002.
37. Shisana O, Simbayi L. Op. cit.
38. Beckwith J. 2002. Making Genes, Making Waves. Harvard University Press: Boston, chapter 8.
39. Farham B. Time for Manto to Resign. Mail & Guardian July 19, 2002.
40. Staff Writer. Op. cit.
41. Altenroxel L. Insubordinate doctor fights to keep working. The Star February 28, 2002.
42. Prof Robin Wood's second affidavit can be found at http://www.tac.org.za/Documents/MTCTCourtCase/affidavit/robinwoodreplyaffidavit.doc Accessed on November 15, 2002, 11:46am.
43. Singer P. 2002. One World: The Ethics of Globalization. Yale UP: New Haven.
44. SAMA. 2000. HIV causes AIDS. SAMJ 90:461.
45. Editorial. Leave them be: South African scientists deplore their government's meddling. The Economist April 06, 2002, p 77.
46. Glover J. 1999. Humanity: A Moral History of the 20th Century. Jonathan Cape: London.
47. McLean GR, Jenkins T. 2003. The Steve Biko Affair: A Case Study in Medical Ethics. Developing World Bioethics 3:77-95.
48. I am grateful to my research assistant Vernon Naidoo for tracing relevant dissident quotes in the news media, as well as Mandisa Mbali, Lynne Altenroxel, Thofi S. Bishop, Edwin Cameron, Romi Fuller, Bonnie Steinbock and Tim Trengove-Jones for helpful assistance with and/or feedback on earlier drafts of this article. Richard Ashcroft and an anonymous reviewer critically and diligently reviewed this paper for the Journal of Medical Ethics. Both are thanked for triggering changes to (and hopefully improvements of) this paper.
This is a preprint of an article accepted for publication by the Journal of Medical Ethics and may not, save under the fair dealing provisions of the Copyright, Designs and Patents Act (1988), be reproduced without the consent of the BMJ Publishing Group.
ETHICS, JUSTICE AND STATISTICS
J. L. HUTTON
Department of Statistics, University of Warwick, U.K.

All scientists work with a code of practice, though for many this is implicit rather than explicit. We discuss various ethical theories. The possibility of following a code of ethics, whether the official one or an alternative code, is dependent on being able to obtain knowledge and understand the world. Professional knowledge has to be based on inferences from limited information. Ethics and epistemology are interdependent: how we decide and act depends on what we think we know, and what we seek to learn depends on what our ethics deem important. Studies which are of poor scientific quality are not ethical, as misuse of resources is universally discouraged. Statistics provides the optimal methods for designing studies and making inferences, and thus ethical professional conduct requires individual or collective understanding of some statistical theory and practice. Treatment of infectious disease requires not only knowledge of good strategies, but also infrastructure and resources to deliver such strategies. A particular concern is the quality of the design of trials and surveys. Considering whether the poverty of a country should influence the choice of treatments or the interpretation of ethical codes leads us to issues of justice and politics. Collectively owned, multi-professional work requires each of the various professions to take responsibility for the conduct of the research, and the impact that it might have. Statisticians share important responsibilities in maintaining ethical medical research in all countries.

ETHICS

The International Statistical Institute's Declaration of Professional Ethics (1985) recognises that statisticians collaborate with many other professions throughout the world. It is therefore important to be aware of the variations in cultural and ethical systems. I take 'ethics' to be the investigation of ideas and reasoning concerning concepts such as good, duty, responsibility, honour, choice and freedom, and the foundations and logical implications of these concepts. Applied ethics and morality, which address issues of actions in relation to human and other lives, environment and law, are the 'technology' of the 'science' of ethics.

Four main ethical theories are commonly discussed by philosophers: deontological, consequentialist, utilitarian and virtue ethics. The relation between these theories and the various religions is not discussed here. However, as we are concerned with disease, I expect the reader's religious understanding of suffering, and of the appropriate response to it, to influence their interpretation. Aristotle addressed the nature of a good life, and declared that virtue is primary. The person who lives a virtuous life, who thinks and does what is good, is happy. Character is primary. Other concepts, such as duty, freedom and contentment, are explicable in terms of virtue. Kant's imperative is to do what is right: to act in such a way that you would be willing to have others required to act in the same way. Duty is primary; hence this forms 'deontological ethics'. In contrast to ethics as a
reflection of the person acting, or as intrinsic to particular actions, consequentialism evaluates actions in terms of their consequences. Mill extended this focus on outcomes by using the language of science: actions which maximise utility, or the happiness of the most people, are the right or just actions. Statistical decision theory provides a calculus which combines the probabilities, or risks, of events with their values, whether benefits or harms, and indicates the best course of action; a small worked example is sketched below.

Autonomy is a very widely used expression in Western discussions of ethics. In medical ethics, autonomy is a dominating concept. It is based on the assumption that the rights of an individual person must be the dominant concern in health, social and economic decisions. Autonomy is incoherent in many contexts, as a person's freedom to take advice and choose their actions is dependent on the resources and opportunities within which they live. If there is no doctor or nurse, a person cannot rely on medical advice about, or treatment of, cancer. The training of professionals requires the co-operation of many institutions and appropriate use of taxes. The UK policy of recruiting nurses and teachers from developing countries, instead of addressing the reasons for the withdrawal of UK citizens from these professions, affects the 'autonomy' of people in other countries. Without drugs, or without food, focusing concern on preserving a patient's autonomy is paternalism expressed as abdication of responsibility.

Infectious diseases, especially highly contagious infectious diseases, require responses which are not driven by a concern for doctors' or patients' individual rights, as quarantine and vaccination are essentially communal in action. In Toronto, a doctor who chose to ignore requests for staff exposed to SARS to quarantine themselves spread the virus to many other people, some of whom died. His action respected his autonomy, but can easily be demonstrated to be immoral within deontological, utilitarian and virtue ethics. To paraphrase Mill, 'your autonomy in hand-waving stops short of my nose.'

Tolerance is a dominating concept in discussions of cultural and religious differences, but it is only effective when there is already agreement on basic values. In the UK and France, female genital mutilation is illegal, and cultural groups are forbidden this practice. Religions that believe certain offences require the death of the offender find that such practices are not tolerated in many European countries. Not all cultural or religious differences can be resolved by tolerance, as any woman who has travelled and worked widely will know. Autonomy and tolerance are practical, subsidiary 'virtues', as they rely on secure social conditions and primary ethics. Such ethics are convenient for the rich and the intellectually lazy.

Trust is essential for professional-client and doctor-patient relationships. This in turn requires particular kinds of society and judicial systems. Some societies do not allow women to consult male doctors. Instead, the husband or father will consult the doctor and describe the symptoms. Implicitly, if not explicitly, such a society believes that male doctors automatically defile a woman by hearing her symptoms, or that men, even men who are doctors, cannot be trusted not to molest women. Other societies assume that, in general, professionals can be trusted not to abuse their position.
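To make the decision-theoretic calculus mentioned above concrete, here is a minimal Python sketch: the expected utility of an action weights the utility of each (action, state) outcome by the probability of that state, and the calculus recommends the action with the highest expected utility. All action names, state names, probabilities and utilities below are invented for illustration only, and are not taken from any study discussed in this paper.

    # Minimal sketch of statistical decision theory: combine probabilities
    # of states with utilities of outcomes, and pick the best action.
    # All numbers below are hypothetical.

    # Probability of each state of the world (must sum to 1).
    p_state = {"infected": 0.3, "not_infected": 0.7}

    # Utility of each (action, state) outcome: benefits positive, harms negative.
    utility = {
        ("treat", "infected"): 80,        # effective treatment
        ("treat", "not_infected"): -10,   # side effects, no benefit
        ("withhold", "infected"): -100,   # untreated disease
        ("withhold", "not_infected"): 0,  # no intervention, no harm
    }

    def expected_utility(action):
        # Weight each outcome's utility by the probability of its state.
        return sum(p * utility[(action, state)] for state, p in p_state.items())

    actions = ["treat", "withhold"]
    for action in actions:
        print(action, expected_utility(action))   # treat 17.0, withhold -30.0

    # The calculus recommends the action that maximises expected utility.
    print("recommended:", max(actions, key=expected_utility))

With these hypothetical numbers, treating is recommended even though most people are uninfected, because the harm of withholding treatment from the infected minority dominates the calculation; changing the probabilities or utilities can reverse the recommendation, which is precisely why the choice of values is an ethical as well as a statistical matter.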
JUSTICE

Justice is the fair or equitable treatment of people, and the use of authority to maintain what is right. It is another technological aspect of ethics, and requires evidence on which to base judgments of equality of treatment. Judicial decisions reveal the ethics of judges and of systems of law. With vaccination of children, a judicial system might have to consider the competing claims of mother, father, child and community, and hence rely on ethical theories that provide methods for choosing between people's desires.

In England, women and men are theoretically equal. Judgements in rape cases evaluate the reality of this ethic. In the 1980s, three men broke into a house and tied up two men there. Two of the intruders then viciously raped and buggered a young woman, though the older man tried to stop them. The older man was given a five-year prison sentence for stealing a video recorder. The rapists were given less than two years' imprisonment. Other judges have given lenient sentences so that rapists' careers would not be too disrupted. Material possessions and men were rated higher than a woman's life.

In medical negligence cases, the focus is usually on a single claimant. The collective impact of such cases has affected the practice of medicine in both the UK and the USA, and not entirely positively. Excessively defensive medicine drives up costs, discourages professionals and can lead to harm. Often expensive technological interventions are presumed to be beneficial, without debate about the evidence used to evaluate the danger of not intervening. In a tax-funded system, the methods chosen to pursue and evaluate claims affect how much money is diverted from other health care uses.

A court case brought against Nelson Mandela, as President of South Africa, by multinational pharmaceutical companies that objected to the use of cheap generic HIV drugs illustrates the dominance of narrow utilitarianism in many countries. Claims that the USA is the defender of individual liberty and well-being ring hollow when US companies can inflict great damage with legal impunity, as in the Bhopal disaster; when the Kyoto agreement is ignored; or when the government erects trade barriers for steel or threatens the EU with trade sanctions because EU citizens are given food labels which allow them to choose whether to accept GMO products. Such judgements imply that actions that threaten the profits of the wealthy are deemed unjust - and unethical? Such narrow utilitarianism looks like the ethics of the tyrant: 'might is right'; my life is far more important than yours.

George Bush has criticised Europe for objecting to genetically modified organisms; he supports the claim that these agricultural products are intended to address hunger. However, the stated aims of Monsanto, a major player in the GMO field, indicate that 'maximise my profit' ethics are the basis of their strategy: "The business logic of sustainable development is that population growth and economic development will apply increasing pressure on natural resource markets. These pressures, and the world's desire to prevent the consequences of these pressures if unabated, will create vast economic opportunity - when we look at the world through the lens of sustainability we are in a position to see current - and foresee impending - resource market trends and imbalances that create market needs.
We have further focussed this lens on the resource market of water and land, and there are markets in which there are predictable sustainability challenges and therefore opportunities to
create business value..." Monsanto's Water and Aquaculture business, like its seed business, is aimed at controlling vital resources necessary for survival, converting them into a market and using public finances to underwrite the investments. A more efficient conversion of public goods into private profit would be difficult to find. "The right to water is the right to life." (Monsanto Strategy Paper on Water, 1991)

The choice of ethical framework and its judicial implementation is critical in addressing planetary emergencies in health and water. Any policy that aims to limit the spread of infectious diseases in the developing world will be affected by conflicts between justice based on economically defined utilitarianism, and deontological or virtue-based ethics. Using poor populations as test beds for drugs, or their lands for storing hazardous wastes, has been described as exploitation of poverty. The assumptions that states make need to be made explicit. Listening to the language of most northern states, we hear utilitarianism: it drives the reasoning of politicians and economists, with concern for economic indicators overriding most other considerations. Here, the people whose happiness is to be maximised are defined as 'our shareholders' or 'our race' or 'our state'. Extended use of the metaphor of decisions controlled by a market provides a favourable environment for ethics based on autonomy. People are presumed to be able to buy their health and to select their ethical codes. Those who believe that justice includes consideration of human rights - or basic human needs - will advocate ethical and judicial systems that aim to ensure a basic level of existence and development.

In order to be fair, we need to know how communities live, and what resources and opportunities they have. Deciding how countries and peoples are faring socially is very difficult. What can, and should, be measured, and the challenge of respecting regional differences while obtaining comparable development indices, is comprehensively addressed by Lievesley (2001). Statisticians have an important role in defining and refining such measures, and statistical ethics states a duty to colleagues of support and information.

INFECTIOUS DISEASE EPIDEMIOLOGY

Some of the ethical challenges of research into treatment for infectious diseases in developing countries were extensively debated as a result of placebo-controlled trials of anti-retroviral treatment to reduce mother-child transmission of HIV (Hutton, 2000). The arguments about the ethical factors to be considered depended substantially on the question the researchers chose to address and their view of evidence. The scope of many of the issues raised about the ethics of medical research is determined by how narrowly or broadly the research question is phrased, and by which codes of ethics are considered. The debate started from the question 'what is appropriate care for control groups' for one drug aimed at one aspect of one disease, as placebo-controlled trials were run in developing, but not developed, countries. Initially the debate focussed on which study designs, and what analyses and assessments of risk, were best, as the best scientific methods were seen as the most ethical. But the quality of evidence required should be evaluated in terms of the research question and the social context (Ashcroft et al. 1998). For example, what are the prospects of an HIV-free infant whose mother dies of AIDS?
The value of preventing mother-child transmission will depend on nations’ willingness to care for orphans. If the appropriate control is the local standard of care, questions arise on the possibility of
changing the costs or choice of treatment, or different methods of control of disease. This extends the debate to justice, politics and economics. Claims that the purpose of trials is to benefit poor nations can be evaluated against the prices set for access to the products being investigated. Local standards of care are a reflection of culture, trade laws and history. National infrastructure and education affect the options available for the control of epidemics. A universal standard of ethics is possible; this does not require the application of absolute principles to social questions (Resnik 1998). Infrastructural limitations in Africa raise further practical and ethical challenges. For example, following up patients with HIV or tuberculosis, to ensure that the full treatment is taken or to assess the effectiveness of interventions, is difficult when many have no recognisable address. Solutions such as patient-held records, or visits to informal settlements, have been proposed. However, without considerable care, these research-focused solutions can put patients at risk. In informal settlements in South Africa, a visit to a dwelling by a stranger, particularly a stranger associated with health or social care, brings stigma, and often leads to the assumption that the residents have HIV. Patient-held records could facilitate surveys of disease prevalence, but with limited literacy, patients might ask others to explain the content of the records, and thus reveal information that they would prefer to keep confidential. Violence against people with certain diseases is not unknown.
Concern to assess and limit the spread of HIV means that pregnant women are routinely screened for HIV. As well as being relevant to the care of the infant, this information is potentially important for the woman's partner. A conference presentation of the results of a Kenyan study revealed that about a quarter of HIV-positive women informed their husbands. This led to a lively debate about the risks to women of revealing their HIV status, and the responsibility of health care personnel to patients' relatives. Some men claimed it was irresponsible of the women not to reveal their status; others recognised that the fear of violence and of being left with no home and support was well founded. The effectiveness of social and judicial systems also affects the spread of sexually transmitted diseases. Most teenagers surveyed in Malawi knew that abstinence was the best preventive method, and about the use of condoms. However, the majority of girls who had had intercourse by the age of 17 had been forced. In Uganda, secondary school education is not free, so girls can be faced with the choice between avoiding the risk of HIV and education, as prostitution is their only source of money. A judicial system that minimises the injustice of rape (as mentioned for the UK) both reflects and endorses cruel treatment of girls and women. Breast-feeding by HIV-positive women is a further difficult issue. The risk of transmission of the virus has to be assessed within circumstances often difficult for both the mothers and the researchers. Bottle-feeding without access to clean water and the income to buy sufficient milk powder carries very high risks, and the risks of breast-feeding while receiving ART need to be studied. The evidence needed to make a sensible decision includes not only the risks of transmission and of bottle-feeding, but also knowledge of the pressures which multinational companies will exert to maximise their profits. The promotion of milk powder has long been a matter of contention, and developed countries have already begun to restrict access to water. Before advising a mother not to breast-feed, doctors ought to assess the local economic situation. The choices of governments, corporations and citizens on the legitimacy (justice) of markets
in services such as water and education are part of the moral process of addressing epidemics. The health of a nation is linked to the education and well-being of women, as mothers and wives are generally responsible for hygiene and nutrition. Infectious disease epidemics are more serious where there is poverty, as poor access to food and water allows infections to lead to disease and to death. Malaria is responsible for more than a million deaths a year, and leads to poverty, as it limits children's education and the strength of adults. Science driven by state-based utilitarian ethics ignores non-profit-making diseases, such as malaria and tuberculosis, in other states. Malaria research is neglected; the disease affects mainly poor people in countries that are unlikely to buy treatments at a price including a large profit margin. In 2001 at Doha, all World Trade Organisation members, including the United States of America, agreed that public health concerns should trump patent rights. But since then, the U.S.A. has consistently obstructed negotiations to provide access to affordable medicines for poor countries (source: Oxfam UK). A strategy to deal with state-based utilitarianism on its own terms would be to emphasise the loss of consumers, or profit generators, arising from the death of the middle generations through AIDS, and the loss of future consumers who succumb to malaria and tuberculosis. The only epidemic capable of limiting population growth is HIV, as its main effect is on the active adult population. Non-governmental organisations have started to make use of commercial pressure to achieve humanitarian ends.

CONCLUSION

Attitudes to debts incurred in the knowledge of a high risk of default, and assumptions that governments will protect Western banks, are part of the moral environment within which we consider how to control epidemics. I suggest that discussion of ethical issues in infectious disease epidemiology leads to questions about world politics and concepts of states and justice. It is essential to look beyond traditional medical ethics, which sees only the doctor-patient dyad, so that there is collaboration between a wide range of professionals, if infectious diseases are to be understood and controlled.

REFERENCES

R.E. Ashcroft, D.W. Chadwick, S.R.L. Clark, R.H.T. Edwards, L. Frith and J.L. Hutton. (1998) Implications of socio-cultural contexts for the ethics of clinical trials. In Health Services Research Methods: A Guide to Best Practice (ed. N. Black, J. Brazier, R. Fitzpatrick and B. Reeves). London: British Medical Journal. 108-116.
J.L. Hutton. (2000) Ethics of medical research in developing countries: the role of international codes of conduct. Stat. Meth. Med. Res., 9:185-206.
International Statistical Institute. Declaration of Professional Ethics. Voorburg, 1985.
D. Lievesley. (2001) Making a Difference: A Role for the Responsible International Statistician. The Statistician, 50:367-406.
D.B. Resnik. (1998) The ethics of HIV research in developing nations. Bioethics, 12:286-306.
IS ACCESS TO HIV/AIDS TREATMENT A HUMAN RIGHT? LESSONS LEARNED FROM THE BRAZILIAN EXPERIENCE

IVAN FRANÇA-JUNIOR, MD, PhD
Faculty of Public Health, University of São Paulo, Brazil

INTRODUCTION

Some may feel puzzled trying to answer such a question without any historical and practical experience of AIDS dynamics in developing countries. This article tries to provide some input by reviewing the history of the AIDS epidemic in Brazil: a) health impacts; b) governmental and societal responses; c) the Brazilian national AIDS drug policy; and d) a human rights analysis of the issue.
AIDS IMPACT ON HEALTH
In Brazil, the first AIDS cases were reported exactly twenty years ago. As of December 2002, 257,780 cases had been reported. At the beginning, the Brazilian epidemic was very similar to that of the USA, affecting mainly homosexuals, IV drug users (IDU) and people who had received transfusions of blood or blood products. This is no longer true: since 1993, heterosexual contact has been the major HIV/AIDS exposure category in Brazil (Ministério da Saúde 2002).
(10.1%). Out of the total, 51 (5%) were living in orphanages and other institutions (Doring, França-Junior, Stella 2003). Another ongoing study, among adolescents living with HIV/AIDS in São Paulo, has indicated that fear of stigma and discrimination has a big impact on disclosure and access to care, and affects various areas of their social and educational lives, increasing their isolation and vulnerability to AIDS as well as to other rights violations (ECI Brasil 2002). Additional health and social impacts due to AIDS may arise if incidence increases among poor people, and if heterosexual men and women and the populations of small cities are not looked after. Brazil is a middle-income country (per capita income US$ 4,350 in 1999) with a relatively low AIDS prevalence (0.65%), but its epidemic has proven to impact the population severely, and to be multiform, complex and far from resolved (UNAIDS 2002).

SOCIETAL AND GOVERNMENTAL RESPONSES TO THE EPIDEMICS
In 1983, a Program on AIDS was established in the state of São Paulo, the first in Latin America, bringing together a group of public health professionals to work on epidemiological surveillance. It was a new disease, without an established cause; at that time it was not even named AIDS. "Gay pest" and "Gay-related Immunodeficiency Disease" were common names, all of them with a stigma attached (Kalichman 1993). Two years later, the National Program on STD/AIDS was created. The novelty was that the members of these programs were former political and social activists against the military dictatorship (1964-1985), as well as committed and well-trained public health professionals. They were able to establish important national and international alliances in the governmental and societal responses (Ministério da Saúde 2003). According to Daniel and Parker (1991:26), "The epidemic started its development at precisely the same time as Brazilian society was trying to take the first steps toward re-establishing a participative democracy, after two decades of an authoritarian regime." That is why we could see the emergence of an organized civil society, which has played a major role. In 1985, the Support Group for the Prevention of AIDS (GAPA) was founded in São Paulo, followed by the establishment of other GAPAs in many states of Brazil. In 1986, another important group, the Associação Brasileira Interdisciplinar de AIDS (ABIA 2003), was founded by Betinho, Herbert Daniel and Richard Parker, among others. O Grupo pela Vidda, founded by Herbert Daniel, aimed at gathering together people living with HIV/AIDS, their friends and families. Many other groups were created, diversifying the scope of the activities and the populations targeted (Silva 1998, Grupo pela Vidda 2003). In Brazil, as opposed to the USA's response, based mainly on gay community mobilization, the social response grew out of a variety of groups created to combat AIDS as their specific priority (Daniel, Parker 1991). These NGOs have been developing various activities, from prevention to rehabilitation. In the first decade, for instance, the NGOs successfully pushed members of parliament to eliminate the commercialization of blood donation in 1988 (Constituinte proibe... 1988). At that time they also pioneered the combat of AIDS-associated stigma and discrimination (Silva 1998). They were important partners in contacting hard-to-reach communities, promoting the use of condoms, HIV
testing and the seeking of health care. They have also been at the forefront in pushing governments for the distribution of condoms to vulnerable populations. Nowadays, more than 200 NGOs are developing 1,780 projects with the Ministry of Health, which accounts for an investment of US$ 31 million (Ministério da Saúde 2003). This long-standing, but sometimes contradictory, partnership has allowed very creative and fruitful initiatives in the areas of public health and human rights. In the nineties, the NGO focus changed (Galvão 2002a). It emerged as transnational activism, pushing for global justice regarding access to HIV/AIDS treatment, particularly antiretrovirals. It is important to mention that national, state and local governments have been pushed, through legal petitions and political pressure, to distribute antiretroviral (ARV) drugs. There were two remarkable moments, in 1999 and 2000. There were rumors that the Brazilian government, due to a rapid and important currency devaluation, was considering the possibility of stopping the acquisition of AIDS-related drugs. Important demonstrations were organised in many Brazilian cities by NGOs and health professionals. Eventually, the Federal Government gave up the idea (Galvão 2002a). In both years, government expenditures on AIDS drugs were the highest, reaching 3.2% and 2.9% of the annual Ministry of Health budget, respectively. For 2003, expenses stand at 1.87% (Ministério da Saúde 2003). After 20 years, the National AIDS Program has been recognized as an example of good interaction between governmental and non-governmental policies. Recently, the Brazilian National AIDS Program was awarded the 2003 Gates Award for Global Health, a 1 million dollar prize (Bill & Melinda Gates Foundation 2003). According to Dr. William Foege, senior fellow at the Bill & Melinda Gates Foundation and Chairman of the Global Health Council's Board of Directors, "Brazil has shown that with perseverance, creativity, and compassion, it is possible for a hard-hit country to turn back its AIDS epidemic". And he added, "Brazil is saving lives and saving resources at the same time, and that should be an inspiration to countries around the world." Dr. Nils Daulaire, president and CEO of the Global Health Council, said, "the Brazilian National AIDS Program broke the logjam in the debate over AIDS treatment. Brazil showed the world that what was thought to be impossible - treating people with AIDS in a developing country - was indeed possible in the context of a comprehensive AIDS program, and that effective prevention and treatment efforts are enormously and mutually reinforcing."

ACCESS TO ARV DRUGS IN BRAZIL: PRODUCTION AND FREE DISTRIBUTION

From 1991 on, the National AIDS Program has been distributing zidovudine (AZT) to AIDS patients in the public health system (Passarelli 2001). Since then, a complex institutional structure has been set up to ensure free access to HIV/AIDS treatment. This policy includes: a) a network of public alternative care services (approx. 900 services); b) a national network of voluntary counseling and testing for HIV (approx. 208 services); c) a national network of laboratory support (HIV viral load: 73 laboratories; CD4+ cell count: 65 laboratories; HIV resistance testing: 12 laboratories); and d) a national ARV logistic control system in 424 dispensary units (Teixeira 2002).
In 1993, Brazil started production of its first AIDS drug (AZT) through a private company, followed by other governmental institutions. AZT was first synthesized in 1964 as a possible anticancer drug, but proved ineffective; almost 30 years later, intellectual property rights no longer protected it. In 1996, a Federal Law established a comprehensive policy on HIV/AIDS treatment (Ministério da Saúde 1996). This law was due to the persistent fight put up by community groups, who brought actions against their governments to ensure the right to treatment. Currently, Brazil is able to produce 8 out of 15 ARVs: didanosine (ddI), lamivudine (3TC), zidovudine (AZT), stavudine (d4T), zalcitabine (ddC), indinavir, nevirapine and the AZT+3TC combination in a single pill. Far-Manguinhos (a Federal Government laboratory) produces approximately 6 of the 15 ARVs used in Brazil (Far-Manguinhos 2003). All of them have been tested and approved for bio-equivalence and licensed as generics. Six other companies and governmental laboratories can produce other ARVs as generics (Ministério da Saúde 2001). To accomplish such goals, it must be remembered that Brazil, so far, has not compulsorily licensed any drug. Brazil established a strategy of local production, respecting intellectual property rights. According to Galvão (2002b): "Brazil has threatened to break patents on certain antiretrovirals if companies do not reduce prices. Such a measure, called compulsory licensing, would be admissible under Brazilian patent law, under certain circumstances. It must be emphasised, however, that despite such threats, and since Brazil implemented the TRIPS (trade-related aspects of intellectual property rights) agreement in 1996, compulsory licensing has not been applied, and no patent has been violated. Nonetheless, the threat has remained an important negotiating tool. For example, in February 2001, Brazil announced it was considering breaking patents for nelfinavir (Roche) and efavirenz (Merck) if manufacturers did not reduce their prices. Negotiations ensued, and Merck agreed to reduce the price of efavirenz by 60%. Reductions offered for nelfinavir, however, were deemed inadequate." Table 1 depicts the trajectory of expenditure figures in Brazil.

Table 1: Public access to ARV therapy according to calendar year and costs.
[Table body not legible in the source.]
(Sources: Ministério da Saúde 2001; Teixeira 2002)
In Table 2 we show some differences between the US and Brazilian (Ministry of Health, MoH) policies regarding AIDS treatment (Teixeira 2002). Brazil has a cheaper program with higher coverage and a more complete ARV scheme.
Table 2: Comparison of US and Brazilian policies on AIDS treatment.

Parameters | ADAP (USA)* | MoH (Brazil)**
Target | HIV+, low income | Universal
[remaining rows not legible in the source]

* AIDS Drug Assistance Program. ** Ministry of Health. Source: Teixeira 2002.
If Brazil had not established such a policy, expenditures just for these 8 ARV drugs could have reached R$ 1.325 billion, which would jeopardize any treatment effort. In 2003, unfortunately, the purchase of only 3 ARV drugs (nelfinavir from Roche, lopinavir from Abbott and efavirenz from Merck & Co.) used up 63% (US$ 573 million) of the national ARV budget (Ministério da Saúde 2003). The Brazilian government has just decreed, in line with the WTO agreement on ARV generics, that "a patent can be compulsorily licensed for national emergencies or public purposes, (...) only for public non-commercial use (...) when attested that the patent holder, directly or by its representatives, does not fulfill the needs" (Presidência da República 2003:1). All these efforts have paid off. The antiretroviral therapy policy had major impacts from 1996 to 2001: Brazil reduced AIDS mortality by 40-75%, morbidity by 60-80% and hospitalizations by 85%. It also saved US$ 1.1 billion (Teixeira 2002). In regard to ARV treatment monitoring indicators, recent data indicate an overall adherence of 75.0% (95% CI 73.1-76.9%) and primary HIV resistance of 6.6%. These values are similar to, or even better than, those of developed countries (Nemes, Souza, Carvalho 2002; Brindeiro et al. 2003).
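For readers who want to sanity-check interval estimates like the adherence figure above, a normal-approximation (Wald) 95% confidence interval for a proportion is simply p ± 1.96·sqrt(p(1−p)/n). The Python sketch below is illustrative only and is not the authors' analysis; the cohort size n = 2,000 is a hypothetical round number, chosen because it makes p = 0.75 reproduce an interval close to the quoted 73.1-76.9%.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95% confidence interval for a proportion."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical cohort: 1,500 adherent patients out of 2,000 (p = 0.75).
# The real study size is not reported in this text.
low, high = wald_ci(successes=1500, n=2000)
print(f"adherence 75.0%, 95% CI {low:.1%}-{high:.1%}")  # ~73.1%-76.9%
```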
The Brazilian experience of AIDS treatment has shown that it is a viable strategy, even without international funding and without any TRIPS violation. Despite its viability in Brazil, a question remains: is access to AIDS treatment a human right?

A HUMAN RIGHTS APPROACH TO AIDS TREATMENT ACCESS

First of all, we must distinguish, in international human rights law, two different characteristics of human rights (Mann, Gruskin, Grodin, Annas 1999). Some rights are to be considered absolute; in other words, these human rights cannot be alienated for any reason. They are: the right to life; not to be discriminated against; not to be tortured or punished/treated in a degrading, cruel and inhumane way; not to be enslaved or submitted to servitude; not to be arrested for not fulfilling contractual obligations; non-retroactivity of criminal offences; to be recognized as a person by the judicial system; and liberty of thought, conscience and religion. On the other hand, rights that could suffer derogation include the rights to liberty, freedom of speech, property, movement, education, health and so on. One of the main issues surrounding AIDS treatment is the human rights conflict involving, on one side, the right to intellectual property (claimed by pharmaceutical companies and the US government) and, on the other side, the right to life, the right to health, and the right not to be discriminated against (claimed by AIDS activists and the governments of developing countries). As we can see, there are some absolute rights involved in such a conflict, namely the right to life and the right not to be discriminated against. Apart from this theoretical approach, Brazil and the USA have had some confrontations in the World Trade Organization (WTO) in regard to AIDS drug patents. In January 2001, the WTO accepted a US government request for a discussion panel on the Brazilian patent law and its compatibility with the TRIPS agreement, specifically as related to the provisions on compulsory licensing. Brazil received many manifestations of solidarity supporting its policy of free-of-charge distribution of ARV drugs. More importantly, according to Galvão (2002b:3), "In April 2001, in what could be perceived as a sign of support for the Brazilian position, the 57th session of the UN Human Rights Commission approved a resolution, proposed by the Brazilian delegation, establishing access to medical drugs during pandemics, such as HIV/AIDS, as a basic human right. In June 2001, shortly before the UN General Assembly Special Session on HIV/AIDS, the USA withdrew the WTO panel against Brazil. In November 2001, the WTO, at its fourth ministerial conference, released a declaration allowing use of compulsory licensing in cases of national public-health emergencies. This declaration not only strengthened the Brazilian position but also suggested that the transnational movement for access to drugs for HIV/AIDS could lead to poor nations being able to acquire the necessary drugs." In the UN Resolution, it is clearly stated that the UN Human Rights Commission "Recognizes that access to medication in the context of pandemics such as HIV/AIDS is one fundamental element for progressively achieving the full realization of the right of everyone to the enjoyment of the highest attainable standard of physical and mental health". It is important to mention that this resolution was approved by a roll-call vote of 52 votes to none, with 1 abstention. Finally, an agreement may have been reached by all countries, under WTO auspices, on 30 August 2003 (WTO 2003). Government members gathered in a WTO General Council meeting were able to find common ground to break their standoff over intellectual property protection and public health. They agreed on legal changes that will make it easier for poorer countries to import cheaper generics made under compulsory licensing if they are unable to manufacture the medicines themselves. The document issued was called the Implementation of Paragraph 6 of the Doha Declaration on the TRIPS Agreement and Public Health. Public health is a much more important matter than intellectual property rights. As Panitchpakdi (2003) put it: "WTO Members recognize that the system that will be established by the Decision should be used in good faith to protect public health and, without prejudice to paragraph 6 of the Decision, not be an instrument to pursue industrial or commercial policy objectives". The story of an effective international policy on HIV/AIDS treatment as a human right has just begun. Scientists throughout the world must join in the efforts to accomplish this goal. Without treatment, people living with HIV/AIDS are sent home to suffer and to die, and no secondary prevention is possible with them. To treat is a human must and a technical need.
REFERENCES

1. ABIA. 2003. Available online [accessed 2003 Sep 11].
2. Bill & Melinda Gates Foundation. Brazilian National AIDS Program Receives 2003 Gates Award for Global Health [online]. Seattle; 2003. Available online [accessed 2003 Sep 11].
3. Brindeiro RM, Diaz RS, Sabino EC, Morgado MG, Pires IL, Brigido L, Dantas MC, Barreira D, Teixeira PR, Tanuri A, The Brazilian Network for Drug Resistance Surveillance. Brazilian Network for HIV Drug Resistance Surveillance (HIV-BResNet): a survey of chronically infected individuals. AIDS, 17:1063-1069, 2003.
4. Brito AB, Castilho EA, Szwarcwald CL. AIDS e infecção pelo HIV no Brasil: uma epidemia multifacetada (AIDS and HIV infection in Brazil: a multifaceted epidemic). Revista da Sociedade Brasileira de Medicina Tropical 34(2):207-217, 2000.
5. Constituinte proibe toda comercialização de sangue. Jornal do Brasil, Rio de Janeiro, 1988 maio 18; cad. 1:3.
6. Daniel H, Parker R. Aids - A Terceira Epidemia. São Paulo: Iglu, 1991, p. 24-39.
7. Dhalia C, Barreira D, Castilho EA. A AIDS no Brasil: situação atual e tendências. Boletim epidemiológico de AIDS 2000 [online]. Brasília (DF); 2000. Available at http://www.aids.gov.br/udtv/boletim_dez99_jun00/aids_brasil.htm [accessed 2003 Sep 11].
8. Doring M, França-Junior I, Stella I. Relatório técnico final. Projeto Viver ou conviver com o HIV: um desafio para crianças órfãs da aids. Porto Alegre; 2003.
9. ECI Brasil. Relatório preliminar. São Paulo; 2003.
10. Far-Manguinhos. 2003. Available online [accessed 2003 Sep 11].
11. Galvão J. A política brasileira de distribuição e produção de medicamentos anti-retrovirais: privilégio ou um direito? (Brazilian policy for the distribution and production of antiretroviral drugs: a privilege or a right?) Cad. Saúde Pública, Rio de Janeiro, 18(1):213-219, jan-fev, 2002a.
12. Galvão J. Access to antiretroviral drugs in Brazil. Lancet, London, November 5, 2002b.
13. Grupo pela Vidda. 2003. Available online [accessed 2003 Sep 11].
14. Kalichman AO. Vigilância epidemiológica de AIDS: recuperação histórica de conceitos e práticas. São Paulo, 1993; Dissertação de mestrado, FMUSP.
15. Mann J, Gruskin S, Grodin MA, Annas GJ. Health and Human Rights: a reader. Routledge: New York; 1999.
16. Ministério da Saúde. Lei No 9.313, de 13 de novembro de 1996 [online]. Brasília (DF); 1996. Available online [accessed 2003 Sep 11].
17. Ministério da Saúde. Terapia anti-retroviral e saúde pública: um balanço da experiência brasileira [online]. Brasília (DF); 1999. Available online [accessed 2003 Sep 11].
18. Ministério da Saúde. Acesso universal e gratuito [online]. Brasília (DF); 2000. Available online [accessed 2003 Sep 11].
19. Ministério da Saúde. National AIDS drug policy [online]. Brasília (DF); 2001. Available online [accessed 2003 Sep 11].
20. Ministério da Saúde. Boletim epidemiológico de AIDS 2002 [online]. Brasília (DF); 2002. Available online [accessed 2003 Sep 11].
21. Ministério da Saúde. O perfil da aids no Brasil e metas de governo para o controle da epidemia [online]. Brasília (DF); 2003. Available online [accessed 2003 Sep 11].
24. Passarelli CA. As patentes e os remédios contra a AIDS: uma cronologia. Boletim ABIA, 2001, 46, p. 8-9. Available online [accessed 2003 Sep 11].
29. UNAIDS. Brazil: Epidemiological Fact Sheets on HIV/AIDS and Sexually Transmitted Infections - 2002 Update [online]. Geneva; 2002. Available online [accessed 2003 Sep 11].
[Entries 22-23 and 25-28 are not legible in the source.]
8. AIDS AND INFECTIOUS DISEASES: AIDS VACCINE STRATEGIES
SYSTEMIC AND MUCOSAL IMMUNE RESPONSES INDUCED BY HIV-1 DNA AND HIV-PEPTIDE OR VLP BOOSTER IMMUNIZATION

JORMA HINKULA (1), CLAUDIA DEVITO (1), BARTEK ZUBER (1), FRANCO M. BUONAGURO (4), REINHOLD BENTHIN (1), BRITTA WAHREN (1), ULF SCHRODER (2)

1. Swedish Institute for Infectious Disease Control, Department of Virology, 171 82 Solna, Sweden
2. Eurocine AB, Karolinska Science Park, 171 77 Stockholm, Sweden
3. Karolinska Institute, Microbiology and Tumorbiology Center, 171 77 Stockholm, Sweden
4. Div. Viral Oncology & AIDS Reference Center, Ist. Naz. Tumori "Fond. G. Pascale", Naples, Italy
ABSTRACT & AIM

Immunization performed intranasally with an HIV-1 DNA gp160/rev plasmid, with or without a CCR5-DNA plasmid, in a novel mucosal cationic lipid adjuvant, followed by a gp41 peptide booster in L3 lipid adjuvant, induced a long-lasting HIV-1 subtype A, B and C specific systemic and mucosal B- and T-cell immune memory in mice. The inclusion of the HIV coreceptor CCR5 component in the vaccine may have enhanced the immunity towards HIV antigens. HIV-1 neutralizing antibodies, both in serum and in rectal mucosa, were induced only in animals that received DNA-plasmid immunizations. Systemic IFN-gamma-secreting cell-mediated responses against the HIV-1 envelope were still seen 12 months after immunization with HIV-1 DNA and peptide when the N3/L3 adjuvants were used. Peptide-immunized mice responded by developing a long-term gp41-specific systemic, non-HIV-neutralizing serum IgG response, but a shorter (up to 9 months) mucosal gp41-specific IgA immunity. Intranasal administration of low amounts of recombinant HIV-1 gag/gp120 virus-like particles (2 µg rVLP gag/gp120) resulted in serum IgG/IgA and vaginal and rectal IgA against HIV-1 gag antigens after one single immunization when the L3 adjuvant was used. Long-term serum IgG and IgA against the human and macaque CCR5 N-terminal and 2nd outer loop regions were detectable 8 months after the last immunization in one out of two macaques. The study was designed to analyse whether HIV-1 envelope DNA and gp41 transmembrane peptides mixed in novel adjuvants for mucosal immunization could provide a broadly HIV-1 subtype-recognizing, long-term HIV-1 envelope and/or CCR5 peptide-specific immunity, similar to what has previously been reported in highly exposed, persistently HIV-1 seronegative individuals.
INTRODUCTION

Over 20 years of HIV research have, so far, not resulted in a preventive vaccine against HIV. Efforts have been made to better understand which mechanisms and molecules mediate a systemic and/or mucosal immune response against HIV-1 and which prevent or delay disease progression (McMichael et al. 2003). Study cohorts which have provided intriguing and possible clues on these issues are highly exposed, persistently seronegative individuals (HEPS) and HIV-infected long-term non-progressors (LTNP). Reasons for their apparent protection may be genetic polymorphisms of HLA genes or of CC chemokine receptor genes such as CCR5. However, the 32 base-pair deletion in the gene that encodes the coreceptor CCR5 is present in only 2-4% of Caucasian individuals. The presence of innate and adaptive cell-mediated factors, such as cytotoxic T lymphocyte (CTL) and T-helper cell responses, which might protect against the establishment of infection by limiting viral dissemination, has been described for HEPS and, for preventing disease, in LTNP. It has been postulated that HIV-specific T-helper or cytotoxic T-lymphocytes can cross-react with different HIV subtype sequences. In studies on primates, protective immunity has been obtained by passive immunotherapy, with both polyclonal and monoclonal antibodies (Putkonen et al. 1991, Baba et al. 2000). The presence of antibodies in serum and mucosal samples can thus limit virus replication. In studies performed on humans, it has been shown that serum and mucosal IgA from HEPS can neutralize primary isolates of HIV-1. Further, it was shown in vitro that mucosal IgA collected from African HEPS could neutralize HIV-1 primary isolates representing many subtypes and was capable of inhibiting HIV-1 transcytosis (Devito et al. 2000, Belec et al. 2001). Studies performed on LTNP have revealed that proportions of these individuals have potent, broadly HIV-neutralizing antibodies in serum, probably supporting the protective role of neutralizing antibodies against HIV. Recently, it has been shown that IgA of HEPS recognizes an epitope located on the gp41 protein which differs from the IgA epitope recognized by HIV-infected individuals. It was shown that serum IgA molecules from HEPS bind to an epitope (restricted to aa 581-584, LQAR) which corresponds to the conserved coiled-coil pocket in the alpha-helical region of gp41 (Clerici et al. 2002). Studies performed by Bomsel et al. 1998 further support the interest in aiming for a vaccine-induced antibody response towards the gp41 transmembrane epitopes (aa 661-675), especially for mucosal immunity. This specific epitope was shown to bind to putative mucosal HIV receptors, and antibodies against, or peptides representing, this region were able to inhibit virus attachment to epithelial cells in vitro. Previous studies have also shown that certain HIV-1-exposed individuals may develop antibodies against the coreceptor CCR5 used by HIV-1 NSI phenotype viruses (Lopalco et al. 2000). The protective capacity of antibodies directed against this coreceptor has thus been investigated as a vaccine approach, with interesting possibilities in vivo.
Considering that experimental approaches may be necessary in the design of an effective prophylactic vaccine against HIV, we have evaluated the immunogenicity of the gp41 coiled-coil pocket in mice, and whether these immune responses can be enhanced by combining HIV-1 rgp160 DNA, CCR5 DNA and CCR5 peptides with other well-conserved envelope epitopes from clades A, B, C and D, representing HIV-1 clades circulating in sub-Saharan Africa (two Ugandan strains, A and D), one South American clade C from Brazil, and the clade B/MN isolate from Western Europe and the USA. A collection of targeted epitopes from subtypes A to D could provide a broad HIV-1 clade immunity combined
with autoantibodies against the prominent coreceptor CCR5 bound to the HIV-1 envelope. We hypothesised that this approach would result in a more robust neutralizing immunity, better able to resist the development of escape mutations so commonly seen with HIV-1. A further aim was to induce systemic and mucosal immunity of long duration by using safe, low-cost, efficient and temperature-stable novel antigen/adjuvant combinations for delivering non-live experimental vaccine candidates against HIV-1.

MATERIALS & METHODS

Animals, immunizations and adjuvants: Balb/c and C57Bl/6 mice were immunized twice at 4-8 week intervals. An intranasal HIV-1 gp160/rev DNA vaccine prime (8 µg DNA/mouse/immunization) followed by a gp41 peptide booster immunization (10 µg/mouse/immunization) was compared with gp41 peptide or rVLP gag/gp120 (2 µg/mouse/immunization) alone, and with control immunizations with adjuvant or PBS alone. The DNA plasmids were given mixed with N3 adjuvant (a cationic lipid-based adjuvant) or in saline at a total volume of 12 µl/mouse, and the peptide/protein boosters were given with L3 adjuvant (an endogenous lipid-based adjuvant) or PBS at a total volume of 14 µl/mouse. Titrations of the N3 and L3 adjuvants were performed to find the minimal adjuvant amount required to obtain immune responses without side effects (Schroder et al. 1999). Mice were anaesthetized with Isoflurane for 1 minute when receiving the intranasal immunogens. The animals were kept under pathogen-free conditions according to the ethical permissions at the Swedish Institute for Infectious Disease Control, Stockholm, Sweden.

Antigens: The HIV-1 gp160 and gagp37 DNA plasmids used represented HIV-1 subtype B/BaL (Hinkula et al. 1997); the human CCR5-expressing plasmid has been described elsewhere (Zuber et al. 2001). Virus-like particles gag/gp120 representing a Ugandan HIV-1 clade A have been described elsewhere (Buonaguro et al. 2002). Synthetic peptides (20-mers) representing the HIV-1 gp120 envelope and gp41 transmembrane regions of subtypes A, B, C and D, and human CCR5, were purchased from ThermoHybaid, Ulm, Germany.

Antibody responses: HIV-1 antigen-specific serum IgG and IgA, and fecal sample, vaginal wash and lung lavage IgA reactivity, were analysed by ELISA and HIV-1 neutralization assays. In brief, ELISA plates were coated with recombinant proteins at 1 µg/ml, or with 10 µg/ml of peptides representing HIV-1 gagp24, the gp120 envelope, gp41 transmembrane peptides of HIV subtypes A, B and C, and the human CCR5 2nd loop region. Coating was performed in 0.05 M sodium carbonate buffer (pH 9.5-9.6). Serum samples were diluted in PBS with 1 mg/ml BSA, 2% inactivated goat serum and 0.05% Tween 20, added at 100 µl/well to antigen-coated plates, and incubated for 90 min at 37°C. After incubation, plates were washed and conjugates were added: goat anti-mouse IgG (BioRad, Richmond, CA) or anti-mouse IgA (Southern Biotechnologies, Birmingham, AL) at a dilution of 1:1000, 100 µl/well, incubated for 2 h at 37°C, after which OPD (2 mg/ml ortho-phenylenediamine in 0.05 M sodium citrate, pH 5.5, with 0.003% H2O2) was added as substrate at 100 µl/well. After a 30-minute incubation, the reaction was stopped by adding 100 µl/well of 2.5 M H2SO4. Absorbances were measured at 490 nm. Values above the mean optical density of the negative control plus two standard deviations were used as the cutoff, and values above this were considered positive. The presence of IgG and IgA in mucosal samples was tested with the same assays, but with the first incubation performed at +4°C overnight, and thereafter as described above.
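The positivity rule just described, a cutoff at the mean negative-control OD plus two standard deviations, reduces to a few lines of code. The sketch below is a hedged reconstruction for illustration only, not the authors' software; all OD490 readings and sample names are invented.

```python
from statistics import mean, stdev

def elisa_cutoff(negative_controls: list[float]) -> float:
    """Cutoff = mean OD of the negative controls + 2 standard deviations."""
    return mean(negative_controls) + 2 * stdev(negative_controls)

def call_positives(samples: dict[str, float], cutoff: float) -> list[str]:
    """Return the IDs of samples whose OD490 exceeds the cutoff."""
    return [sid for sid, od in samples.items() if od > cutoff]

# Invented OD490 readings for illustration only.
neg_controls = [0.08, 0.10, 0.09, 0.11, 0.07]
sera = {"mouse-1": 0.45, "mouse-2": 0.12, "mouse-3": 0.31}
cut = elisa_cutoff(neg_controls)
print(f"cutoff = {cut:.3f}; positive: {call_positives(sera, cut)}")
```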
HIV-1 neutralization assays: Heat-inactivated mouse sera (56°C for 25-30 min) and Captive A/E-purified mucosal IgA were tested for their antiviral activity against HIV-1 isolates representing HIV-1 subtype B (SF2 and 6920), subtype A (92UG029) and subtype C (HIV-1 11160) (Zuber et al. 2000). In brief, HIV-1 and test samples were mixed and incubated for 60 min at 37°C before being added to PHA-activated human PBMCs (100,000 cells per well) and incubated an additional 16 hours at 37°C; cells were washed twice with RPMI 1640 supplemented with 5% inactivated fetal calf serum, and then cultured for 5-6 days in RPMI supplemented with 10% FCS, 20 U/ml rIL-2, 4 mM L-glutamine, 2 µM 2-mercaptoethanol, 2 µM sodium pyruvate, 5 IU/ml penicillin and 50 µg/ml streptomycin. At day 6-7, the presence of HIV-1 p24 antigen in the culture supernatants was tested with an HIV-1 p24 antigen capture ELISA (Devito et al. 2000). An 80% reduction of HIV-1 p24 antigen detection was considered a reliable neutralizing titer (NT80%).

Cell-mediated immune responses: T-cell immunity was analysed by T-cell proliferation and cytokine release assays (Hinkula et al. 1997, 2002). In brief, spleen cells were collected from mice at 2, 4, 8, 12, 24, 36, 52 and 64 weeks. Spleen cells (200,000 per well) were cultured in triplicate, 250 µl/well, for 4-5 days in RPMI 1640 supplemented with 5% inactivated fetal calf serum, 4 mM L-glutamine, 2 µM 2-mercaptoethanol, 2 µM sodium pyruvate, 5 IU/ml penicillin and 50 µg/ml streptomycin. Cells were cultured in the presence of 0.1-1 µg antigen, with mitogen and medium as controls. At day 4, a 50 µl volume of tritiated thymidine (1 µCi) was added per well for 16 h, and thymidine incorporation was measured in a beta-counter. The mean reactivity (counts per minute, cpm) was calculated for all triplicates: antigens, control antigen, mitogen and medium control. To obtain a value for specific proliferation (stimulation index, SI), the mean cpm value for each antigen was divided by the cpm value for medium. Stimulation was considered positive if the SI value was at least 3. Supernatants were collected at 72 hours after antigen stimulation, and interferon-gamma and interleukin-5 release was measured by commercial ELISA capture assays (R&D Systems, UK). The efficacy of the novel mucosal adjuvants for DNA-plasmid administration (N3 adjuvant) and for recombinant virus-like particle (rVLP gag/gp120) protein or gp41 peptide administration (L3 adjuvant, monooleate/oleic acid) was analysed with regard to their immune-enhancing properties.
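Both read-outs defined in this section are simple ratios: the NT80% titer is the highest reciprocal serum dilution at which p24 output falls by at least 80% relative to a virus-only control, and the stimulation index (SI) is the mean antigen cpm divided by the medium cpm, scored positive at 3 or above. The sketch below encodes the two rules as stated; it is a minimal illustration, and the dilution series and cpm values are invented.

```python
def nt80_titer(p24_by_dilution: dict[int, float], virus_control: float):
    """Highest reciprocal dilution giving >= 80% reduction in p24 antigen
    relative to the virus-only control, or None if nothing neutralizes."""
    neutralizing = [dilution for dilution, p24 in p24_by_dilution.items()
                    if (virus_control - p24) / virus_control >= 0.80]
    return max(neutralizing) if neutralizing else None

def stimulation_index(antigen_cpm: float, medium_cpm: float) -> float:
    """Mean cpm with antigen divided by mean cpm with medium alone;
    an SI of 3 or more is scored as a positive proliferative response."""
    return antigen_cpm / medium_cpm

# Invented p24 values (pg/ml) at reciprocal serum dilutions 20, 40 and 80.
p24 = {20: 5.0, 40: 12.0, 80: 55.0}
print(nt80_titer(p24, virus_control=100.0))                       # -> 40
print(stimulation_index(antigen_cpm=9000, medium_cpm=2500) >= 3)  # -> True
```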
RESULTS

Humoral immune responses: Table 1 shows the frequency of serum and mucosal antibody responders to HIV-1 antigens after one or two immunizations with HIV-1 DNA plasmids or rVLP gag/gp120 alone, or after the booster immunization with gp41 peptides. Long-term humoral immunity was shown to persist for over 12 months after the booster immunization, as shown by the presence of HIV-1 gp41- and CCR5-specific IgG- and IgA-secreting B lymphocytes in the spleen and regional lymph nodes of immunized mice. Intranasal HIV-1 gp160/rev DNA with N3 adjuvant, and HIV-1 gp160/rev plus human CCR5 DNA immunization followed by gp41/CCR5 peptide-L3 adjuvant immunization, resulted in long-term (>12 months) subtype A, B and C HIV-1 gp41-specific and CCR5 2nd loop peptide-specific B-cell memory responses in serum. In Table 2, the frequency of HIV-1 neutralizing activity in serum is shown for the different groups at 3-12 months after the last immunization. A long-term mucosal (intestinal, vaginal and lung lavage IgA) response was obtained in addition to a systemic immune response. The inclusion of the HIV coreceptor CCR5 component in the vaccine may have enhanced the immunity also towards HIV antigens. In primates, immunization with CCR5-DNA followed by a peptide booster induced high titers of CCR5-specific serum IgG and IgA capable of inhibiting CCR5-dependent HIV infection in vitro (Zuber et al. 2002). These immune responses remained in the immunized primates for over 9 months after the last immunization (not shown). In Figure 1, the median vaginal IgA responses towards different HIV-1 subtypes are shown over time in animals immunized with and without
HIV gp160/CCR5-DNA priming.
Figure 1. Median vaginal wash IgA kinetics against gp41 subtype A and B in four groups of twice intranasally immunized mice over a period of 12 months. [Figure: two panels, (A) and (B); x-axis, time 0-12 months; groups: gp41 peptides, HIV/CCR5-DNA + gp41 peptide, HIV-DNA + gp41 peptide, and controls.]
Using the N3 adjuvant for the intranasal DNA immunizations showed that a 10-40-fold reduction of DNA could be obtained. When the N3 adjuvant was titrated for efficacy and safety, we were able to show that a final concentration of 1-2% of the adjuvant efficiently supported the immune responses even with as little as 0.8 µg DNA plasmid (data not shown). For the results presented in this study, the 2% N3 adjuvant concentration was used. Peptide-immunized mice responded by developing a
long-term gp41-specific systemic serum IgG response, but a shorter (up to 9 months) mucosal gp41-specific IgA immunity. The possibility of using rVLP gag/gp120 instead of peptides is being tested in ongoing studies. Primary results indicate that the dosage of rVLP can be reduced 10-fold (to 2 µg/immunization) when recombinant VLP is mixed with the L3 adjuvant, as compared with the doses needed without adjuvant.

Figure 2. Fecal IgA responses against HIV-1 gp160 envelope antigen pre- and post-intranasal gp41 peptide booster immunization in HIV-1 gp160/rev DNA-N3 adjuvant (0-4%) immunized mice. Fecal wash dilution: 1/4.
Cell-mediated immunity

Figure 3A. Release of IL-5 in mice intranasally immunized with HIV-1 gp160/rev DNA-N3 (0-4%), pre- and post-HIV-1 gp41 peptide booster; gp160 antigen-stimulated cells.

Figure 3B. Responders with IFN-gamma-secreting cells after intranasal HIV-1 gp160/rev DNA-N3 adjuvant (0-4%) immunization, pre- and post-HIV-1 gp41 peptide booster; supernatants of rgp160 antigen-stimulated cells in vitro.
In Table 3 and Figures 3A and 3B, the frequencies of HIV-1 proliferative and interferon-gamma responders are shown. Mice receiving the 8 µg dose of DNA plasmids with N3 adjuvant and boosted with L3 adjuvant-mixed peptides/proteins always responded more strongly, both in T-cell proliferative responses and in higher amounts of released IFN-gamma or IL-5, than mice receiving antigens without the adjuvants. The possibility of using rVLP gag/gp120 instead of peptides is being tested in ongoing studies. Primary results indicate that the dosage of rVLP can be reduced 10-fold (to 2 µg/immunization) when rVLP is mixed with the L3 adjuvant, as compared to the doses needed without adjuvant.

DISCUSSION

HIV-1 neutralizing serum antibodies were induced which were still present 12 months after the booster immunization. Serum was shown to be capable of neutralizing HIV-1 strains representing HIV clades B=A>C. HIV-1 SF2-neutralizing serum, fecal and lung IgA was detectable only in the DNA-primed mouse groups. Immune responses were enhanced by using the novel N3 adjuvant for delivering DNA vaccines, as demonstrated by the lower amounts of DNA plasmid needed to evoke systemic and mucosal immune responses. In fact, it was possible to reduce the amount of HIV-1 gp160/rev DNA to 0.8 µg in a mixture with 1-2% N3 adjuvant and still obtain a detectable mucosal IgA and cell-mediated (IFN-gamma) immune response. In recent years, great progress has been made in the field of AIDS and DNA vaccination combined with heterologous boosters with live vectors, such as modified recombinant vaccinia (MVA) or adenovirus vectors (Amara et al. 2001, Robinson 2002). These vaccine candidates have efficiently induced potent and protective cell-mediated immune responses in primates in experimental settings. The general problem with HIV vaccines has been to evoke a potent, long-lasting
humoral immunity, preferably present at mucosal sites, the main port for HIV-1 transmission. Passive immunotherapy trials have shown promising possibilities, but similar antibodies have been very difficult to obtain by immunization. An important factor to bear in mind when aiming at functionally important antibodies is that we know relatively little of their efficacy on mucosal surfaces, while studies performed by Baba et al. 2001 and others have shown that systemic antibodies can work well. At least in a number of studies, what seem to be the most prominent, broadly neutralization-inducing epitopes in the HIV-1 envelope have been identified (Broliden et al. 1992, Muster et al. 1993, Burton 2000, Zwick et al. 2001). The unfortunate finding, when the most efficient broadly neutralizing antibodies have been characterized, has been their unusual CDR3 regions, often longer than seems possible to obtain in rodents such as mice or guinea pigs. Further, the N-linked glycosylation of the envelope seems to play an important role in hiding neutralizing epitopes, which is still an important factor to take into account when selecting vaccine candidates (Scanlan et al. 2002). The only way to properly investigate the efficacy of a neutralizing, antibody-inducing vaccine will thus always be man. The aim of this study was to develop and analyse at least one HIV-1 DNA prime and HIV-1 peptide booster (DNA-PEP) or virus-like particle candidate for a clinical trial. The basis and strategy of the vaccine efforts rest on the immune study results obtained in highly HIV-exposed, persistently seronegative individuals (Shearer et al. 1996, Devito et al. 2000, Belec et al. 2001, Clerici et al. 2002). The fascinating finding in parts of these cohorts has been their capacity to develop an immune response specific for the HIV gp41 transmembrane region as well as against self, such as the HIV-1 coreceptor CCR5. This kind of double-directed antibody response should logically be less sensitive to virus mutations. If it were possible to mimic this kind of antibody response by vaccination, it might result in a more robust protective immunity than when targeting the HIV envelope only. The focus would be to provide HIV vaccines with non-live vaccine candidates, but with the potential of providing the desired immune responses that can be obtained with live vaccines. The main aim of this concept of a preventive vaccine would be to provide long-lasting mucosal (genital, rectal) sIgA responses against conserved HIV-1 suppressing/inhibiting regions in the HIV envelope, and to understand the basic mucosal immunology behind intranasally induced immunity. Intranasal delivery would also often provide systemic immunity, a second line of immunity in the blood and peripheral organs. Combining a smart vaccine delivery device with new promising adjuvants for mucosal delivery would provide a more long-lasting immunity than vaccination without an adjuvant. This study proposes vaccine candidates based on heterologous vaccine strategies, in part circular HIV-DNA plasmids previously shown to be safe, and in part synthetic peptides/proteins, which efficiently induce HIV-specific immunity in HIV-infected individuals or animals but which have never been given intranasally in man.
Alternatively, in our ongoing studies, HIV-1 subtype-broad (subtypes A, B and C) minimalistic HIV-DNA candidates are being studied for analysis and comparison of their efficacy in inducing broad, long-lasting HIV subtype-recognizing memory responses (Ljungberg et al. 2002, Devito et al. 2002). One special task that we plan to address with our envelope vaccines will be to provide humoral sIgA and systemic IgG immunity towards both phenotypes of HIV, both the rapid/high and the commonly sexually transmitted slow/low phenotypes. For these kinds of studies we need novel, safe and inexpensive adjuvants, such as the ones proposed in this study.
CONCLUSIONS

Intranasal DNA-N3 prime followed by one peptide-L3 booster immunization was able to induce a subtype-broad humoral B-cell memory and HIV-1 neutralizing immunity for at least half of a mouse's lifetime.

REFERENCES

Amara RR., Villinger F., Altman JD., et al. Control of a mucosal challenge and prevention of AIDS by a multiprotein DNA/MVA vaccine. Science 2001, 292:69-74.
Baba TW., Liska V., Hofmann-Lehmann R., Vlasak J., Xu W., Ayehunie S., Cavacini LA., Posner MR., Katinger H., Stiegler G., Bernacky BJ., Rizvi TA., Schmidt R., Hill R., Keeling ME., Lu Y., Wright JE., Chou TC., Ruprecht RM. Human neutralizing monoclonal antibodies of the IgG1 subtype protect against mucosal simian-human immunodeficiency virus infection. Nature Med. 2000, 6:200-206.
Belec L., Ghys PD., Hocini H., Nkengasong JN., Tranchot-Diallo J., Diallo MO., et al. Cervicovaginal secretory antibodies to human immunodeficiency virus type 1 (HIV-1) that block viral transcytosis through tight epithelial barriers in highly exposed HIV-1-seronegative African women. J. Infect. Dis. 2001, 184:1412-1422.
Bomsel M., Heyman M., Hocini H., Lagaye S., Belec L., Dupont C., Desgranges C. Intracellular neutralization of HIV transcytosis across tight epithelial barriers by anti-HIV envelope protein dIgA or IgM. Immunity 1998, 9:277-287.
Broliden P.A., von Gegerfelt A., Clapham P., Rosen J., Fenyo E.M., Wahren B., Broliden K. Identification of human neutralization-inducing regions of the human immunodeficiency virus type 1 envelope glycoproteins. Proc. Natl. Acad. Sci. USA 1992, 89:461-465.
Buonaguro L., Racioppi L., Tornesello M.L., Arra C., Visciano M.L., Biryahwaho B., Sempala S.D.K., Giraldo G., Buonaguro F.M. Induction of neutralizing antibodies and cytotoxic T lymphocytes in Balb/c mice immunized with virus-like particles presenting a gp120 molecule from a HIV-1 isolate of clade A. Antiviral Res. 2002, 54:189-201.
Burton D.R., Montefiori D.C. The antibody response in HIV-1 infection. AIDS 1997, 11:S87-S98.
Burton D.R., Parren P.W. Vaccines and the induction of functional antibodies: time to look beyond the molecules of natural infection? Nat. Med. 2000, 6:123-125.
Clerici M., Barassi C., Devito C., Pastori C., Piconi S., Trabattoni D., Longhi R., Hinkula J., Broliden K., Lopalco L. Serum IgA of HIV-exposed uninfected individuals inhibit HIV through recognition of a region within the alpha-helix of gp41. AIDS 2002, 16:1731-1741.
Devito C., Hinkula J., Kaul R., Kimani J., Kiama P., Lopalco L., Barassi C., Piconi S., Trabattoni D., Bwayo J.J., Plummer F., Clerici M., Broliden K. Cross-clade HIV-1-specific neutralizing IgA in mucosal and systemic compartments of HIV-1-exposed, persistently seronegative subjects. J. AIDS 2002, 30:413-420.
Devito C., Levi M., Broliden K., Hinkula J. Mapping of B-cell epitopes in rabbits immunized with various gag antigens for the induction of HIV-1 gag capture ELISA reagents. J. Immunol. Methods 2000, 238:69-80.
Devito C. Functional properties of antibodies in resistance against HIV-1 infection. Karolinska Institutet, Sweden, Thesis 2002.
Hinkula J., Svanholm C., Schwartz S., Lundholm P., Brytting M., Engstrom G., Benthin R., Glaser H., Kohleisen B., Erfle V., Okuda K., Wigzell H., Wahren B. Recognition of prominent viral epitopes induced by immunization with human immunodeficiency virus type 1 regulatory genes. J. Virol. 1997, 71:5528-5539.
Kaul R., Plummer F.A., Kimani J., Dong T., Kiama P., Rostron T., Njagi E., MacDonald K.S., Bwayo J.J., McMichael A.J., Rowland-Jones S.L. HIV-1-specific mucosal CD8+ lymphocyte responses in the cervix of HIV-1-resistant prostitutes in Nairobi. J. Immunol. 2000, 164(3):1602-1611.
Ljungberg K., Rollman E., Eriksson L., Hinkula J., Wahren B. Enhanced immune responses after DNA vaccination with combined envelope genes from different HIV-1 subtypes. Virology 2002, 302:44-57.
Lopalco L., Barassi C., Pastori C., Longhi R., Burastero S.E., Tambussi G., Mazzotta F., Lazzarin A., Clerici M., Siccardi A.G. CCR5-reactive antibodies in seronegative partners of HIV-seropositive individuals down-modulate surface CCR5 in vivo and neutralize the infectivity of R5 strains of HIV-1 in vitro. J. Immunol. 2000, 164:3426-3433.
McMichael A.J., Hanke T. HIV vaccines 1983-2003. Nat. Med. 2003, 9:874-880.
Muster T., Steindl F., Purtscher M., Trkola A., Klima A., Himmler G., Ruker F., Katinger H. A conserved neutralizing epitope on gp41 of human immunodeficiency virus type 1. J. Virol. 1993, 67:6642-6647.
Myers G., Lenroot R. HIV glycosylation: what does it portend? AIDS Res. Human Retrov. 1992, 8:1459-1460.
Putkonen P., Thorstensson R., Ghavamzadeh L., Albert J., Hild K., Biberfeld G., Norrby E. Prevention of HIV-2 and SIVsm infection by passive immunization in cynomolgus monkeys. Nature 1991, 352:434-436.
Robinson H.L. New hope for an AIDS vaccine. Nat. Rev. Immunol. 2002, 2:239-250.
Scanlan C.N., et al. The broadly neutralizing anti-human immunodeficiency virus type 1 antibody 2G12 recognizes a cluster of alpha-1,2 mannose residues on the outer face of gp120. J. Virol. 2002, 76:7306-7321.
Schroder U., Svenson S.B. Nasal and parenteral immunizations with diphtheria toxoid using monoglyceride/fatty acid lipid suspensions as adjuvants. Vaccine 1999, 17:2096-2103.
Shearer G.M., Clerici M. Protective immunity against HIV infection: has nature done the experiment for us? Immunol. Today 1996, 17:21-24.
Zwick M.B., Labrijn A.F., Wang M., Spenlehauer C., Saphire E.O., Binley J.M., Moore J.P., Stiegler G., Katinger H., Burton D.R., Parren P.W. Broadly neutralizing antibodies targeted to the membrane-proximal external region of human immunodeficiency virus type 1 glycoprotein gp41. J. Virol. 2001, 75:10892-10905.
Zuber B., Hinkula J., Vodros D., Lundholm P., Nilsson C., Morner A., Levi M., Benthin R., Wahren B. Induction of immune responses and break of tolerance by DNA against the HIV-1 coreceptor CCR5 but no protection from SIVsm challenge. Virology 2000, 278:400-411.
ACKNOWLEDGEMENTS

This work was supported by research grants from the Swedish Research Council, the Karolinska Institutet Research Fund and the Swedish Medical Society.
Table 1: Frequency of serum, rectal and vaginal IgG and IgA responders against HIV-1 antigens in groups of mice intranasally immunized with HIV-DNA alone or with peptide booster, with and without mucosal adjuvants N3 and L3.

Group (immunogen) | Adjuvant | No. of mice | No. of immunizations | Serum IgG (rgp160, gagp24) | Serum IgA (rgp160, gagp24) | Fecal IgA (gp160, gagp24) | Vaginal IgA (gp160, gagp24)
8 µg HIV-1 DNA gp160/rev | Saline | 8 | 2 | 0/8, 0/8 | 1/8, 0/8 | 0/8, 0/8 | 0/8, 0/8
8 µg HIV-1 DNA gp160/rev | N3 | 6 | 2 | 5/6, 0/6 | 6/6, 0/6 | 6/6, 0/6 | 5/6, 0/6
80 µg HIV-1 DNA gp160/rev + gp41 peptides | Saline/L3 | 6 | 2 | 6/6, N.A. | 6/6, N.A. | 6/6, N.A. | 6/6, N.A.
8 µg HIV-1 DNA gp160/rev + gp41 peptides | N3/L3 | 12 | 2 | 12/12, N.A. | 12/12, N.A. | 7/7, N.A. | 6/7, N.A.
10 µg gp41 peptides | Saline | 8 | 2 | 1/8, 0/4 | 0/8, 0/4 | 0/8, N.A. | 0/8, N.A.
50 µg gp41 peptides | Saline | 5 | 2 | 2/5, 0/5 | 1/5, N.A. | 1/5, N.A. | 0/5, N.A.
10 µg gp41 peptides | L3 | 7 | 2 | 7/7, N.A. | 6/7, N.A. | 6/6, 0/6 | 6/6, N.A.
2 µg rVLP gag/gp120 | PBS | 6 | 1 | 0/6, 0/6 | 0/6, 0/6 | 0/6, 0/6 | 0/6, 0/6
2 µg rVLP gag/gp120 | L3 | 6 | 1 | 0/6, 6/6 | 0/6, 6/6 | 0/6, 6/6 | 0/6, 5/6
L3 or N3 adjuvant alone | N3/L3 | 10 | 2 | 0/10, 0/10 | 0/10, 0/10 | 0/6, 0/3 | 0/6, 0/3

Abbreviations: N.A. = not analysed.
Table 2: Frequency of HIV-1 neutralizing serum and rectal antibody responders against HIV-1 antigens in groups of mice intranasally immunized with HIV-DNA alone or with peptide booster, with and without mucosal adjuvants N3 and L3. NT80% titers are given in parentheses.

Groups, doses and immunogens | Adjuvant | No. of mice | Serum: HIV-1 subtype A | Serum: HIV-1 subtype B | Serum: HIV-1 subtype C | Rectal: HIV-1 SF2 | Rectal: HIV-1 subtype B
8 ug HIV-1 DNA gp160/rev | Saline | 8 | 0/8 | 0/8 | 0/8 | 0/4 | 0/4
8 ug HIV-1 DNA gp160/rev | N3 | 6 | 1/6 (30) | 6/6 (20-60) | 0/6 (<20) | 0/3 (<2) | 0/3 (<2)
80 ug HIV-1 DNA gp160/rev + gp41 peptides | Saline/L3 | 6 | N.A. | 6/6 (20-220) | N.A. | 3/3 (4-8) | 2/3 (4-5)
8 ug HIV-1 DNA gp160/rev + gp41 peptides | N3/L3 | 12 | 3/6 (20-40) | 6/6 (30-120) | 2/6 (20-30) | 4/4 (4-12) | 4/4 (4-8)
10 ug gp41 peptides | Saline | 8 | 0/8 | 0/8 | 0/8 | 0/4 | 0/4
50 ug gp41 peptides | Saline | 5 | 0/5 | 0/5 | 0/5 | 0/4 | 0/4
10 ug gp41 peptides | L3 | 7 | 0/6 | 0/6 | 0/6 | 0/4 | 0/4
2 ug rVLP gag/gp120 | PBS | 6 | N.A. | N.A. | N.A. | N.A. | N.A.
2 ug rVLP gag/gp120 | L3 | 6 | N.A. | N.A. | N.A. | N.A. | N.A.
L3 or N3 adjuvant | N3/L3 | 10 | 0/3 | 0/3 | 0/3 | 0/2 | 0/2

Abbreviations: N.A. = not analysed; NT80% = reciprocal dilution resulting in 80% reduced HIV-1 p24 antigen production.
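The NT80% endpoint defined in the footnote above is, in effect, an interpolated endpoint titer from a serial-dilution neutralization assay. The sketch below illustrates one common way to compute such a titer (log-linear interpolation between the two dilutions that bracket the 80% cut-off); the function name and the example dilution series are hypothetical illustrations, not data or code from this study.

    import math

    def nt80_titer(dilutions, p24_reduction, cutoff=0.80):
        """Interpolated NT80% titer: the reciprocal serum dilution at which
        p24 antigen production is reduced by `cutoff` (80%) relative to the
        virus-only control. `dilutions` are reciprocal dilutions in increasing
        order; `p24_reduction` holds the fractional reduction (0.0-1.0) at
        each dilution and is assumed to fall as the serum is diluted out."""
        points = list(zip(dilutions, p24_reduction))
        for (d1, r1), (d2, r2) in zip(points, points[1:]):
            if r1 >= cutoff >= r2:  # cut-off crossed between d1 and d2
                frac = (r1 - cutoff) / (r1 - r2)
                log_titer = math.log2(d1) + frac * (math.log2(d2) - math.log2(d1))
                return 2 ** log_titer
        return None  # cut-off not crossed within the dilutions tested

    # Hypothetical series: 80% neutralization is lost between 1:40 and 1:80,
    # giving an interpolated NT80% titer of roughly 46.
    print(round(nt80_titer([20, 40, 80, 160], [0.95, 0.85, 0.60, 0.30])))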
Table 3: Frequency of cell proliferation and interferon-gamma responders against HIV-1 antigens in groups of mice intranasally immunized with HIV-DNA alone or with peptide booster, with and without mucosal adjuvants N3 and L3.

Groups, doses and immunogens | Adjuvant | No. of mice | Proliferation: HIV-1 gp160 | Proliferation: gp41 peptides | Proliferation: gag p24 | IFN-gamma: HIV-1 gp160 | IFN-gamma: gp41 peptides | IFN-gamma: gag p24
8 ug HIV-1 DNA gp160/rev | Saline | 8 | 2/8 | 0/8 | 0/8 | 0/8 | 0/8 | N.A.
8 ug HIV-1 DNA gp160/rev | N3 | 6 | 6/6 | 3/6 | N.A. | 5/6 | 2/6 | N.A.
80 ug HIV-1 DNA gp160/rev + gp41 peptides | Saline/L3 | 6 | 6/6 | 2/6 | N.A. | 6/6 | 2/6 | N.A.
8 ug HIV-1 DNA gp160/rev + gp41 peptides | N3/L3 | 12 | 12/12 | 12/12 | 0/4 | 10/12 | 11/12 | 0/4
10 ug gp41 peptides | Saline | 8 | 0/8 | 0/8 | 0/8 | 0/8 | 0/4 | N.A.
50 ug gp41 peptides | Saline | 5 | 0/5 | 0/5 | 0/5 | 0/5 | 0/5 | N.A.
10 ug gp41 peptides | L3 | 7 | 4/6 | 6/6 | 0/6 | 1/6 | 2/6 | 0/4
2 ug rVLP gag/gp120 | PBS | 6 | 0/6 | 0/6 | 1/6 | N.A. | 0/6 | 0/6
2 ug rVLP gag/gp120 | L3 | 6 | 0/6 | 0/6 | 6/6 | N.A. | 1/6 | 3/6
L3 or N3 adjuvant | N3/L3 | 10 | 0/6 | 0/6 | 0/7 | 0/3 | 0/3 | 0/3

Abbreviations: N.A. = not analysed.
PRE-CLINICAL PRIMATE VACCINE STUDIES

RIGMOR THORSTENSSON
Swedish Institute for Infectious Disease Control, Stockholm, Sweden

AIDS is a serious challenge in both industrialised and developing countries, especially in Africa and Southeast Asia. In sub-Saharan Africa, the most affected area with almost 70% of the global total of HIV-positive people, life expectancy at birth is set to recede to levels seen half a century ago. According to UNAIDS and WHO there were 42 million people, including 3.2 million children, living with HIV infection at the end of 2002, 5 million of whom were infected during that year.

The global AIDS epidemic is caused by different HIV-1 subtypes (designated A-J), which differ from each other in about 30% of their genetic sequence. Subtype B dominates in North America, Australia and Europe and is therefore the most studied of all subtypes (1). However, subtype C is the most widely spread subtype globally. In Africa all subtypes occur, but subtypes A, C and D are the most prevalent (2).

Anti-retroviral therapy has changed the course of HIV disease in developed countries. However, therapy is usually not available in developing countries, and it is not useful for prophylaxis. Vaccination is therefore necessary in order to decrease the spread of HIV infection. It is also a means of increasing the immunity of those already infected during anti-retroviral treatment, so-called immunotherapy.

Non-human primate models are important to bridge the gap between concept and practice by helping to prioritise between different experimental immunogens and to predict the relative protective efficacy of different candidate immunogens (3). The only available HIV-1 challenge model is the HIV-1/chimpanzee model, but it is considered a less suitable model due to several disadvantages: the chimpanzee is an endangered species, the model is extremely expensive, and the animals rarely develop full-blown AIDS. Simian immunodeficiency virus (SIV) induces a disease in monkeys that is similar to AIDS in humans. Monkeys can be infected by intravaginal, intravenous, intrarectal and oral routes, thereby mimicking the principal routes of HIV infection in humans. The incubation period and severity of the disease correlate with the viral load in a way that is similar to the disease in humans (4).

The most effective HIV vaccine should protect against both cell-free and cell-associated virus, and against virus on mucosal surfaces as well as systemically disseminated virus. Therefore an ideal HIV vaccine should induce both humoral and cellular immunity, both systemically and mucosally. A number of vaccine studies in monkey models have failed to identify a clear correlate of protection, but in recent experiments a correlation between the breadth and strength of the immune responses and the protective efficacy has been demonstrated. HIV sub-unit vaccine candidates stimulate good antibody responses but are poor at initiating or boosting cytotoxic T-lymphocytes (CTL), whereas live vector vaccines often generate CTL responses but low antibody responses. The so-called prime-boost vaccine regimen uses two candidate vaccines with complementary immunogenicity profiles, thus inducing both cellular and humoral immunity (5-7; own data).

The extensive genetic diversity of HIV is often viewed as a serious barrier to the development of an AIDS vaccine. Of special concern is the lack of vaccine-induced antibodies to HIV that are able to neutralize primary HIV isolates.
The viral proteins most commonly used for immunizations of non-human primates or humans have been envelope glycoproteins. Envelope subunits given alone (in an adjuvant) have been shown to induce protective immunity against non-pathogenic challenge virus in the HIV-1/chimpanzee model and in the HIV-2/macaque and SHIV/macaque models, but not against pathogenic SIV challenge in macaques. A limitation with the use of gp120 subunits as vaccines, alone or in prime-boost regimens, is the failure of these subunits to induce cross-neutralizing antibodies against primary HIV isolates. However, there are some recent reports of new candidate vaccines, based on HIV-1 envelope-CD4 receptor complexes or the CD4 receptor-co-receptor complex for HIV, that have been able to induce broadly neutralizing antibodies against primary HIV isolates in experimental animals (8, 9).

Live attenuated vaccines have generally been the most successful approach to immunizing against virus-induced disease. We, and others, have been able to demonstrate vaccine-induced complete protection against SIV infection (so-called sterilizing immunity), or reduction of virus production and prevention of SIV-associated disease, by use of different live attenuated vaccines (5; own data). Live recombinant virus-based vaccines have the potential of being useful HIV vaccines since they mimic live attenuated vaccines in inducing both humoral and cellular immune responses (5-7; own data) but do not have the safety risks of live attenuated vaccines. Several vaccine trials in macaques as well as in humans (phase I/II), using priming with a live recombinant vaccine and boosting with viral proteins, have shown that the prime-boost approach enhances the immune responses relative to the use of either component alone. Enhanced immune responses have also been demonstrated by the use of other prime-boost combinations, i.e. viral DNA followed by a live recombinant vaccine, or a combination of two live recombinant vaccines based on different viral vectors. Furthermore, studies of vaccine-induced protection in macaques have shown a higher protective efficacy by the use of prime-boost combinations. Poxviruses, including vaccinia, attenuated vaccinia and canarypox, have been the most commonly used live viral vectors in HIV/SIV vaccine studies (6). Other viral vectors used in HIV/SIV vaccine studies comprise Semliki Forest virus, Venezuelan equine encephalitis virus, adenovirus, polio virus and hepatitis virus.

For a vaccine to be effective against HIV-1 infection in man, it will have to protect against mucosal challenge, since the most common route of HIV transmission is through sexual exposure. We, and others, have shown protection against mucosal (rectal or vaginal) SIV challenge in macaques vaccinated with live attenuated virus (4). Control of infection with homologous SIV or chimeric SIV/HIV-1 (SHIV) inoculated intrarectally has also been demonstrated in a proportion of macaques immunized with a live recombinant SIV vaccine based on an attenuated vaccinia vector (MVA or Nyvac), or immunized with a DNA-based vaccine followed by an MVA-based vaccine (10, 11). In our most recent experiment, macaques systemically immunized with a combination of SFV-HIV-1/SIV and MVA-HIV-1/SIV and challenged intrarectally with SHIV showed a significant reduction of the plasma viral load compared to the controls.

Although the non-human primate models have provided information that has significantly advanced vaccine design, only comparison with the outcome of clinical phase III trials in humans will elucidate the accuracy of the experimental models (Table 1).
The first phase III trials for any vaccine candidate, bivalent subtype B and subtype B/E envelope glycoproteins, are ongoing in North America, Europe and Thailand among HIV-negative volunteers with high-risk behaviour. Initial results presented on 24 February 2003 showed that the vaccine, although safe, did not prove effective in the trials in North America and Europe (12). More than 150 completed phase I and II vaccine trials, using approximately 50 different HIV-1 vaccine candidates presented as live poxvirus vectors, recombinant protein subunits, synthetic peptides, plasmid DNA or a combined prime-boost regimen, have shown that the vaccines are safe and well tolerated (13).
Table 1. Only phase III trials in humans can tell the real importance of animal models.

HIV infection in humans | Animal models
Main route of transmission is by sexual intercourse | Few studies with mucosal challenge; limited knowledge about local immune responses
Low virus doses (0.1-1% transmission rate) | High doses of challenge virus (100% transmission rate)
Different subtypes in different geographical regions | Often homology between vaccine and challenge virus; limited knowledge about cross-protection against different subtypes
REFERENCES
1. UNAIDS, WHO. AIDS epidemic update: December 2002. Geneva: UNAIDS/WHO, 2002.
2. Peeters M. and Sharp PM. Genetic diversity of HIV-1: the moving target. AIDS 14 (Suppl 3): S129-S140, 2000.
3. Nathanson N., Hirsch VM., Mathieson BJ. The role of nonhuman primates in the development of an AIDS vaccine. AIDS 13 (Suppl A): S113-S120, 1999.
4. Whetter LE., Ojukwu IC., Novembre FJ., Dewhurst S. Pathogenesis of simian immunodeficiency virus infection. J. Gen. Virol. 80:1557-1568, 1999.
5. Letvin NL., Barouch DH., Montefiori DC. Prospects for vaccine protection against HIV-1 infection and AIDS. Annu. Rev. Immunol. 20:73-99, 2002.
6. Robinson HL. New hope for an AIDS vaccine. Nature Reviews Immunology 2:239-250, 2002.
7. Mascola JR. and Nabel GJ. Vaccines for the prevention of HIV-1 disease. Current Opinion in Immunology 13:489-495, 2001.
8. Fouts T., Godfrey K., Bobb K. et al. Crosslinked HIV-1 envelope-CD4 receptor complexes elicit broadly cross-reactive neutralizing antibodies in rhesus macaques. Proc. Natl. Acad. Sci. 99:11842-11847, 2002.
9. Wang CY., Shen M., Tam G. et al. Synthetic AIDS vaccine by targeting HIV receptor. Vaccine 21:89-97, 2002.
10. Benson J., Chougnet C., Robert-Guroff M., Montefiori D., Markham P., Shearer G., Gallo RC., Cranage M., Paoletti E., Limbach K., Venzon D., Tartaglia J., Franchini G. Recombinant vaccine-induced protection against the highly pathogenic simian immunodeficiency virus SIV(mac251): dependence on route of challenge exposure. J. Virol. 72:4170-4182, 1998.
11. Amara RR. et al. Control of a mucosal challenge and prevention of AIDS by a multiprotein DNA/MVA vaccine. Science 292:69-74, 2001.
12. VaxGen, Inc. Investor Relations, http://www.vaxgen.com
13. IAVI Database of Preventive AIDS Vaccines in Human Trials. http://www.iavi.org/trialsdb/basicsearchform.asp
PREPARING FOR PHASE I/II HIV VACCINE TRIALS IN SOUTH AFRICA AND PLANNING FOR PHASE III TRIALS
DR. EFTYHIA VARDAS
Specialist Clinical Virologist and Director, HIV AIDS Vaccine Division, Perinatal HIV Research Unit, University of the Witwatersrand, Chris Hani Baragwanath Hospital, Soweto, South Africa

ABSTRACT

The Perinatal HIV Research Unit (PHRU) recently established an HIV AIDS Vaccine Division (HAVD) in order to test multiple phase I/II HIV vaccine candidates as safely and as quickly as possible, in strict adherence to the International Conference on Harmonization Good Clinical Practice (ICH GCP) guidelines and the South African Department of Health GCP guidelines. The purpose of testing multiple phase I/II candidates is to identify quickly the most suitable vaccine candidates, with good safety profiles and promising immunogenicity, which must then be tested without delay in large-scale phase III efficacy studies in South Africa in order to expedite the availability of a licensed preventative HIV vaccine, an important aspect of controlling the HIV epidemic in this country. Conducting multiple phase I/II studies requires the participation of several hundred low-risk, HIV-negative, healthy volunteers, and many ethical issues need to be addressed in the process of achieving this. An important ethical issue is the potential exploitation of communities and previously disadvantaged groups in South Africa participating in medical research; it is therefore essential to establish mechanisms that ensure transparent researcher-community-research participant interactions and provide a fully informed, low-risk, HIV-negative cohort willing to be enrolled in specific phase I/II HIV vaccine trials. Potential participants and communities that are fully informed, in a transparent manner, about the risks and benefits of participating in HIV vaccine trials can make their own decisions about trial participation without fear of undue coercion and exploitation by researchers and vaccine manufacturers. A specific "prescreening protocol" was designed by PHRU-HAVD to channel willing HIV-negative clients from a free adult voluntary counseling and testing (VCT) service into a structured programme; the protocol is not HIV vaccine trial-specific but is aimed at accumulating a cohort of potential volunteers for specific phase I/II trials as they become available. This recruitment protocol ensures that all potential HIV vaccine trial participants have the opportunity to lower their risk of acquiring HIV and other sexually transmitted diseases, based on the advice received during risk-reduction counseling sessions. In preparing for phase III efficacy testing of promising HIV vaccine candidates, ethical issues have arisen concerning the potential cohorts that may be used and how efficacy will be established. Of great ethical importance are the proposed use of occupational cohorts, such as gold miners, who have historically been exploited; the participation of adolescents, the primary target group for an HIV vaccine, and eventually also of younger children; and the equal representation of men and women and of individuals of all races in HIV vaccine research.
BACKGROUND

The Joint United Nations Programme on HIV/AIDS (UNAIDS) estimates that more than 60 million adults and children were living with HIV/AIDS in 2002, and that about 20 million people have already died of AIDS (1). Sub-Saharan Africa is most severely affected, and within the Southern African Development Community (SADC) region there are approximately 24 million people living with HIV/AIDS. More specifically, South Africa carries a significant burden of HIV infection, with approximately 4.5 million people, 11.4% of all South Africans, living with HIV/AIDS, and with higher prevalence rates in young women reflecting their increased vulnerability to HIV infection (2). In South Africa there is no access within the public sector to effective and life-saving antiretroviral drugs for infected individuals, and despite ongoing preventative measures, such as intensive HIV education programmes and campaigns and freely available condoms, there appears to be little impact in decreasing the number of infected individuals. Therefore, in the face of few affordable and effective long-term options to control the growing burden of HIV infection, a safe and effective preventative HIV vaccine remains an attractive goal for South Africa. However, the time-lines for the development and availability of an effective HIV/AIDS vaccine for Africa remain unclear and distant, even though, globally, over 80 candidate HIV vaccines of various designs, depending on different modes of inducing immune responses in recipients, containing different adjuvants and with different inserts, are currently being tested in phase I and II trials (3). South African researchers have yet to test a candidate HIV vaccine in phase I, even though the infrastructure and human resources required for these trials have been established. There are many different reasons for this delay in testing HIV vaccines in Africa; however, the longer these delays remain unresolved, the longer it will take for an effective HIV vaccine to be found, and the more people will die from AIDS.

The simplest issue to address has been building up the relatively undeveloped clinical and laboratory infrastructure necessary to conduct highly stringent Food and Drug Administration (FDA)-level phase I/II HIV vaccine trials. In the past two years in South Africa, at least three potential phase I/II HIV vaccine testing sites have been developed with donor funding within already existing research structures in three provinces (Gauteng, KwaZulu-Natal and Western Cape) (4). The placement of the sites was very dependent on whether research experience and infrastructure, including clinical and laboratory capability, already existed at the sites. The greatest barriers that remain for HIV vaccine research are the extraordinary and inter-linked community, ethical, political, regulatory and scientific challenges associated with phase I/II and phase III HIV vaccine trials. Many of these challenges are the same for all HIV vaccine research, whether early safety and immunogenicity trials involving hundreds of healthy, low-risk individuals or large-scale, community-based trials with high-risk individuals; however, important differences will be highlighted and discussed separately. A critical aspect necessary for successfully completing HIV vaccine work, and ultimately achieving an effective HIV vaccine for South Africa, is the recruitment of potential volunteers prepared to participate in this type of research.
HIV vaccine trials can only be conducted in communities that thoroughly understand the issues around HIV vaccine research and feel empowered to become actively involved in all aspects of this research. Forming, strengthening and maintaining relationships between HIV vaccine researchers and the communities they work in are essential to ensure true community participation in future HIV vaccine trials (5). Communities are no longer seen as mere sources of volunteers, but rather as equal partners that can contribute to successful HIV vaccine research design and implementation. True community participation in research involves the creation of democratic systems that enable affected communities to become full collaborative partners in the research process. An open dialogue and the formation of a formal, democratically elected and representative "mediating" structure, such as a Community Advisory Board (CAB), that functions as a conduit between the researchers and the communities, is a very effective way to ensure that communities and individual research participants are fully informed when making the decision to participate in HIV vaccine research.

In Soweto, initial activities to inform the community about HIV vaccines, and about the types of trials required to test them and prove their efficacy, were carried out through a series of structured community outreach workshops of approximately 60-80 representatives of different governmental and non-governmental organisations based in Soweto. Initially four community-based HIV workshops were held in late 2001; these events are now held annually to further increase the community's understanding of HIV vaccine research. The first community workshop series was also used to help identify and establish a specific "vaccine-orientated" CAB, and at the last workshop in 2001 a democratic election installed 21 community members, representing the youth, the military and correctional services, traditional healers, church groups, health care workers and teachers, as the CAB for a term of two years. It is envisaged that as the two-year term of the first CAB comes to an end, at the end of 2003, a similar format will be used to elect new CAB members. An important lesson learnt from this HIV vaccine workshop information strategy was the clear interest in, and need for, disseminating information regarding HIV vaccines in Soweto.

HIV vaccine research has been at the forefront of transforming the conventional researcher-study participant and community relationship in South Africa. The Soweto HIV Vaccine CAB has been directly involved in the key debates in developing country policy and approaches to phase I/II HIV vaccine trials. Many of these are key issues surrounding the benefits (undue coercion, either financial or through access to antiretroviral drugs) and risks (inter-current HIV infection and other serious side-effects) for individuals participating in HIV vaccine trials. Of particular concern to the CAB has been the delay and apparent slowness imposed on the research by the Medicines Control Council (MCC) and the related regulatory process.

The next important challenge was to establish a recruitment strategy in order to be able to accumulate, recruit and maintain well-informed, suitable HIV-negative volunteers from the community and guarantee their follow-up over reasonable periods of time for phase I/II HIV vaccine trials. Previously, exploitation of communities that provided most research participants occurred, especially when illiterate communities were approached to join research activities without true understanding of the consequences of this participation.
Therefore a specific, innovative strategy had to be designed and employed to identify well-informed, suitable volunteers for HIV vaccine trials in Soweto. At HAVD this took the form of specifically designing, funding and implementing an investigator-driven study called the "Prescreening Protocol". The logical entry point for HIV-negative potential volunteers into the "prescreening protocol", as in other African countries conducting HIV vaccine research, is from adult voluntary counseling and testing (VCT) centres (6). Previously, VCT services provided by the PHRU were targeted at pregnant women attending antenatal clinics, as part of research to prevent mother-to-child transmission of HIV with the use of antiretroviral therapy. It was therefore necessary to create a separate VCT service to allow men and women equal access to free HIV counseling and testing, so an adult VCT service linked to the existing HIV vaccine clinic was established in February 2002. Posters and flyers advertising the service were distributed at bus and taxi ranks near the hospital, and a large mural was painted on a wall outside the hospital compound. It took some time for the service to become known, and initially on average only about 58 clients accessed the free VCT per month. In 2002, from February to December, a total of 643 clients were seen. By early 2003, news of the service had spread and the average number of clients attending had increased to 131 per month, so that from January to the end of May a total of 655 clients had already been seen. The demographics of clients accessing the VCT were also very interesting: mainly young men between the ages of 25 and 35 years, resident in Soweto, used the service. The HIV prevalence at the VCT clinic, which obviously does not reflect the community prevalence, is 40%.

All individuals found HIV-positive during VCT are referred to the PHRU support groups and "Wellness Clinic", which, although it does not provide antiretrovirals, can provide basic health care, screening for TB and antibiotics for the prevention of opportunistic infections. All HIV-negative individuals are offered the chance to join the "prescreening protocol". It is very carefully explained that this is not a vaccine trial, but a protocol to inform people, screen them and make sure that they are ready physically and mentally to enter an HIV vaccine trial if they eventually so wish. On average about 27 HIV-negative individuals agree to join the prescreening protocol per month. These people are given appointment dates to return for the information visits of the protocol, called "vaccine discussion groups" (VDGs).

Through the prescreening protocol, a prospective cohort study is conducted of HIV-negative, healthy, well-informed, low-risk persons from South Africa who are eventually prepared to enter a specific phase I/II HIV vaccine trial. The protocol is currently allowed to progress over a 36- to 48-month accrual period. Each participant must attend for a minimum of two months of follow-up that includes at least two clinical and two non-clinical visits. Full participation in the cohort includes up to 14 different contacts with the research facility: 6 clinical and 8 non-clinical (VDG) visits. At the start of the protocol, participants complete an assessment of understanding questionnaire (AOUQ) and a lifestyle questionnaire. Results of the lifestyle questionnaire are used to give tailor-made risk-reduction counseling to each individual. The information given to participants during the VDGs is delivered in a "support-group"-like structure, and a fixed curriculum covering the basics of HIV and HIV vaccines, the risks and benefits of participating in this research, the participants' bill of rights and the specifics of the HIV trials is covered during these sessions.
At the end of the 8 VDG sessions another AOUQ is administered to assess the current level of understanding of the potential participant. Participants must score 100% to be allowed to enroll in an HIV vaccine trial. During the prescreening protocol clinical visits, individuals are screened for HIV, other sexually transmitted diseases (hepatitis B, syphilis, gonorrhoea, chlamydia), pregnancy and indicators of general health (liver function, full blood count and kidney function).
Ultimately only healthy, low-risk, HIV-negative volunteers can be enrolled in HIV vaccine trials, so participant review from the prescreening protocol occurs every 2-3 months, and individuals can be appropriately referred out of the protocol if necessary. From December 2001, when the first participants started to enter the prescreening protocol, until the end of June 2003, 240 people have been recruited, of whom 179 have completed the protocol (a 74.5% retention rate) and 122, i.e. 68% of completers, are eligible for vaccine trial participation. The attrition from VCT to potential suitable participants for HIV vaccine trials is considerable, with only 50% of individuals identified as HIV-negative agreeing to be part of the prescreening protocol, and eventually only two thirds of these individuals actually continuing with the process and being found suitable to enter a vaccine trial, if they still want to. The attrition from eligible to actual participants is still not known and will become apparent as enrollment for HIV vaccine trials begins in September 2003.

It is essential for South Africa to start planning and to develop the appropriate capability to test, in phase III trials, future preventative HIV vaccines that have gone through preliminary phase I and II testing. Currently there is only one established phase III testing site in South Africa (the MRC-Africa Centre-University of Natal Hlabisa site). However, microbicide research has already started in this community, thus limiting the eligible cohorts for phase III vaccine efficacy trials. Many more sites will therefore be required if South Africa is to be well prepared to conduct large-scale efficacy trials with favourable vaccine candidates as efficiently, quickly and safely as possible. Planning for phase III trials is an essential part of the national HIV vaccine plan and will require collaboration within the Southern Africa region between multiple study sites in order to achieve the large numbers of volunteers necessary for the large-scale phase III studies needed to prove vaccine efficacy. South Africa offers several advantages that will enable phase III studies to be done quickly and efficiently, including higher HIV incidence rates in parts of the country, which will greatly facilitate trials to evaluate the protective efficacy of HIV vaccines, and access to a variety of cohorts, especially if it is necessary to evaluate vaccine efficacy against different routes of transmission (heterosexual and perinatal). Expanding the network of potential phase III sites has already started: formal networks of demographic surveillance system (DSS) sites located in Africa and Asia, known as the INDEPTH Network (International Network for the Demographic Evaluation of Populations and Their Health), are joining up to conduct future vaccine and related clinical and community trials. This system will increase phase III capability in many developing countries of the world, especially in Africa, where the majority of the INDEPTH sites are based.

South Africa has also started to explore the various ethical challenges of conducting phase III HIV vaccine trials. The constant question from CAB and community members is how HIV vaccine efficacy will be proven, and much effort is needed to dispel the misconception that exists within these communities that people will be exposed to HIV on purpose.
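As a check on the recruitment figures above (a worked calculation, not from the original), note that the two percentages use different denominators:

\[
\frac{179}{240} \approx 0.746 \;(74.5\%\ \text{retention}), \qquad
\frac{122}{179} \approx 0.682 \;(\approx 68\%\ \text{of completers eligible}),
\]

so end-to-end, roughly half of those recruited (122/240, about 51%) end up eligible for a vaccine trial.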
Other phase III initiatives using occupational cohorts based at gold mines in South Africa have also started, and the use of occupational cohorts, especially in mines, which historically have been associated with exploitation of workers, is an important ethical and moral challenge. Finally, there has been considerable debate in South Africa about the use of adolescents in phase III trials. Those arguing for the use of adolescents in HIV vaccine trials base their arguments on the fact that this is the group that will, eventually, be the prime target of an HIV vaccine and that they are physiologically different from adults. Those arguing against the use of adolescents in phase III trials contend that there are no physiological differences, that there are many problems with adolescents consenting to participate in HIV vaccine research, and that there is no precedent with other viral vaccines to show that adolescents respond differently to them.

As South Africans embark on testing HIV-preventative vaccines in phase III, there are still many important issues and challenges that must not be forgotten. The issue of "therapeutic" HIV vaccination is very important in a country with 4.5 million infected people and must be pursued. HIV vaccine access, once licensure has been granted, must be ensured in the countries that most require it. The tragedies of expensive and inaccessible antiretroviral drugs in the developing countries of the world with the most need must not be repeated.

REFERENCES
1. UNAIDS 2002. Report on the global HIV/AIDS epidemic. Geneva: UNAIDS, 2002.
2. Human Sciences Research Council 2002. Nelson Mandela/HSRC Study of HIV/AIDS. South African National Behavioural Risks and Mass Media, Household Survey 2002. Cape Town: Human Sciences Research Council, 2002.
3. HVTN 2003. http://www.hvtn.org
4. Vardas E, Slack C. Getting HIV vaccine trials started in Soweto. Continuing Medical Education, September 2002, 20(9):593-594.
5. Vardas E, Mogale M, Maketla M, Mafokhola A, Mntambo M, Gray G. The HIV vaccine community outreach programme and the establishment of a dedicated vaccine community advisory board (CAB) in Soweto, South Africa. International AIDS Conference, Poster ThPeD7712, Barcelona, 2002.
6. Kaleebu P. Experiences and results of Uganda's first HIV vaccine trial and preparations for the second. Retroviruses of Human AIDS and Related Animal Viruses, XIIIth Cent Gardes Symposium, Paris, Nov 2002.
9. WATER CONFLICTS
THE POLITICS OF WATER

FARHANG MEHR
Boston University, Boston, USA

The scarcity of water at a global level is becoming a planetary emergency. Population growth, urbanization, intensive cropping, the diversion of traditionally shared waters, and the drainage of and construction of dams on shared rivers by upstream countries have all contributed to the unfair distribution of water and thus increased political tensions in many regions. Forty percent of the world's population faces a water shortage. Currently, the World Bank lists 102 countries as having a water shortage crisis; of these, 22 are listed as severe, the remainder as serious. The crisis exists because of a gradual increase in the demand for water. Eighty-five percent of the water is allocated for agricultural and industrial usage. The consumption of water in the developed world is ten times that consumed in the developing world. Compounded by the resulting famine and hygiene-related illnesses, the cost of the crisis is further magnified in the poor countries. To date no international mediator exists to regulate a fair and equitable division of water, nor do adequate international laws exist regulating the equitable division of rivers by adjacent countries.

Aside from the questions of boundaries and the right of navigation, the issue of control and disposition of the waters of international rivers is closely tied to the water shortage crisis. An international river is a watercourse that runs through more than one territorial jurisdiction, or forms the boundary between two or more states. In most cases, discord results when both the upper and lower riparian states of an international river claim full control and absolute jurisdiction over the disposal of water. Two alternatives exist for resolving this conflict of interest: (1) subject the control and disposal of the water to a community of the riparian states or countries through which the river flows, or (2) create an international regime which provides the laws and the effective means for their enforcement. It is in the Middle East and Northern Africa that the implementation of these alternatives could be most effective.

The water crisis in the Middle East is alarming. Nearly every leader in this region has warned that solving the water problem is essential to the tenuous peace in the Middle East. In a region already marred by numerous territorial disputes, armed confrontations, political differences and religious enmity, their concern that a water shortage will spark more frequent and escalated clashes is well founded. The present conflict over control of the Nile River is one example.

The Nile originates from two sources: the Blue Nile flows from Lake Tana in Ethiopia, while the White Nile flows from Lake Victoria in Uganda. The White Nile is 5,584 km long; the Blue Nile flows a distance of 1,529 km from its source in Lake Tana. The two rivers join at Khartoum, Sudan, and then flow on to Egypt, which is exceptionally arid. Approximately 85% of the water that Egypt consumes annually, for agriculture and otherwise, originates from the Blue Nile, while the remainder comes from the White Nile. Concern for the free flow of the Nile has therefore shaped Egypt's policies towards its neighbors. In fact, Boutros Boutros-Ghali, when serving as Egypt's foreign minister, said, "The national security of Egypt is in the hands of eight other African countries in the Nile basin." The extent of this precarious position has become apparent with Ethiopia's and Sudan's plans to build hydroelectric dams on the Nile that would affect the flow of water to Egypt.

Within Ethiopia, the Blue Nile is 960 km long and has an annual flow of approximately 55 billion cubic meters, constituting the major portion of the Nile. In fact, over an entire year, about 86% of the Nile's waters originate from Ethiopia; in contrast, the White Nile contributes only 14%. While Egypt, Ethiopia and Sudan recognize the international character of the Blue Nile, they have not come to an agreement on the equitable use of its water.

The control and disposal of the waters of the Blue Nile between Egypt and Ethiopia has been an ongoing issue since 1902. Acting on behalf of Egypt, Great Britain signed a treaty with Ethiopia in 1902 whereby the latter agreed to refrain from constructing any device that would obstruct the flow of waters into the Nile without the prior consent of Britain and of Egypt's southern neighbor, Sudan. However, the Anglo-Ethiopian Treaty was never ratified by the British Parliament or by the Ethiopian Crown Council. Since 1956, Ethiopia has claimed that its plans to build dams on the Nile are not in violation of any outstanding international treaties. The Egyptian government, arguing otherwise, most recently stated in 1991 that it was ready to use force to protect its access to the Nile.

Over the years, the Ethiopian government has argued that the 1929 Anglo-Egyptian Agreement has no legal basis to control its actions. The 1929 Agreement stipulated that "no irrigation or power works or measures are to be constructed or taken on the River Nile or its tributaries or on lakes from which it flows in so far as all these are in the Sudan or in countries under British administration, which would entail prejudice to the interest of Egypt." According to Ethiopia, the Agreement, although limiting the actions of the Sudan and the British colonies, does not prevent Ethiopia, which was never subject to "British administration", from building structures on the Nile. Furthermore, even if Ethiopia were subject to the agreement, it could declare the agreement void, since it preserves and promotes Egypt's rights and interests without providing any reciprocal consideration. In addition, Ethiopia has argued that it has a right to exploit the waters in its own territory for development purposes, much as Egypt has done with the building of the High Aswan Dam.

Ethiopia chose not to recognize the Agreement of 1959 between Egypt and Sudan on the division of the Nile. The Agreement gave Egypt 75% of the waters of the Nile (55.5 billion cubic meters) and 25% to the Sudan (18.5 billion cubic meters). In 1959, the Sudan agreed to the building of the Aswan Dam by Egypt, and Egypt agreed to the building of a dam by the Sudan on the Nile. The 1959 treaty stated that Sudan and Egypt had "acquired rights" in the Nile water and refers to them as having "full control of the river." The High Aswan Dam was completed in 1971, despite Ethiopia's protests. It enabled Egypt to significantly increase its crop production and agricultural yield. The High Aswan Dam also enabled Egypt to use far more of the annual flow of the Nile (approximately 80 cubic kilometers) than any of the other eight nations along the river: Sudan, Ethiopia, Uganda, Tanzania, Rwanda, Burundi, Kenya and the Congo.
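The 1959 allocation figures are internally consistent (a worked check, not part of the treaty text; the percentages apply to the roughly 74 billion cubic meters of Nile water apportioned under the agreement):

\[
\frac{55.5}{55.5 + 18.5} = \frac{55.5}{74.0} = 0.75, \qquad \frac{18.5}{74.0} = 0.25 .
\]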
Since as early as 1927, and again in the mid-fifties, Ethiopia has explored the possibility of building a dam on Lake Tana, a project that has taken on a greater urgency in recent years.
Egypt, Ethiopia and Sudan all face rapid population growth and a projected increase in their need for water. The population of Egypt, which grows by more than one million annually, has been projected to reach 85 million by the year 2015. Projected water deficits threaten Egypt's agriculture and industry. The growth of Islamic radicalism in Egypt and Sudan makes a water shortage in either country a further threat to peace and security in the region. Ethiopia now has a population nearly the size of Egypt's. It is predicted that by 2025, Ethiopia's population could be 112 million, double its current level. Also, the droughts of the 1970s and 1980s that repeatedly struck Ethiopia, causing great loss of life and considerable poverty, have emphasized the need for Ethiopia to take remedial measures and ensure that it is able to feed its population.

Ethiopia began the first phase of building its $300 million Tana Beles project in 1988. The project aimed at doubling Ethiopia's hydroelectric power and providing for irrigation through a plan to take water from Lake Tana to the Beles River, over which five dams were to be built, and to resettle approximately 200,000 farmers. However, Ethiopia was unable to obtain the required loan from the African Development Bank, some argue, because of Egypt's interference. In August 2002, Ethiopia began construction of a $224 million hydroelectric dam in a joint venture with the China National Water Resources Hydropower Engineering Corporation. The dam will be 185 meters high (10 meters higher than China's controversial Three Gorges Dam) and is expected to take five years to complete. The Tekeze Dam (built on the Tekeze River, a tributary of the Nile flowing into Eritrea) will help to irrigate large tracts of land in northern Ethiopia.

Sudan is also preparing to build a $1.73 billion dam on the Nile, in the country's northern region, in order to reduce flooding and to provide more electricity. The Hamdab Dam at Merowe, north of Khartoum, will triple electricity production in Sudan. The dam, which is expected to be completed in six years, will have a capacity of 1,250 megawatts. By supplementing the 250 MW generated by the smaller Roseires dam built by Sudan on the Nile, the Hamdab Dam will increase power production significantly.

In 1999, Egypt also began digging a canal, ironically called the Salam or Peace Canal, aimed at carrying 12.5 million cubic meters a day of fresh Nile water into Northern Sinai by traversing the Red Sea and the Suez Canal. The $1.4 billion project will enable Egypt to irrigate 400,000 acres of new farmland and open up a region currently populated by approximately 300,000 people to three million people. Ethiopia and Sudan have voiced their opposition to this project.

In the absence of a treaty, there is no rule of international law obligating a riparian state to obtain the consent of other riparian states before changing the course or quantity of water by building a dam or canal. Nor is there any law restricting an upstream riparian state from building dams or a hydroelectric power plant on a river to the detriment of the downstream riparian state. A legal body with enforcement machinery is needed to resolve these issues, monitor compliance with treaties and prevent countries from entering armed conflicts over water.
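The doubling projection for Ethiopia quoted above is consistent with simple exponential growth (a worked check, not from the original, assuming a growth rate of roughly 3% per year):

\[
T_{\text{double}} = \frac{\ln 2}{r} \approx \frac{0.693}{0.03} \approx 23\ \text{years},
\]

i.e., a population growing at about 3% a year doubles in roughly 23 years, which places the doubling of an early-2000s population at around 2025.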
Although the median line on the Hirmand River has never been disputed as part of the border between the riparian states, the respective rights of Iran and Afghanistan over the disposal of its water have been a matter of dispute for over 120 years. The Hirmand River originates in Afghanistan and empties into the Hamoon Lake, located in the Sistan region of Iran. The economy of Zabol, the capital of Sistan, depends solely on this lake and on the water of the Hirmand River. Prior to the formation of the independent state of Afghanistan, control of the distribution of the water of the Hirmand River was not a subject of dispute. In 1872, the British General Goldsmid, arbitrating the Iran-Afghanistan border, included the following in his award: "The parties should not resort to any act that would result in the disruption of the water presently used by the other side". The status quo was in essence endorsed. Estimated statistics indicate that two thirds of the water at that time was used by Iran. In 1909, G.P. Tate, a British officer, reported that the area of the Hamoon Lake was 1,500 square miles and that the water input by the Hirmand River at its high was no less than 20,000 cubic feet per second. In 1905, the British Colonel Henry McMahon, following a dispute over water in 1903, changed General Goldsmid's award of 1872. He ruled two thirds of the Hirmand's waters as belonging to Afghanistan, and the remaining third to Iran. Iran objected and asked for another arbitration, which never came to fruition. In a subsequent treaty, the two countries in 1939 agreed to the equal division of the waters. Between 1945 and 1949, Afghanistan constructed several dams and diversion canals on the Hirmand, thus reducing the amount of water entering the Hamoon Lake. In the midst of the Cold War, and concerned with maintaining stability in the politically strategic region, the United States in 1948 mediated a new agreement between the two countries. The agreement, although not formally accepted by either side, became the de facto agreement between them. Upon coming to power in the 1990s, the Taliban government of Afghanistan disregarded this de facto agreement. Four years of drought, in conjunction with an active Taliban policy of diverting the river's flow into Afghanistan, have dried up the Hamoon Lake, ruined an ecological habitat and resulted in mass population migration from the area.

The water crisis in the Middle East is not limited to Iran and Afghanistan. After the signing of the 1979 peace treaty with Israel, President Anwar Sadat of Egypt predicted that the next war, should it erupt, would be over water. King Hussein of Jordan and Prime Minister Yitzhak Rabin of Israel expressed similar concerns, with Rabin stating, "If we solve every other problem of the Middle East, but do not satisfactorily resolve the water problem, our region will explode. Peace will not be possible." A principal reason for Israel to prolong the occupation of the Palestinian lands, both the Gaza Strip and the West Bank, is linked to the use of underground water in the area and control of the Jordan River. The main water sources for Israel, Palestine, Jordan and Syria are the Jordan River, the Yarmuk (a tributary of the Jordan River) and, indirectly, the Euphrates, as well as underground sources. The salty water of the Sea of Galilee is unsuitable for agriculture. The Jordan and Yarmuk rivers supply 40% of Israel's fresh water, 70% of which is used for agriculture. Eighty percent of the renewable water resources of the mountain aquifers in that region are exploited by Israel. The controversial truce between the Arab states and Israel, after the creation of the latter in 1948, did not create any arrangement for the use of shared waters in the region. Numerous confrontations arose between Israel and her neighbors.
In response to the 1951 Syrian-Israeli border clashes over water, the Eisenhower administration proposed a plan for an equitable distribution of the region's water resources. The Arab states rejected the plan, arguing that Israel should not be entitled to 33% of the flow of the Jordan River when only 23% of it originates in Israel.
In 1956, Israel adopted the National Water Carrier Project, or Jordan Project. Completed in 1964, the project linked the regional water projects throughout Israel and transported water from Lake Kinneret in the north to the dry Mitzpe Ramon in the south. With tensions between Israel and the Arab states mounting, the Arab states decided to divert the Jordan River's water at the source, thus depriving Israel of the river's water. The Israeli air raids on Syria came in April 1967, followed by the Six Day War two months later. The conflicts were based on territorial and water claims, a fact that was confirmed by Ariel Sharon, who has said: "People generally regard June 5, 1967, as the day the Six Day War began. That is the official date, but in reality it started two and a half years earlier, on the day Israel decided to act against the diversion of the Jordan River."

This conflict played out among the Arab states as well. In 1975, Syria and Iraq nearly went to war over the Euphrates River. The building of the Keban Dam in 1965 in southern Anatolia by Turkey, and of the Tabqa Dam by Syria in the mid-1960s and early 1970s, significantly increased the tensions in the region. Only half of Syria's land is arable, and only about 30% of that half is under cultivation. Half of Syria's water is supplied by the Euphrates and the rest by other rivers and underground resources. Although bilateral and tripartite meetings had taken place among the three riparian states since the mid-1960s, with occasional Soviet involvement, no formal agreements had been reached by the time the dams were filled. Although Syria in 1974 agreed to an Iraqi request to allow an additional flow of water from the Tabqa Dam, by 1975 the amount of the water flow was in dispute and Iraq asked the Arab League to intervene. A technical committee was set up by the Arab League to mediate the conflict, but Syria, after defending its actions, pulled out of the meetings and closed its airspace to Iraqi flights. Both Iraq and Syria were reported to have transferred troops to their mutual border. Through Saudi mediation, the parties were able to resolve their conflict and, on June 3, 1975, entered into an agreement that averted the impending war. The terms of the agreement were not made public, although it has been written that under the agreement Syria was allowed to keep 40% of the flow of the Euphrates within its borders while the rest was allowed to flow through to Iraq.

The Southeast Anatolia Development Project, a massive plan to build 21 dams and 19 hydroelectric plants on both the Tigris and the Euphrates rivers, has again increased tensions between Syria, Iraq and Turkey. If completed as planned, the project could significantly reduce downstream water quantity and quality. In 1980, Iraq and Turkey established a Protocol of the Joint Economic Committee to address the issues. In 1983, Syria joined the discussions, but the meetings of the group have been intermittent and of limited success. Turkey and Syria reportedly signed an agreement in 1987, but whether the agreement addressed Syria's needs is not clear. Meetings between the three countries in the 1990s have similarly not resolved the potential water crisis.

Like Syria, Jordan also faces a looming water crisis. Jordan shares both the Jordan and Yarmuk rivers with Israel and Syria. The Zarqa River is the only river exclusively supplying Jordan.
At present, several Jordanian cities have rationed their water supply and use drip irrigation as a means of avoiding water shortages, in an attempt to avert a catastrophe as the region's population growth continues. How, then, can the water crisis in the Middle East and North Africa be solved?
This escalating shortage of water can be met by making more fresh water available per capita. This can be achieved by educating people on the importance of preventing water waste and through population control. As controversial policies in Peru and China demonstrate, population control can only be achieved through the voluntary participation of the people involved; any policies tied to quotas will only breed hostility towards those implementing the policy. Similarly, water conservation requires changes in the use of water at the corporate, governmental and individual levels. Programs committed to recycling used water, including wastewater treatment plants, should be pursued. The dearth of media programs on the current water crisis is surprising. By initiating programs aimed at educating the public about the current shortages and the importance of water preservation, governments can begin to spread awareness of the issue.

There are countries in the Middle East that do not face an immediate water crisis. However, with the adoption of erroneous policies they risk a water shortage in the near future. Turkey, a water-rich nation by Middle Eastern standards, may soon face a political crisis if some of her over-ambitious projects are not abandoned. Turkey has extensive plans to construct dams to increase its water resources and hydroelectric power supplies. The Anatolia project, for example, will result in the construction of 22 dams on the Tigris and Euphrates rivers, to the detriment of both Syria and Iraq. To raise revenues for this project, Turkey is selling water from the Manavgat River to countries across the region, including Israel. While ambitious, Turkey's plans will most likely yield little net gain in water for Turkey itself, while creating tensions with its neighbors Syria and Iraq. Iran, with similar plans, is selling water to Kuwait. Without evidence that such plans can succeed in the long run, it is dangerous for countries with marginal water surpluses to adopt them.

The improvement of desalination technology is promising as a means to meet the challenge posed by the water crisis. Although the technology was pioneered by the United States, Saudi Arabia, Japan, Israel and Germany are engaged in serious and persistent research and have contributed significantly to making the process less expensive and therefore more practical as a solution to the water crisis. The experience of Saudi Arabia with desalination is worth noting. Extremely arid, Saudi Arabia's terrain is 92% desert, with low rainfall, depleted deep underground aquifers, almost no rivers and a fast-growing population. Saudi Arabia has overcome these unfavorable natural disadvantages thanks to the foresight of its leaders and its substantial oil revenues. Through desalination plants, Saudi Arabia currently produces over 800 million gallons of fresh water per day, which amounts to 20% of its total consumption, the remainder being obtained from underground aquifers. Once a net food importer, Saudi Arabia now exports food and stockpiles wheat. Saudi Arabia currently exports water to Egypt. At present there are about 11,000 desalination plants in 120 countries, more than half of them in the Middle East. Although the cost of desalination has decreased, it is still more expensive than fresh water. Also important is the reduction or elimination of the "greenhouse warming" phenomenon and of the sulfur dioxide from cars and factories that contributes to the pollution of water supplies.
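For scale (an implication of the desalination figures above, not a number given in the article): if 800 million gallons per day covers 20% of consumption, Saudi Arabia's total daily water consumption is about

\[
\frac{800\ \text{million gallons/day}}{0.20} = 4{,}000\ \text{million gallons/day} \approx 15\ \text{million m}^3/\text{day}.
\]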
It is particularly in this area that cooperation among different countries is important. This brings me to suggest the importance of creating a body, under the auspices of the United Nations, devoted to resolving the impending global water crisis. Given the politics of water, the importance of creating a legitimate, non-governmental organization devoted to finding equitable resolutions to issues arising from water-related conflicts cannot be overemphasized. This includes resolving conflicts with regard to existing treaties, amending them to ensure that they are equitable and just, and enforcing treaties that are being violated before armed conflicts break out. The organization could also devise means to finally resolve long-standing conflicts where treaties have either never been adopted or are contested by a party. By facilitating discussions and negotiations leading to agreements for the shared use of rivers by riparian countries, this organization could find the means of resolving long-standing disputes. Such an organization could ensure that treaties related to the control and utilization of shared waters proceed on the basis of mutuality, reciprocal obligations, equitable apportionment of the waters and a benefit system aimed at implementing what is just and reasonable. In determining what is just and reasonable, the following factors should be taken into account: historical background; population; the development needs of the country; the extent of dependency of each state on the waters in question; the economic and social gains accruing from alternative uses of the waters in question to the respective state and the entire region; and the physical and climatic conditions and the repercussions of a given decision. Finally, such an organization could devote resources to gathering existing research on water-related issues, including hydrological studies, and to finding efficient means of sharing such knowledge among countries facing similar crises.

Only by recognizing that the current water crisis is global can we begin to address it. Some important steps have been taken in this direction. In 1997, with the support of the World Bank and the IUCN (the World Conservation Union), a World Commission on Dams was formed to evaluate the effectiveness of large dams for developmental purposes and to suggest criteria and standards for analyzing them. Upon publishing its report in November 2000, the Commission disbanded. The United Nations Environment Programme has begun a Dams and Development Project (DDP) with the stated mission of improving decision-making, planning and management of dams and their alternatives. Egypt, Ethiopia and Sudan have agreed to design a project that will enable them to jointly utilize the Tekeze, Baro, Akobo and Nile rivers effectively and equitably. The success of the committee established to formulate this project will be important for the stability of the region. Similarly, the Nile basin countries have set up a Nile Basin Initiative Secretariat at Entebbe, Uganda, to enable them to unite in the pursuit of sustainable development and management of the Nile.

Despite these positive developments, the current policies and ongoing trends in population growth and water consumption make a global water crisis unavoidable. It is my hope that, by recognizing the crisis and committing to its resolution, governments will find a means to ensure that the crisis does not become a catastrophe.
REFERENCES
1. D.P. O'Connell, "International Law", Vol. 1, second edition, London, 1970.
2. Paul Simon, "Tapped Out: The Coming World Crisis in Water and What We Can Do About It", New York, 2001.
3. Pirouz Mojtahed-Zadeh, "Evolution of Eastern Iranian Boundaries", Ph.D. Thesis, University of London, 1993.
4. G.P. Tate, "The Frontier of Baluchistan: Travels on the Border of Persia and Afghanistan", London, 1909.
5. Daniel Kendie, "Egypt and the Hydro-Politics of the Blue Nile River", Academic Forum.
6. J.A. Allan, "The Nile: Sharing a Scarce Resource", Cambridge University Press, 1996.
Also news articles from The New York Times, The Washington Post and the BBC.
THE UTILIZATION OF WATER RESOURCES FOR AGRICULTURE IN SYRIA: ANALYSIS OF THE CURRENT SITUATION AND FUTURE CHALLENGES

M. SALMAN
IPTRID, AGL, FAO, Rome, Italy

W. MUALLA
Faculty of Civil Engineering, University of Damascus, Damascus, Syria

INTRODUCTION

The countries of the Middle East are characterized by large temporal and spatial variations in precipitation and by limited surface and groundwater resources. The rapid growth and development in the region have led to mounting pressures on scarce resources to satisfy water demands. The dwindling availability of water to meet development needs has become a significant regional issue, especially as a number of countries are facing serious water deficits (ESCWA, 1998). For Syria, located along the Mediterranean shores of the Middle East, water is becoming progressively scarcer as demand approaches or even surpasses available resources (Varela-Ortega and Sagardoy, 2002). Syria had a population of 18 million in 2002, and its Total Renewable Water Resources (TRWR) are estimated to be around 16 BCM per year. In other words, the per capita TRWR is less than the water scarcity index (1000 m3/person/year). Although this would still rank Syria amongst countries with moderate water stress, it will soon be classified as a country with severe water stress if its population continues to grow at its current rate (about 3%) and water use efficiency is not effectively increased (Mualla and Salman, 2002). In Syria, and until fairly recently, emphasis has been put on the supply side of water development. Demand management and improvement of patterns of water use have received less attention. Water managers and planners have given high priority to locating, developing and managing new water resources. The aim was always to augment the national water budget with new water. The most popular way of achieving this aim was to control surface flows by building new dams and creating multi-purpose reservoirs (there are now around 160 dams in Syria with a total capacity of 14 BCM). Irrigation schemes were also built and agricultural activities were greatly expanded to achieve self-sufficiency in essential food products and food security. Over the years, however, the most attractive alternatives for the development of water resources infrastructure have already been implemented, and it is hard to think of feasible alternatives for a further increase in supply. In addition, developing less accessible water is costly and time-consuming. Therefore, the emphasis must now be shifted and a new vision of the utilization of water resources for agriculture must be developed. This paper provides a brief background on water supply and use in Syria. It describes the pressure on water resources for agriculture, analyses key issues and constraints facing this sector, and proposes a set of recommendations for efficient utilization.
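Before turning to the details, a quick check of the scarcity-index arithmetic quoted above (a minimal sketch using only the figures given in the text):

```python
import math

# Per-capita renewable water from the figures quoted in the text.
trwr = 16e9          # total renewable water resources, m3/year (~16 BCM)
pop_2002 = 18e6      # population in 2002
per_capita = trwr / pop_2002
print(f"{per_capita:.0f} m3/person/year")  # ~889, below the 1000 m3 scarcity index

# At ~3% annual growth the population doubles in roughly two decades,
# halving the per-capita figure if resources stay fixed:
print(f"doubling time ~{math.log(2)/math.log(1.03):.0f} years")  # ~23
```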
WATER SUPPLY AND USE IN SYRIA

In Syria, the total estimated water use is about 15 billion m3. The Euphrates and Orontes basins account for about 50% and 20% of the water use, respectively. Table 1 shows water availability and use in the various basins of Syria. As shown in this table, the water balance in most basins has been in deficit (except in the coastal basin and the Euphrates basin). This will be exacerbated further, especially in those basins encompassing large urban areas such as Damascus and Aleppo.
Table 1: Water availability and use (adapted from the 2001 World Bank Report). All figures in million m3/year.

Basin       | Irrigation | Domestic | Industrial | Total Use | Resources | Balance
Yarmouk     |      360   |     70   |      10    |     440   |     500   |     60
Aleppo      |      780   |    280   |      90    |    1150   |     500   |   -650
Orontes     |     2230   |    320   |     270    |    2730   |    3900   |   1170
Barada/Awaj |      920   |    390   |      40    |    1350   |     900   |   -450
Coastal     |      960   |    120   |      40    |    1120   |    3000   |   1180
Steppe      |      340   |     40   |      10    |     390   |     700   |    310
Euphrates   |     7160   |    250   |     110    |    7520   |    N.A.   |   N.A.
Total       |    12750   |   1390   |     570    |   14700   |      -    |     -
% Share     |      87%   |     9%   |      4%    |    100%   |           |
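As a minimal sketch (using only the Table 1 figures; small discrepancies with the printed totals reflect rounding in the source), the basin balances can be recomputed directly:

```python
# Recompute the basin water balances from Table 1
# (all values in million m3/year; None where the source gives N.A.).
basins = {
    # basin:        (irrigation, domestic, industrial, resources)
    "Yarmouk":      (360,   70,  10,  500),
    "Aleppo":       (780,  280,  90,  500),
    "Orontes":      (2230, 320, 270, 3900),
    "Barada/Awaj":  (920,  390,  40,  900),
    "Coastal":      (960,  120,  40, 3000),
    "Steppe":       (340,   40,  10,  700),
    "Euphrates":    (7160, 250, 110, None),
}

for name, (irr, dom, ind, res) in basins.items():
    total_use = irr + dom + ind
    balance = None if res is None else res - total_use
    print(f"{name:12s} use={total_use:5d}  balance={balance}")
```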
Agriculture is the largest water-consuming sector in Syria, accounting for about 87% of water use. Domestic and industrial water use stand at about 9% and 4%, respectively. While urban water demands are rapidly increasing due to the strong population growth rate (about 3% per annum) and industrial growth, new water sources are becoming scarce and extremely expensive to develop. Water deficits are expected to worsen, placing additional stress on all uses. Since drinking water needs are given top priority in the government's policy, water availability for agricultural use could face severe constraints.

PRESSURE ON WATER RESOURCES FOR AGRICULTURE

Pressures on the country's water resources come from all sectors of the economy, with the highest demand emanating from the agricultural sector. Agriculture dominates the Syrian economy. It contributes about 32% to the GDP and employs nearly 31% of the workforce, with another 50% of the manufacturing force dependent on it for employment. In 2000, the cultivated land area in Syria was estimated at 5.5 million ha, which accounted for about 30% of the total area of the country. Twenty per cent of the cultivated land area (1.2 million hectares) was irrigated. The Euphrates and Orontes basins provided for the majority (Figure 1). The total irrigated area increased
from 650,000 ha in 1985 to 1.3 million ha in 2002 (Somi et al., 2001 and 2002). This rapid expansion of irrigated agriculture is mainly attributed to the government policy objective of achieving food self-sufficiency and to the remarkable increase in groundwater irrigation.
Figure 1: Irrigated area distribution by basin (adapted from the 2001 World Bank Report).

Cereal and cotton production has been encouraged by the government at a policy level as a mechanism for ensuring the country's self-sufficiency. The notion of self-sufficiency has recently been redefined into a more flexible concept oriented towards increasing the production of certain crops that profit from comparative advantage, thus allowing export of these products to counterbalance the need to import other commodities (Sarris, 2001). The production of selected crops, especially wheat and cotton, has shown marked improvement when compared with consumption. The production/consumption ratio for wheat increased from 0.51 in 1989 to 1.41 in 1997, while for cotton it increased from 1.56 to 1.74 during the same period (World Bank, 2001). The high level of self-sufficiency and the increase in the production of selected crops appear, however, to have come at the expense of unsustainable water use patterns. Groundwater use, particularly for irrigation, has increased dramatically over the last two decades (Table 2). Sixty per cent of all the irrigated area in Syria is currently irrigated by groundwater, mostly from privately developed and operated wells.
Table 2: Surface and groundwater irrigated areas in Syria, 1985-2002.

Year | Surface Irrigated (1000 ha) | Groundwater Irrigated (1000 ha) | Total Irrigated Area (1000 ha)
1985 |         334 (51%)           |            318 (49%)            |       652
1990 |         351 (51%)           |            342 (49%)            |       693
1995 |         388 (36%)           |            694 (64%)            |      1082
2000 |         512 (42%)           |            698 (58%)            |      1210
2002 |         583 (43%)           |            764 (57%)            |      1347
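As a rough reading of Table 2 (an illustrative calculation, not from the source), the implied average growth rate of the irrigated area and the shift towards groundwater can be quantified:

```python
# Illustrative: average annual growth of the irrigated area implied by Table 2.
a_1985, a_2002 = 652.0, 1347.0           # total irrigated area, 1000 ha
years = 2002 - 1985
growth = (a_2002 / a_1985) ** (1 / years) - 1
print(f"{growth:.1%} per year")          # ~4.4%/year

# Groundwater-irrigated share over the same period:
print(318 / 652, 764 / 1347)             # ~0.49 -> ~0.57
```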
A substantial portion of the increase in groundwater use is related to increased irrigation of wheat, cotton, citrus, and sugar beet. Area increases have been substantial in the last decade in sugar beet (32%), cotton (75%), irrigated wheat (40%), and citrus (40%). Much of the expansion in wheat has been driven by the rapid rise of its price while the cost of water has remained low. Farmers using public irrigation schemes obtain water at a highly subsidized rate, and groundwater costs do not reflect its real value because the energy required for pumping is also subsidized (Rodriguez et al., 1999). Government policies have contributed to the tremendous increase in groundwater irrigation. Supported wheat prices, which have been higher than world prices for several years, coupled with subsidized energy costs, have proved to be strong incentives for farmers to undertake groundwater irrigation in many areas. This great expansion of groundwater-irrigated agriculture has, however, resulted in groundwater being overexploited in most basins of the country. Continuous declines in groundwater tables have been registered, affecting some surface sources such as spring flows and causing seawater intrusion into land areas adjacent to the sea. Traditionally, surface water has been developed widely in most basins and a large share of the surface water is supplied by dams. Though there still remains some potential for further development of dams and augmentation of storage volume, the cost of such exploitation is considered extremely high. Except for the Euphrates, most of the distribution systems of the irrigation schemes have a low conveyance efficiency, not exceeding 40-50%. Even with the concrete-lined canals used by the irrigation schemes of the Euphrates basin, the conveyance efficiency still does not exceed 60-70%, due to evaporation and poor maintenance (Salman et al., 1999). In order to improve the conveyance efficiency and to provide a more reliable water supply to the fields, the Ministry of Irrigation has planned to convert old open surface distribution systems into pipeline systems and to rehabilitate lined-canal systems. Surface gravity irrigation is the prevailing system at field level, covering about 95% of the irrigated area in Syria. Basin irrigation is the predominant method used for wheat and barley. On-farm water use efficiency is in general low (40-60%) due to over-irrigation using the traditional basin irrigation method. Even with cotton and vegetables, which are irrigated by furrows, the efficiency is still low due to the lack, or inadequacy, of land-levelling. Thus, there seems to be considerable scope for increasing the efficiency of water use at field level by introducing advanced on-farm irrigation techniques like drip and sprinkler irrigation, or by improving on-farm water management and water conservation.
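The conveyance and on-farm efficiencies quoted above compound: to a first approximation, the overall scheme efficiency is their product. A minimal sketch, using only the ranges quoted in the text:

```python
# Rough sketch: overall irrigation efficiency as the product of conveyance
# efficiency and on-farm application efficiency (ranges quoted in the text).
for conv in (0.40, 0.60, 0.70):       # conveyance efficiency
    for farm in (0.40, 0.60):         # on-farm application efficiency
        print(f"conveyance={conv:.0%} on-farm={farm:.0%} -> overall={conv*farm:.0%}")
# e.g. 0.40 * 0.40 = 16%: in the worst case, only about a sixth of the
# diverted water actually reaches the crop root zone.
```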
Moreover, urban water demand has rapidly increased in the country during the last decade due to strong population growth (around 3%) and industrial growth. The primary objective of the national water policy has always been the provision of safe drinking water. Ninety-five per cent of the population in urban areas and 80% of the population in rural areas have access to safe, potable water. Urban and rural water supplies and sanitation facilities have been enlarged and upgraded regularly to accommodate the expanding population. The water balance in most basins has been in deficit. This will be exacerbated in basins encompassing large urban cities like Aleppo and Damascus, putting more pressure on water use for agriculture. The Barada/Awaj basin, where Damascus is located, has no significant water sources, either surface or groundwater, other than the Barada and Figeh springs, which supply drinking water to the inhabitants of Damascus. As most water resources of the basin are increasingly dedicated to supporting Damascus' growing demand for drinking water, internal conflict over water has arisen. Farmers in the Damascus countryside, who have been using groundwater to irrigate their lands for years, have protested that their wells are drying up due to the massive groundwater extraction.

MANAGEMENT, INSTITUTIONAL, AND POLICY ASPECTS

Effective water use in agriculture depends on many factors: technical, economic, social, and political. These factors interact to dictate the overall efficiency of water use. If the dimensions of the challenge of water resources for agriculture in Syria are to be grasped, and if water use is to be optimized, it is important to identify critically which issues limit the effectiveness of water use and which factors are symptoms of underlying problems, and thus to suggest practical solutions to overcome them. The factors affecting the productivity of water are many and they often interact in complex ways. The following overview, however, aims at analyzing the key issues and constraints facing the productive use of water in agriculture in Syria. This analysis considers options at three levels: management, institutional, and policy, and provides recommendations accordingly.

Management aspects

One of the main issues, and perhaps the most important one faced when carrying out any study of water resources and use in Syria, is the availability and reliability of data. It has been noticed that, in spite of the existence of different studies and statements (e.g. World Bank, FAO, individual and official studies), the country's water balance lacks consistency, and water resource availability and use at basin level differ according to the source of information. In some studies, e.g. the World Bank's, it was clearly indicated that water resources management issues pertaining to international rivers, as agreed with the government, are not discussed, and thus the related data were not made available (World Bank, 2001). It was also indicated in the same study that water use estimates are provisional and need to be reviewed, and that data on groundwater availability and quality, despite numerous studies, appear fragmented and scattered (World Bank, 2001).
Since monitoring data are presently collected by different institutions (Ministry of Irrigation, Ministry of Agriculture, Ministry of Defence), there is a clear need for its consolidation and transparency, in a consistent and compatible manner, in a common
database that serves as a decision support system which can be accessed and shared by all the institutions concerned.

Groundwater is probably the single most important challenge facing Syria. As in many other developing countries, groundwater wells represent an "on-demand" source of irrigation, in contrast to government surface irrigation schemes; they thus provide a more reliable source of water to farmers. Legally, licenses are required to drill and use wells; licenses specify the extent of water use and require renewal every ten years. However, poor enforcement has resulted in a large increase in the number of illegal wells in recent years (almost 50% of the total number of wells), which has contributed to groundwater table declines in many areas, especially in the Damascus countryside (Table 3). The lack of critical information, such as the amount of renewable groundwater resources and the interaction between the surface and groundwater systems, has made the task of enforcement more difficult, if not impossible. Thus, an urgent plan and action to rehabilitate and upgrade the hydrological monitoring network for groundwater resources, together with intensive monitoring, needs to be established.

Table 3: Licensed and non-licensed wells by region, May 2002 (Ministry of Agriculture).

Region     | Licensed | Non-licensed |  Total
Al-Hassaka |  18747   |    10351     |  29098
Total      |  76777   |    95910     | 172687

(The entries for the remaining regions, including Latakia, Al-Raqa and Deir Al-Zour, are not legible in the source.)

While there seems to be over-exploitation of non-renewable groundwater in most basins of the country, resources may be available for development in the Coastal and Steppe basins, where the water balance is in credit (Table 1). However, such further development should be approached with caution, and the estimated recharge should first be carefully reviewed and confirmed.
In critical basins where groundwater overdraft seems to be a serious problem, e.g. Barada/Awaj and Al-Khabour, immediate action to empower enforcement and ban illegal drilling has to be taken. In addition, a long-term plan to remedy the problem could be implemented by introducing well-consolidation as an alternative to well-closure, in conjunction with a pipeline distribution system and modernized on-farm irrigation systems. The long-term plan involves the closure of private wells and the provision of water to farmers through a much more limited number of collective wells. It also involves the construction of a pipeline conveyance system and arrangements for the proper installation of modern on-farm irrigation systems at farmers' outlets. This reduces well interference problems and allows wells to be carefully located where resources are sufficient. In addition, it establishes clear points where control can be exerted over extraction levels and water use efficiency can be encouraged (Mualla and Salman, 2002). This approach has been used in the Aleppo and Aljezira regions. However, indications from the Aleppo project show that, due to the lack of a proper social assessment, conflicts amongst farmers over the use of water have arisen. Therefore, a careful social assessment must be carried out in advance to identify the needs and concerns of farmers and so avoid potential conflicts.

The government of Syria, through the Ministry of Irrigation, has started an ambitious plan to invest about 32 billion Syrian Pounds (600 million US$) over the next 4 years in the rehabilitation and modernization of old irrigation projects, to improve conveyance efficiency and minimize distribution losses by converting open irrigation canal systems to pressurized pipe systems and rehabilitating lined-canal systems. Although the plan looks promising, the cost of implementing it is high (the estimated average cost of conversion to a pipe system is about $3,600-4,000/ha according to Ministry of Irrigation sources) and the capacities of staff to operate and maintain the new systems are low; a rough sense of the plan's scale is sketched below. A study by the International Programme for Technology and Research in Irrigation and Drainage (IPTRID) on the irrigation modernization of the Old Alyarmook Project in Syria concluded that, while the technical upgrade was carefully carried out, the managerial and institutional upgrades were not given attention (Salman, 2002). Therefore, a parallel effort to strengthen the operation and maintenance capacities of staff is needed in order to achieve the ultimate goal of the upgrade. In recent years, the government has adopted the modernization policy at field level and encouraged farmers to change to modern irrigation techniques by providing tax-free, low-interest loans to cover the capital costs of modern techniques, together with technical advice on the implementation and use of such systems. However, the level of adoption of these techniques is still low due to: the lack of confidence amongst farmers that the expected financial return from such techniques justifies the investment and effort associated with them; the lack of incentives amongst farmers to invest in modernized on-farm irrigation systems; inadequate technical support from extension services; and inappropriate interfacing between the public distribution system and the advanced on-farm irrigation systems. It is, therefore, important for the government to accelerate the pace of modernization in order to increase water use efficiency at field level.
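An illustrative back-of-envelope calculation (not from the source, using only the figures quoted above) suggests how much area the announced budget could cover at the quoted conversion cost:

```python
# Illustrative check: area coverage implied by the rehabilitation budget
# at the quoted conversion cost (figures as quoted in the text).
budget_usd = 600e6                  # ~32 billion SYP over 4 years
cost_per_ha = (3600, 4000)          # Ministry of Irrigation estimate, USD/ha
for c in cost_per_ha:
    print(f"at ${c}/ha: {budget_usd / c:,.0f} ha")
# ~150,000-167,000 ha, i.e. roughly a tenth to an eighth of the
# 1.35 million ha currently irrigated.
```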
In doing so, careful attention should be given to the following issues:
- selection of appropriate on-farm irrigation techniques, taking into account the local conditions and the available level of management skills;
- provision of technical and extension assistance to farmers in the installation, use and management of the modern techniques;
- strengthening the capacity of the Ministry of Agriculture's extension services that are ultimately delivered to farmers;
- introduction of improvements in land levelling through new techniques and better water distribution;
- enhancement of the private sector's role in advanced irrigation equipment, so as to produce more affordable sprinkler and drip irrigation equipment with proper quality control; and
- enhancement of incentives amongst farmers by reconsidering the subsidy policy on energy and water pricing.

Finally, it is common practice in Syria to provide drainage simultaneously with irrigation when new lands are reclaimed and developed. Most of the drainage and salinity problems in Syria exist in the Euphrates basin, where both open and subsurface pipe drains are practiced. Waterlogging and salinization are often due to poor irrigation management at farm level and poor maintenance of the drainage network. Although the government is currently converting open drains in problem areas into subsurface drainage, in line with the conversion of irrigation canals into low-pressure pipelines, improvements in on-farm irrigation and in drainage have to go hand-in-hand.

Institutional aspects

Any discussion of the institutional issues in water management in Syria must start with recognition of the sensitive nature of the topic, and thus of the limitations on the flow of information amongst the different organizations involved in the water sector. Recent studies have shown that most of the water resources in the country have been developed and that there is an urgent need to make a transition from a water development to a water management approach. Such a transition requires an effective institutional framework with adequate capacities. The greatest direct responsibility for water resources related activities appears to belong to the Ministry of Irrigation, through its headquarters in the capital and the different decentralized general directorates in the basins. The Ministry of Agriculture has only an advisory role on cropping patterns and on-farm water use, as well as the role of providing basic extension and research services to farmers. According to recent studies by the World Bank, JICA, and FAO, it is apparent that the level of feedback amongst the different organizations is limited, and accountability and responsibility amongst them are sometimes blurred. Data and information are often kept within each organization, and coordination and data exchange are low. From a management perspective, it appears that there is a major institutional limitation with regard to data collection and handling, and a lack of cooperation between the organizations involved. Therefore, it is essential to focus on this matter and strengthen collaboration and exchange between the different organizations. The Ministry of Irrigation has made a step forward by establishing a Water Resources Information Center (WRIC) in collaboration with the Japan International Cooperation Agency (JICA). The long-term objective of this center is to achieve integrated and sustainable surface and groundwater management (both quantity and quality) in Syria. A water resources information system comprising hydrological and meteorological observation stations, a computer system and a computer network has been established at the main center in Damascus and at two basin centers (the Barada/Awaj and Coastal basins). Other basin centers will be established at a later stage.
The project also involves the preparation of a monitoring programme covering meteorological, hydrological, groundwater quality and unconventional water data, in the Barada/Awaj and Coastal basins in the first stage, and in the remaining basins at a later stage
(JICA, 2002). However, there is still a need to link such programmes with other ministries and organizations, allowing the exchange of data and ending isolation. It also appears that the different ministries and organizations involved in activities related to agriculture have clear capacity limitations and lack staff skilled in the wide array of social, economic, and technical issues necessary for water management rather than development. Intensive and broad capacity-building programmes for staff at different levels, and in different domains, are necessary to establish an accurate understanding of water management and to provide a strong foundation for the transition from a development to a management approach. It is also noted that there is a clear lack of communication and outreach between water authorities and farmers. An IPTRID study on the Old Alyarmook project found that the project research units working on the application of the new on-farm irrigation techniques and on the determination of crop water demands thought they were doing well, but their results never reached farmers (Salman, 2002). Therefore, establishing communication channels between water agencies and farmers, and re-establishing the farmers' confidence in the agencies, is of the utmost importance, and must be accomplished by strengthening the capacity of the extension services and transferring research results to farmers. According to officials, over 140 laws that address water have been passed during the last 70 years. These laws, however, are of a fragmented nature and lack enforcement. Therefore, superseding and replacing the current laws is necessary, and a comprehensive and unified law has to be established. The Ministry of Irrigation has recently drafted a bill for a new law. This proposed new law confirms established rights on public water but gives the government the authority to nullify them, and requires compensation if this is done. The process will simply be based on land, not water. The high degree of centralization in the regulation and management of water resources, as well as in enforcement actions, is also evident in the new law, i.e. management decisions are allocated to higher levels. The new law also includes sections on water pollution control and legal actions in case of violation. It prohibits the disposal of wastes that may cause pollution, from any source, into any public waterway. The new legislation contains, however, no indication of how it would be implemented, nor does it specify who would have responsibility for enforcement. The new legislation has not yet been enacted and is currently being considered by the Parliament. Finally, farmers seem to have a key role in the planning and management of the agricultural sector in Syria. This role, however, exists only at the high policy level, through the farmers' union representation, and does not extend to scheme level, since water user associations do not exist. Although there may be no direct, organized role for farmers in managing water at scheme level, there is a significant opportunity to develop user-based approaches and to establish an institutional framework for farmers to participate more in water management, i.e. to establish water user associations.

Policy aspects

As the economy of Syria has been primarily based on agriculture, agricultural policies have been given great attention by the government and placed at a high level of decision-making. Agricultural self-sufficiency has been the major stated objective of the government.
The concept of self-sufficiency has recently been modified to be more flexible, allowing an increase in the production of certain crops and hence the export of these products to counterbalance the need to import other commodities. Overall, self-sufficiency has shown remarkable improvement, especially in wheat, cotton, and
barley production. Though this remarkable improvement has reached an acceptable level, ensuring internal stability and buffering the country's exposure to international market fluctuations, it has come at the expense of unsustainable water use patterns. There are water deficits in most basins, especially with groundwater. The government, therefore, should review its agricultural policy regarding the production of wheat and cotton, and encourage the growth of diversified high-value and/or less water-intensive crops as a primary avenue for increasing water availability and to secure the gains from its other policy of modernizing the existing irrigation systems. In recent years, the government of Syria has set a legal framework which responds to specific policy objectives. Each policy objective includes several policy strategies. One of the objectives related to water is the conservation of water resources, and one of its strategies is to adopt modern irrigation techniques. In 2001 the government decided that all irrigated areas would be equipped with modern irrigation techniques within 4 years, and it established financial and technical measures to help farmers convert to the use of the modern techniques. A study by the FAO, in collaboration with the Ministry of Agriculture in Syria, analysed different policy scenarios regarding the adoption of modern irrigation techniques at macro as well as micro levels (FAO-MAAR, 2001). One of these scenarios at the macro level is the government's present policy of combining irrigation modernization for a period of 4 years with irrigation expansion for a period of 15 years. The study showed that, in spite of the substantial impact that could be obtained with the modernization programme, the expansion of the irrigated area has a marked counterbalancing effect. In other words, the current policy may only be sustainable in the medium term, and the gains in reducing deficits may not be remarkable. The study suggested a differentiated water basin policy, with an intensive plan of modernization directed at the most critical basins, e.g. the Al-Khabour basin, combined with a lower rate of expansion in those basins, while new irrigation areas are to be developed exclusively in basins with positive water balances, e.g. the Coastal basin (FAO-MAAR, 2001; Varela-Ortega and Sagardoy, 2002). At the micro level, the FAO study indicated that, even if modern irrigation techniques are substituted for traditional irrigation systems, the new systems alone would not ensure an effective use of water. Farmers benefit from subsidies for water, energy, and products. The IPTRID study (2002) showed that farmers have no incentive to use water efficiently even with the new techniques, since the fees are not related to the volume of water used but are based on a flat rate per unit of land owned (Salman, 2002). It is, therefore, necessary for the government to consider new water tariffs that will encourage farmers to use water more efficiently, and to ensure that the gains from transforming the on-farm irrigation systems are not dissipated. Evidence has shown that the recent lifting of the subsidy on the price of electric energy and diesel fuel has undoubtedly contributed to groundwater conservation. Similar increases in the early nineties affected many irrigators in the country and forced them to reduce their total volume of groundwater pumping (Mualla and Salman, 2002).
However, reforms in irrigation tariffs will not be sufficient to improve sustainable water use unless they are accompanied by other, non-price measures, such as the transfer of management responsibilities to water users (e.g. the establishment of water user associations). Finally, none of the measures, whether price or non-price, will be effective in increasing the efficiency of water use unless a clear shift from water mobilization to demand management occurs in the thinking at all levels of decision-making. This shift will combine all measures: technical, legal, financial, and institutional.
SUMMARY AND CONCLUSIONS
The water balance for Syria indicates that most of the basins are in deficit. This will be exacerbated further, especially in basins encompassing large urban areas, and if the country's population continues to grow at its current rate (about 3%) and water use efficiency is not increased effectively. New water resources are becoming scarce and extremely expensive to develop. Therefore, shifting from water development and mobilization to water management, in particular water demand management, is necessary to reach sustainability in water use and agriculture. While there are many hurdles and challenges in the proper development and management of the agricultural sector in Syria, these can be overcome by suitable planning and implementation of management, institutional and policy measures. From a management perspective, there is an urgent need to strengthen data collection and improve analysis, as this is central to developing an accurate understanding of water management challenges and options. While groundwater irrigation has rapidly expanded in all basins over the last two decades, over-exploitation has become a serious problem that requires careful management and enforced legislation to control. Though modernization has been introduced at both distribution and on-farm levels, technical and managerial problems seem to persist, urging the need to take into account the local conditions and the available level of management skills before application. Capacity building appears to be crucial, as the organizations involved in activities related to water have limited capacities and few skilled staff. Intensive programmes for staff at different levels, knowledge sharing, and performance enhancement are required. Financial reforms, as well as the enhancement of the role of water users in management, are key to a sustainable agricultural sector. New legislation with strong enforcement, and water user associations, therefore need to be established. Finally, the preparation of national policies, and the adjustment of existing ones, should be carefully considered in recognition of priorities or in response to major shifts or needs. Where decisions are based on good cost and benefit data, trade-off decisions are more transparent.
REFERENCES
1. ESCWA, Economic and Social Commission for Western Asia (1998). "Survey of Economic and Social Developments in the ESCWA Region 1997-1998", United Nations, New York, Report E/ESCWA/ED/1998/5.
2. FAO-MAAR (2001). "The Utilization of Water Resources for Agriculture in Syria". FAO Report GCP/SYR/006/ITA.
3. JICA (2002). "Establishment of Water Resources Information Center in the Ministry of Irrigation, Syria". Project document.
4. Mualla, W. and Salman, M. (2002). "Progress in water demand management in Syria". Proceedings of the Water Demand Management in the Mediterranean Region Conference, Fiuggi, Italy, October 2002.
5. Rodriguez, A., Salahieh, H., Badwan, R. and Khawam, H. (1999). "Groundwater Use and Supplemental Irrigation in Atareb, Northwest Syria". ICARDA Social Science Paper No. 7. ICARDA, Syria.
6. Salman, M. (2002). "Case Study on Irrigation Modernization of the Old Alyarmook Project in Syria". Study Report. IPTRID, FAO, Rome, Italy.
7. Salman, M., Burton, M. and Dakar, E. (1999). "Improved Irrigation Water Management, or Drainage Water Reuse: A Case Study from the Euphrates Basin in Syria". Proceedings of the 2nd Inter-Regional Conference on Environment-Water, Lausanne, Switzerland, September 1999.
8. Sarris, A. (2001). "Agriculture Development Strategy for Syria". FAO Project GCP/SYR/006/ITA Report, Assistance in Institutional Strengthening and Agricultural Policy. Rome, Italy.
9. Somi, G., Zein, A., Dawood, M. and Sayyed-Hassan, A. (2002). "Progress Report on the Transformation to Modern Irrigation Methods until the End of 2001". Internal Report, Ministry of Agriculture and Agrarian Reform, Syria (in Arabic).
10. Somi, G., Zein, A., Shayeb, R. and Dawood, M. (2001). "Participatory Management of Water Resources for Agricultural Purposes in the Syrian Arab Republic". Internal Paper, Ministry of Agriculture and Agrarian Reform, Syria (in Arabic).
11. Varela-Ortega, C. and Sagardoy, J.A. (2002). "Analysis of irrigation water policies in Syria: Current developments and future options". Proceedings of the Irrigation Water Policies: Micro and Macro Considerations Conference, Agadir, Morocco, June 2002.
12. World Bank (2001). "Syrian Arab Republic Irrigation Sector Report". Rural Development, Water and Environment Group, Middle East and North Africa Region, Report No. 22602-SYR.
THE LOWER JORDAN RIVER URI SHAVIT, RAN HOLTZMAN AND MICHAL SEGAL Faculty of Civil and Environmental Engineering, Technion Israel Institute of Technology, Haifa, Israel ITTAI GAVRIELI Geological Survey of Israel, Jerusalem, Israel EFRAT FARBER AND AVNER VENGOSH Department of Geological and Environmental Sciences Ben Gurion University, Beer Sheva, Israel
PREFACE

The paper presents the results of an ongoing collaborative effort between Jordanian, Palestinian and Israeli researchers in an attempt to identify the sources and mechanisms of salinity and pollution along the Lower Jordan River. The outcome of the study and the improved understanding of the hydrological system will benefit the local communities, provide the water authorities with better decision tools and, we hope, safeguard the long-term sustainability of this unique ecosystem. The study results were presented in the Water Conflicts session of the International Seminars on Planetary Emergencies, in August 2003, in Erice, Sicily. The paper does not emphasize regional water conflict issues but rather demonstrates the great potential of collaboration.

INTRODUCTION

Water sources in arid and semi-arid regions are overexploited. Many rivers in these regions have become saline and polluted and now deliver low flow rates, which endanger their future sustainability (Pillsbury 1981, Williams 2001). The Lower Jordan River is an extreme example of such a river, where the combination of excessive water needs and lack of environmental attention has led to a devastating drying process (Salameh, 1996). The Lower Jordan River stretches between the Alumot dam (downstream from the Sea of Galilee, 32°42'N, 35°35'E, 210 m below sea level) and the Dead Sea (31°47'N, 35°33'E, 410 m below sea level), with a catchment area of about 15,000 km2 (Efrat, 1996; Salameh, 1996; Hamberg, 2000). The aerial length of the river is about 105 km, while the real length, along its meanders, is about 190 km (Hamberg, 2000). This paper focuses on the northern part of the Lower Jordan River, starting at Dalhamia (site #6 in Fig. 1) and ending at the Hamadia pumping station (site #31). The area under investigation is occupied by rural settlements and the majority of the land is used for agriculture (e.g. field crops, dates, plants and fisheries). Tributaries include natural streams and artificial canals (e.g. agricultural and fishpond drainage) with season-dependent flow and chemical characteristics. The river flow has decreased from about 1300×10^6 m3/year at the outlet to the Dead Sea to very low flow rates, estimated at around 100-200×10^6 m3/year (Salameh and Naser, 1999). The historical main tributaries included the Upper Jordan River flowing through the Sea of Galilee (540×10^6 m3/year), the Yarmouk River (480×10^6 m3/year) and local streams and runoff (Hof, 1998). Since the construction of
water supply projects in Israel, Jordan, and Syria, the Sea of Galilee and the Yarmouk River have been blocked, and no fresh surface water flows into the river except for rare flood events and negligible contributions from small springs. Currently, the only two water sources at the starting point of the Lower Jordan River are the effluent of the Bitania wastewater treatment plant and the Saline Water Carrier (sites #1 and #2 in Fig. 1). The Bitania source includes poorly treated human and animal waste effluents. The Saline Water Carrier contains a mixture of saline spring water diverted from the Sea of Galilee and urban sewage effluents. As a result of the degradation of water quantity and quality, the Lower Jordan River has become brackish (Salameh, 1996). In the past, when the flow rate of the Lower Jordan River was high, the influence of groundwater seepage and agricultural return flows was negligible. However, both sources have the potential to become significant following the sharp decrease in the river flow and the growing agricultural activity along its banks (Farber et al., 2003). Nevertheless, no quantitative information was available until now, and the quantities and qualities of the potential subsurface contribution to the river water were not known. We will show that the river chemistry changes along its course and that surface inputs cannot explain the overall chemical variations. In the following sections we present the results of flow-rate measurements, mass balance calculations and geochemical analyses, by which the chemistry of the northern part of the Lower Jordan River can be attributed to inputs reaching the river through the subsurface. The results were obtained from a collaboration between Jordanian, Israeli and Palestinian researchers during the years 2000 and 2001. The rainfall in these years was very limited, and the study results therefore represent the river base flow under drought conditions.
Fig. 1 - The northern part of the Lower Jordan River (sites are listed in Table 1).

Table 1: Sampling and discharge measurement sites along the Lower Jordan River (Fig. 1). X is the aerial distance from site #3; 'JR' represents the Jordan River sites; 'W' and 'E' represent western and eastern tributaries (streams and drainage canals); 'GW' represents groundwater sampling through boreholes, wells or springs; 'D' represents agricultural drainage; and 'F' represents fishponds. (The body of the table is not recoverable from the source.)
METHODS

Water sampling and chemical analysis

The waters of the river and its tributaries on both sides of the Jordan River were sampled between September 1999 and August 2001. Other water samples were collected from fishponds, agricultural drainage canals and different groundwater resources. All samples were stored at 4°C prior to analysis, filtered through 0.45 µm Millipore membranes and then analyzed at the Geological Survey of Israel using ion chromatography. The methodology and results of additional chemical and isotope analyses obtained for these water samples are described in detail by Segal et al. (2003) and Farber et al. (2003).
Flow rate measurements

Flow rate was measured during five field trips between February 2001 and August 2001, at three river cross-sections along the northern part of the river, at two cross-sections along the southern part (not reported here) and in the river's eastern and western tributaries (as near as possible to their confluence with the Jordan River). The flow rate measurement technique was adjusted according to local conditions. In particular, since the river centerline serves as the international border between Israel and Jordan, the research teams were not allowed to cross the river, and a special measurement procedure was developed accordingly. A portable acoustic Doppler velocimeter (Argonaut-ADV, Sontek, USA) was mounted on a vertical pole held by a specially designed floating traverse construction (Fig. 2). By cruising the floating construction across the river using light magnesium poles, both water velocity and riverbed profiles were obtained. The pole can move up and down using a step motor and a control cable. The immersion depth of the instrument was reported by an internal pressure gage (±1 cm), the internal compass and tilt sensors reported its orientation, and the lateral location was measured along the magnesium poles (±5 cm). Serial communication and a portable computer were used for instrument control and data recording. The three-component velocity vector was measured with a high signal-to-noise ratio thanks to the high turbidity of the river. The measurement of the riverbed profile utilized the boundary reflection signal and the ability of the instrument to separate it from the velocity signal. In combination with the built-in pressure gage and the measurement of the lateral location, the river bathymetry was measured with respect to the water level, with a relative accuracy of ~2% (Holtzman, 2003).
Fig. 2 - The floating traverse construction, the acoustic Doppler velocimeter (ADV) and the field setup used for discharge measurements.
The ADV was programmed to measure the velocity vector 2,000-10,000 times (at a frequency of 10 Hz) within its ~0.25 cc sampling volume at each point, resulting in an estimated relative accuracy of 1% (Sontek, 2000). Post-processing was applied to remove false data, and an average value was obtained. An electromagnetic velocimeter (Flo-Mate Model 2000, Marsh-McBirney Inc., USA) was used to measure velocities in the western tributaries, while a dipping bar (Hydro-Bios, Germany) was applied in the eastern tributaries. Velocity was measured with the electromagnetic velocimeter 5-10 times at each point with a sampling frequency of 30 Hz (a total average of 750-1,500 measurements at each point). The accuracy of the
electromagnetic velocimeter measurements was estimated as 2%, with a 1.5 cm/s zero-offset induction (Marsh-McBirney Inc., 1990). The accuracy of the dipping bar measurements was estimated as 20% (Von-Bornes, Hydro-Bios, personal communication, 2003). Flow rate was obtained by an integration of the scalar product between the velocity vector and the cross-sectional area vector, at 30 to 50 points across each cross-section of the river and at 5 to 20 points across the tributaries. It was found that in most cases the vertical velocity profiles fit a power law, u = az^m, where u is the velocity component perpendicular to the cross-section, z is the height above the riverbed, and a and m are constants calculated using a linear curve-fit procedure. For each cross-section, a choice was made between an integration of the power law,

$$Q = \sum_{j=1}^{n} b_j \int_0^{H_j} a_j z^{m_j}\,dz = \sum_{j=1}^{n} \frac{a_j b_j H_j^{m_j+1}}{m_j+1}, \quad (1)$$

and a simple 2D integration scheme,

$$Q = \sum_{j=1}^{n} b_j \sum_{k=1}^{K_j} u_{j,k} h_{j,k}, \quad (2)$$

which, for some of the river cross-sections and for all tributaries, provided a better fit than Eq. 1. Here j is the column index, n is the number of columns, k is the cell index, h_{j,k} is the cell height and K_j is the number of cells within the column; b_j is the width of column j and H_j is its height. The potential maximum error generated by Eqs. 1 and 2 was estimated as 6% for the river flow and as 19% and 29% for the western and eastern tributaries, respectively (Holtzman, 2003). Finally, electronic water level gages equipped with data loggers were installed in June and August 2001 to adjust the value of the river cross-section area and to evaluate the potential error in the mass-balance calculations due to deviations from the assumed steady-state conditions. The estimated relative accuracy of the measured water depth and cross-section width is 5% and 2%, respectively.
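A minimal sketch of the two integration schemes of Eqs. 1 and 2 for a single vertical "column" of the cross-section follows; the profile data are hypothetical and the code is illustrative, not the authors' implementation:

```python
import numpy as np

def q_power_law(z, u, width, depth):
    """Eq. 1 for one column: fit u = a*z**m by a linear fit in log space,
    then integrate the power law analytically over the column depth."""
    m, log_a = np.polyfit(np.log(z), np.log(u), 1)
    a = np.exp(log_a)
    return width * a * depth ** (m + 1) / (m + 1)

def q_cells(u, cell_heights, width):
    """Eq. 2 for one column: sum of velocity * cell height * column width."""
    return width * np.sum(u * cell_heights)

# Hypothetical measured profile: 5 cells of 0.2 m in a 1.0 m deep, 2.0 m wide column.
z = np.array([0.1, 0.3, 0.5, 0.7, 0.9])        # cell mid-heights above the bed (m)
u = 0.6 * z ** 0.25                            # velocities (m/s), roughly power-law
print(q_power_law(z, u, width=2.0, depth=1.0)) # ~0.96 m3/s
print(q_cells(u, np.full(5, 0.2), width=2.0))  # ~0.97 m3/s
```

For a real cross-section, the total discharge is the sum of such column contributions across the 30-50 measured verticals.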
RESULTS AND DISCUSSION

The river chemistry

The river chloride concentration as a function of aerial distance is shown in Fig. 3. The river's initial salinity stems from the Saline Water Carrier. Downstream, three distinct sections are identified: the first 25 km, along which the chloride concentration decreases; a middle section with little variation; and a 35 km section that shows a significant salinity increase. As evaporation cannot explain the decrease along the first 25 km, the relative influence of tributary inflows, agricultural return flow and groundwater interaction should be examined.
Fig. 3 - Chloride concentration along the Lower Jordan River (surveys of September 2000, December 2000, February 2001 and March 2001). The chloride variations lead to a geographical classification into three river sections: the first 25 km, along which the chloride concentration decreases; a middle section with few variations; and the last 35 km, which show a significant salinity increase.
Farber et al. (2003) analyzed the geochemical evidence collected from the river and its surroundings. They showed that the chemical and isotopic compositions of the tributary inflows are inconsistent with the trend observed in the river and cannot account for the river's chemical and isotopic modifications. The tributary inflows have higher 87Sr/86Sr ratios (western and eastern tributaries) and lower SO4/Cl ratios (western tributaries) when compared with the river water. The chemical and isotopic variations recorded in the Lower Jordan River seem to be caused by waters with a composition of high Na/Cl, high SO4/Cl, low δ34S(sulfate) and low 87Sr/86Sr values. As will be shown, the mass balance calculation leads to a similar conclusion. A mixing process between two distinct water bodies may be represented by a mixing equation that leads to a straight line when plotting one conservative constituent versus another. Sulfate in the Lower Jordan River is considered to be a conservative constituent. It increases, on average, by more than 300 mg SO4/L along the first 25 km alone. These findings make sulfate an ideal tracer.
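For completeness, a short derivation (standard two-end-member mixing algebra, not reproduced from the source) of why conservative constituents plot on a straight line:

```latex
\[
  C_{\mathrm{mix}} = f\,C_1 + (1-f)\,C_2, \qquad 0 \le f \le 1,
\]
% Writing this for both chloride and sulfate and eliminating the mixing
% fraction f between the two equations gives
\[
  \frac{\mathrm{SO_4}_{,\mathrm{mix}} - \mathrm{SO_4}_{,2}}
       {\mathrm{Cl}_{\mathrm{mix}} - \mathrm{Cl}_{2}}
  = \frac{\mathrm{SO_4}_{,1} - \mathrm{SO_4}_{,2}}
         {\mathrm{Cl}_{1} - \mathrm{Cl}_{2}}
  = \text{constant},
\]
% so every mixture of the two end-members falls on the straight line joining
% them in the Cl--SO4 plane, which is what the trend lines of Fig. 4 test.
```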
Fig. 4 shows the variation of the river sulfate versus chloride concentration. The classification of the river into three distinct sections appears again: in the first section sulfate increases while chloride decreases, whereas both constituents increase along the last section. Fig. 4 shows that the variations along both sections follow straight trend lines, suggesting that two different mixing processes control the river chemistry in its northern and southern sections. In order to verify this conclusion we used flow rate measurements and mass balance calculations, as described in the following.
Fig. 4 - Sulfate versus chloride concentration in the Lower Jordan River. Arrows A-B and B-C mark the average trend in the northern and southern sections, respectively.
Flow rate measurements

Flow rate measurements show that the base flow discharge of the Lower Jordan River (500-1,100 L/s) is about 40 times smaller than the historical flow rates. The results for the river segment between site #21 and site #31 (Fig. 1) can be found in Table 2. The low flows increase the relative influence of other sources, once negligible compared to the river flow. This enables a quantitative examination of the geochemical and isotopic evidence, which shows a considerable contribution of return flows and groundwater interaction (Farber et al., 2003). The quantitative analysis is based on a mass balance approach, from which the discharge and the chemistry of these sources were computed.
Table 2: Measured discharges (L/s) and water mass balance calculations (L/s) along the segment between site #21 and site #31 (Fig. 1) in the Jordan River, for the surveys of 02/2001, 03/2001, 04/2001, 06/2001 and 08/2001. Influx is marked as positive. Q21 and Q31 represent the measured flow rate of the river at the inlet and outlet of the segment; the other flow rates represent tributaries, pumping stations and evapotranspiration. (Rows include Q21 Neve-Ur (North), Q21,p Neve-Ur Pump (N), Wadi El Arab, the drainage canal, Q24,p Neve-Ur Pump (S), the fish pond outlet, the Doshen pumps and Doshen canal, Wadi Teibeh, the Zor pumps, Q31 Hamadia and ET(N) evapotranspiration; the individual entries are not reliably recoverable from the source. The bottom-line mass balance results, i.e. the distributed subsurface inflow, are 671, 274, 323, 200 and 292 L/s for the five surveys, respectively.)
and Qotlt,iare the measured flow rates in inlet and outlet i with non) Where such inlets and n(o,t) outlets (including pumping stations). qin(x) and qo&) are the distributed fluxes (flow rate per unit river length) along a segment between X I and x2. V is the water volume along the segment, B '(x) is the effective width of the river that takes into the account the vegetation influence and ET(x) is the rate of evapotranspiration (flow rate per unit area). When assuming steady state conditions (aV//at = 0) and zero distributed outflux ( qou,(x)dx= 0), the total flow rate of the subsurface contribution ( qin( x ) d x ) can be'calculated. Table 2 shows a list bf the total flow rates (L/s) measured between site #21 and site #3 1 (Fig. 1). Qzl and QJI represent the measured flow rate of the river at the inlet and outlet of the segment. The other flow rates represent tributaries, pumping stations and evapotranspiration. The result of the mass balance calculation (Eq. 3) is shown at the bottom of the table. Note the small influence of the evapotranspiration component. Also note that the June calculation contains somewhat higher uncertainties as the water from one of the fishponds was released to the river during our measurements. The inflow of the unknown source is about 200-240 L/s between sites #12 and #21 (not shown in the Table) and 200-670 L/s between sites #21 and #31. It
This unknown source contributes 20-40% of the river's measured discharge. This result confirms the geochemical conclusion that a subsurface component exists, enters the river and modifies its chemistry. The chemical analyses of the water samples collected at all inlets and outlets simultaneously with the flow rate measurements allow us to carry out mass balance calculations for the conservative solutes. In particular, such calculations were obtained for chloride, sulfate and sodium, using the following equation:
$$\frac{\partial (V C^s)}{\partial t} = \sum_{i=1}^{n_{in}} Q_{in,i}\, C^s_{in,i} - \sum_{i=1}^{n_{out}} Q_{out,i}\, C^s + \int_{x_1}^{x_2} q_{in}(x)\, C^s_q(x)\,dx - \int_{x_1}^{x_2} q_{out}(x)\, C^s(x)\,dx, \quad (4)$$

where C^s is the river concentration of constituent s, C^s_{in,i} is the concentration at inlet i, and C^s_q is the concentration of the distributed influx that enters the river through the subsurface. As before, when assuming steady-state conditions (∂(VC^s)/∂t = 0) and zero distributed outflux (∫ q_out(x) C^s(x) dx = 0), the total mass flow rate of the subsurface contribution (∫ q_in(x) C^s_q(x) dx) can be calculated. Table 3 shows the measured and computed solute mass flow rates in the river (site #31) and through the subsurface. It also shows the ratio between the subsurface contribution and the river mass flow. It is apparent that in most cases the subsurface contribution is significant and that the chemistry of the river indeed changes accordingly. It also shows that the sulfate influx is relatively higher, while the chloride and sodium results are similar. It is clear that the end member(s) that influence the river chemistry contain high sulfate concentrations and should be searched for.
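As a quick sketch of how the "Ratio" rows of Table 3 are obtained (using the 02/01 figures from the table itself):

```python
# Ratio of the computed subsurface solute flux to the flux measured at the
# segment outlet (Hamadia, site #31); mass flow rates in g/s from Table 3.
subsurface = {"Cl": 900,  "SO4": 327, "Na": 447}
hamadia    = {"Cl": 1656, "SO4": 510, "Na": 808}

for s in subsurface:
    print(s, round(subsurface[s] / hamadia[s], 2))
# Cl 0.54, SO4 0.64, Na 0.55: the subsurface source carries over half of the
# solute load at the outlet, with sulfate enriched relative to chloride.
```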
Table 3: Measured and computed solute mass flow rates in the river and through the subsurface.

Period |     Source     | Cl (g/s) | SO4 (g/s) | Na (g/s)
02/01  | Subsurface     |   900    |    327    |   447
       | Hamadia (#31)  |  1656    |    510    |   808
       | Ratio          |   0.54   |    0.64   |   0.55
03/01  | Subsurface     |   449    |    180    |   202
       | Hamadia (#31)  |  1673    |    482    |   772
       | Ratio          |   0.27   |    0.37   |   0.26
04/01  | Subsurface     |   470    |    210    |   220
       | Hamadia (#31)  |  1330    |    374    |   641
       | Ratio          |   0.35   |    0.56   |   0.34
06/01  | Subsurface     |   109    |    190    |   114
       | Hamadia (#31)  |  1939    |    465    |   928
       | Ratio          |   0.06   |    0.41   |   0.12
08/01  | Subsurface     |   304    |    126    |   147
       | Hamadia (#31)  |   827    |    177    |   390
       | Ratio          |   0.37   |    0.71   |   0.38
A comparison between the mass balance results and the geochemical analysis

The concentration of species in the subsurface influx was calculated by dividing the mass flow rate of each solute by the volumetric water flow rate, given that the assumption of zero distributed outflux holds. These calculations are used to compare the mass balance results with the geochemical analysis. The calculated concentrations of chloride and sulfate in the distributed subsurface influx are plotted in Fig. 5, together with the measured concentrations of the river samples that appeared earlier in Fig. 4. As the complete mass balance calculations were obtained for segments in the northern section only, the data from the southern section (line B-C, Fig. 4) were removed. Measured concentrations of water samples representing potential end-members, such as fishponds, agricultural drainage and tributaries, were added to the plot. Note that, except for the Yarmouk River, the chemistry of the potential eastern tributaries (e.g. Wadi El Arab and Wadi Teibeh) is outside the scale of Fig. 5. The figure shows that the samples taken from the fishponds, from the western tributaries and from the well (Hamadia well) representing the shallow groundwater on the west side of the river are not consistent with the river trend. Samples collected from the Saline Yarmouk River and from some of the agricultural drainages can be considered as representing the end-members that change the river chemistry. Fig. 5 shows that the results of the mass balance calculations lie between the chloride/sulfate data points of the Saline Yarmouk River and those of the Jordan River.
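A minimal sketch of this conversion, combining the mass balance rows of Tables 2 and 3 for the 02/01 survey (illustrative arithmetic, not reproduced from the source):

```python
# Converting the subsurface solute flux into an equivalent concentration by
# dividing by the subsurface water flux. Units: (g/s) / (L/s) = g/L -> mg/L.
q_subsurface = 671                     # L/s, mass balance result, 02/2001 (Table 2)
flux = {"Cl": 900, "SO4": 327}         # g/s, subsurface contribution (Table 3)

for s, m in flux.items():
    print(s, round(m / q_subsurface * 1000), "mg/L")
# Cl ~1341 mg/L, SO4 ~487 mg/L: values of this kind correspond to the
# "Subsurface" points plotted in Fig. 5.
```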
[Figure 5 plot: calculated and measured sulfate versus chloride concentrations; Cl (mg/L) from 800 to 2800 on the horizontal axis, roughly 200 to 1200 on the vertical axis; symbols mark Western inflows, Hamadia well, Fishponds, Drainage, Saline Yarmouk, Subsurface, Alumot, and the 06/01 campaign.]

Fig. 5 - A comparison plot representing three different types of concentrations: (1) measured along the northern section of the Lower Jordan River, (2) measured in water sources that represent potential end-members, and (3) calculated values representing the chemical composition of the subsurface flow.

The Saline Yarmouk River, between sites #14 and #17 (Fig. 1), constitutes a unique hydrological configuration that assists in the identification of the distributed subsurface influx into the Lower Jordan River. The Yarmouk River is dammed some 8 km east of its confluence with the Jordan River and its water is diverted to the King Abdullah Canal (for the most part) and the Yarmuchim Reservoir (#13, Fig. 1). Although no tributary inflow exists beyond the dam, the nearly zero flow rate immediately downstream from the dam increases significantly along the Saline Yarmouk River towards its confluence with the Jordan River. While salinity upstream from the dam is very low (140 mg Cl/L), the salinity of the downstream water is high (>1000 mg Cl/L). It should be noted that a pumping station at the end of the Saline Yarmouk River (site #17) pumps most of the Saline Yarmouk surface water for fishery and irrigation uses. Hence the direct inflow from the Saline Yarmouk River is minimal and, therefore, has a minor impact on the water chemistry and flow rate of the Lower Jordan River. Nevertheless, Fig. 5 shows that the geochemical signature of the Saline Yarmouk River is consistent with the chemical modifications observed along the Jordan River, both upstream and downstream from the Yarmouk. Consequently, the Saline Yarmouk water may be considered as an analog of the Jordan River's unknown source. In addition to the consistency found in major chemical constituents such as chloride and sulfate, Farber et al. (2003) and Segal et al. (2003) showed that the isotopic composition and the concentration of nitrogen compounds of the Saline Yarmouk River (and of samples taken from some agricultural drainages) agree with the change observed in the chemistry of the Lower Jordan River. Farber et al. (2003) suggest that the distributed subsurface influx is derived from agricultural return flows, enriched with sulfate through mixing with natural groundwater sources or the addition of fertilizers.
Possible natural sources include shallow groundwater or water rising from deep formations, which may interact with surrounding rocks or brines. The existence of adjacent thermal wells and springs (Bajjali et al., 1997) supports such a hypothesis. Possible anthropogenic sources include leaking reservoirs, agricultural return flows and effluents. Note that neither the mass balance calculations nor the geochemical analysis can separate the different sources or distinguish between western and eastern subsurface sources; thus the calculated source is likely to represent a mix of several end-members.

SUMMARY AND CONCLUSION

The reported study represents the Lower Jordan River discharge and chemistry under base flow conditions during drought years. A decrease of the river flow rate much below the minimum allowable value is to be expected if the river-related chapters of the Israeli-Jordanian peace treaty were implemented. The treaty includes an increase in the over-all pumping rights, a reduction of wastewater disposal into the river, and the desalinization of the saline water that currently flows into the river. Calculation of the impact of these steps on the river flow and chemistry shows that, although some measures will improve the water quality, these steps should not be implemented without the allocation of an alternative water source. Should the river-related section of the Israeli-Jordanian peace treaty be implemented with no modifications, the Lower Jordan River is expected to dry up during base flow conditions. It was confirmed that at low flow rate conditions, the impact of groundwater seepage and agricultural return flows is significant. By exercising a combination of careful discharge measurements, a complete account of all surface inflows and outflows, mass balance calculations of both water and conservative constituents, and geochemical tools, we were able to characterize the distributed subsurface influx that affects the river water flow and chemistry. This was possible thanks to excellent cooperation between the Israeli, Jordanian and Palestinian research groups.

REFERENCES
1. Bajjali, W., Clark, I.D., and Fritz, P., 1997. The artesian thermal groundwaters of northern Jordan: insights into their recharge history and age. Journal of Hydrology, 192: 355-382.
2. Efrat, E., 1996. The Land of Israel - Physical, Settlement and Regional Geography. Tel-Aviv University, pp. 237-242, 245-251 (in Hebrew).
3. Farber, E., Vengosh, A., Gavrieli, I., Marie, A., Bullen, T.D., Mayer, B., Holtzman, R., Segal, M., and Shavit, U., 2003. Hydrochemistry and isotope geochemistry of the Lower Jordan River: Constraints for the origin and mechanisms of salinization. To be published in Geochimica et Cosmochimica Acta.
4. Hamberg, D., 2000. Flows in the Lower Jordan River. Report for the Office of National Foundation, 6130-d00.385, Tahal, Israeli Water Division office, Israeli Office of National Foundation, Tahal Consulting Engineering Ltd. (in Hebrew).
5. Hof, F.C., 1998. Dividing the Yarmouk's waters: Jordan's treaties with Syria and Israel. Water Policy, 1, 81-94.
6. Holtzman, R., 2003. Water Quality and Quantities Along the Jordan River: Salinization Sources and Mechanisms. M.Sc. thesis, Faculty of Agricultural Engineering, Technion, Israel (in Hebrew).
7. Marsh-McBirney Inc., 1990. Model 2000 Installation and Operations Manual. Marsh-McBirney Inc., Maryland, USA.
8. Orthofer, R., Daoud, R., Fattal, B., Ghanayem, M., Isaac, J., Kupfersberger, H., Safar, A., Salameh, E., Shuval, H. and Wollman, S., 2001. Developing sustainable water management in the Jordan Valley. Joint synthesis and assessment report. Final report to the European Community, No. ERBIClSCT970161.
9. Pillsbury, A.F., 1981. The salinity of rivers. Scientific American, 245, 54-65.
10. Salameh, E., 1996. Water Quality Degradation in Jordan. Royal Society for the Conservation of Nature, Amman, Jordan.
11. Salameh, E. and Naser, H., 1999. Does the actual drop in Dead Sea level reflect the development of water resources within its drainage basin? Acta Hydrochimica et Hydrobiologica, 27, 5-11.
12. Segal, M., Shavit, U., Holtzman, R., Vengosh, A., Farber, E., Gavrieli, I., Bullen, T., Mayer, B., and Shaviv, A., 2003. Nitrogen pollutants, sources and processes along the Lower Jordan River. Submitted to Journal of Environmental Quality.
13. Sontek, 2000. Sontek Technical Notes, Argonaut ADV Principle of Operation. San Diego, California, USA. TO ADD WEB ADDRESS.
14. Von-Borries, J., Hydro-Bios Apparatebau GmbH, Kiel-Holtenau, Germany. Personal communication, 2003.
15. Williams, W.D., 2001. Anthropogenic salinization of inland waters. Hydrobiologia, 466, 329-337.
CHALLENGES TO WATER MANAGEMENT IN THE MIDDLE EAST
MUNTHER J. HADDADIN, Ph.D.
Courtesy Professor, Oregon State University, and Visiting Professor, University of Oklahoma
Former Minister of Water and Irrigation, The Hashemite Kingdom of Jordan

Challenges to water management in the Middle East are essentially distortions in the water market and their treatment. Domestically, there are discrepancies between supply and demand; between the supplied and the metered water flows; between the real cost of water and the levied charges of water tariffs; between the tariff structure and the income distribution patterns; between the needed qualified manpower and the supply thereof; and between the awareness of consumers and the level of education needed to manage the resources optimally. Internationally, the challenges are focused on the terms of trade, the attraction of capital and know-how for development, the import and adaptation of technology, and the management of internationally shared watercourses and aquifers. Other challenges emanate from the direct links water has with energy and the environment. Treatment of these distortions varies in the countries of the region, but all find themselves resorting to the import of food commodities to bridge the gap between supply and demand. Shadow water plays an extremely important role in water management in the countries of the Middle East and in most arid and semi-arid countries. Its cost is far below the marginal cost of water in any of the countries of the region. Facing these challenges is a formidable task to be undertaken by water managers and by governments. This paper attempts to identify the major challenges and proposes an approach to face them. Key words: water market, shadow water, virtual water, soil water, water tariff, water threshold.
1. BACKGROUND
The Middle East is primarily an arid and semi-arid region. Most of its land area is desert or Badia. Except for the Fertile Crescent and the southwestern corner of the Arabian Peninsula, food production is not possible without irrigation. The variability of rainfall, both seasonally and inter-annually, poses a serious problem for the efficiency of water and land use, which is low already, and negatively impacts the productivity of rain-fed lands. The population growth, both natural and migration-induced, is among the highest in the world. A natural growth rate of 3.8% is not uncommon in some of its countries, and the involuntary migration of some of its peoples to neighboring countries exacerbated the imbalances in the population-resources equation. Jordan, for example, a recipient of several waves of refugees and displaced persons from Palestine, averaged an aggregate
annual growth rate of 6% during the second half of the last century. Open immigration of Jews to Israel created similar pressures on its resources. Countries of the region fall into the four income categories as defined by the World Bank[i]. Their water requirements are a function of their ability to achieve higher water use efficiencies, and of their food diets. Furthermore, the productivity of land and water in the countries of each category is a function of their ability to afford the hardware (water conveyance and controls) and to qualify the human resources needed to boost the productivity per unit land area and per unit water flow. Modern water and agricultural technology, while successfully transferred to some countries, has not taken root in some other countries of the region. The water legislation and institutional arrangements, while reasonably alike in most of the countries of the Middle East, leave much to be desired in the field of law enforcement and institutional modernization. Renewable water resources per capita range from very little in countries like Kuwait, Jordan and Palestine, to plenty in countries like Iraq. The population pressure on water resources is such that municipal water is pumped to the people once a week in some capitals like Amman, and less frequently in some other towns. Cities in the region like Damascus, Sana'a, Taiz, and all the Palestinian cities experience similar difficulties. All the countries of the region maintain a chronic deficit in their foreign trade in food commodities. Point source and non-point source pollution impair the usability of water resources in several of them. Irrigated agriculture in the recharge areas of aquifers degrades the water quality in those aquifers; and untreated or marginally treated municipal wastewater imparts negative impacts on the quality of the ground water and surface water resources into which the effluent is discharged. The region has few, but important, rivers: the Tigris, the Euphrates, the Shatt El Arab, the Orontes, the Jordan and the Nile. Despite their importance to the livelihood of the people in the riparian countries, none of them has an all-inclusive riparian agreement. The region further has multiple trans-boundary ground water aquifers, both renewable and fossil, and none of these aquifers has a riparian agreement to specify the share of each party, the rates of withdrawal, the joint management, and their protection.
2. DISTORTIONS IN THE WATER MARKET
Perhaps the toughest water challenge to cope with in the Middle East is the distortion of the water marketplace. There are gross imbalances in the population-water resources equation; distortions caused by government treasury subsidies embedded in the water tariffs; discrepancies between the real water cost and the ability of the economy to meet it; distortions in the income distribution patterns; a discrepancy between consumption patterns and the sustainability of water resources; a discrepancy between good neighborliness and practices on international watercourses and trans-boundary aquifers; and, among others, the persistent deficits that all the region's countries face in their foreign trade in food commodities.
2.1 IMBALANCES IN THE POPULATION - RESOURCE EQUATION
This is an expression of the demand-supply relationship. The supply side is a function of the water resources availability, and the demand side is a function of the
population, their consumption habits, and of water management. The demand side can be linked to the degree of economic and social progress that impacts the standard of living, the water-use efficiency, management skills, and the system of governance. On the population side of the equation, a basic factor in water demand, its natural and involuntary increase, coupled with improved health care and standards of living, is boosting the demand for water to unprecedented scales. Such a challenge exists in Palestine, Jordan, Yemen and, to a certain extent, Lebanon. The spatial distribution of population in practically all the countries of the region has been shifting to higher intensities in urban areas, resulting in higher demands for urban water and for wastewater collection and treatment. This shift has resulted in the diversion of freshwater resources, traditionally used in irrigation, to municipal uses. Amman, the capital of Jordan, Damascus and Beirut provide living examples. Improved living conditions, particularly in urban areas, are boosting food consumption per capita, thereby increasing the demand for irrigation water. On the resource side of the equation, several factors are decreasing the water supply: droughts and the inter-annual variability of rainfall, combined with the shortage of storage facilities; the degradation of water resource quality due to pollution by municipal and industrial wastewater; the loss of cultivable land to urbanization and the intrusion of farming into natural pastureland; and the erosion of riparian shares in international waters. The impact of the rainfall variations is inherent in the cultivation of rain-fed lands, reducing the production of winter crops, primarily cereals, and in the cost of building dams, where topography and geology allow, with storage capacities sufficient for longer-term storage. Multi-year storage dams exist in Egypt (the High Aswan Dam) and in Jordan (the Wala, Mujib and King Talal dams). In Israel, only Lake Tiberias provides surface water storage; it runs over in years of good rainfall but falls below its high level in lean years. Quality degradation has been experienced in both ground water and surface water. Jordan provides a living example in the Zarqa River basin and the Amman aquifer; Syria provides another in the Barada basin and the Ghuta aquifer. Mismanagement of ground water aquifers through over-pumping has resulted in serious declines of water tables and the subsequent flow of brackish water and seawater into the aquifers. Examples of seawater inflows exist in Israel and in the Gulf States. Declines of ground water tables have been witnessed in Saudi Arabia, Jordan, Syria and Yemen. The loss to urbanization of arable land that holds soil water is a problem common to countries such as Yemen, Israel, Jordan, Syria and Lebanon. Soil water, held in the soil through capillary action, is responsible for the support of rain-fed agriculture and natural pastures. Cities have been expanding by eating up cultivable lands. Examples are Amman, Irbid, Madaba, Rabba and others in Jordan; Damascus in Syria; and Beirut in Lebanon. While natural growth is a contributor to urban expansion, the movement of people to urban areas, voluntarily and involuntarily, has been the primary reason behind this problem over the past half-century. The irrigation water equivalent of rain-fed lands in the region is in the order of 200-250 millimeters (mm) of irrigation water[ii].
A loss of 1000 hectares to urbanization entails the loss of 2-2.5 million cubic meters of irrigation water per year. The loss of pastureland has been primarily due to the intrusion of marginal farming, and its water cost is about 25 mm.
The loss of 1000 hectares of pastureland entails the loss of 0.25 million cubic meters per year. The contribution of soil water to rain-fed agriculture is about 2000 × p, where p is the per capita share, measured in hectares, of agricultural rain-fed land that receives rainfall of an intensity in excess of 350 mm per season. The contribution of natural pastures is about 250 × h, where h is the per capita share in hectares of natural pasturelands (note (v) below). The erosion of water shares in international watercourses and aquifers also contributes to the reduction in water supplies. The lack of treaties between the riparian parties incites instability in water shares on the one hand and increases the chances of polluting the watercourses on the other. Examples of the first exist on the Euphrates and the Tigris, where Turkey, the upstream riparian, is expanding its uses of the headwaters to the dismay of the downstream riparian parties, Syria and Iraq. Examples of the second are seen on the Yarmouk, where wastewater from Syrian towns eventually finds its way to the river bed. The case of shared aquifers is also illustrated by the Yarmouk, where Syrian farmers take more water from the aquifer feeding the Yarmouk base flow than is allocated to Syria under the treaty between the two countries. More examples are visible in the aquifers feeding the Qweiq River and the Khabour River, both tributaries of the Euphrates, where Turkish farmers have been taking groundwater from the aquifers feeding these two rivers to the extent of eliminating their base flows. On the Jordan, Israel has been using the rightful shares of Lebanon, Syria and Palestine, a matter that has to be tackled in the peace negotiations. As regards the Nile, only Sudan and Egypt signed a treaty, in 1959, although the other riparian parties are now engaged in a dialogue under the Nile initiative sponsored by the World Bank. All the above factors contribute to the exacerbation of the imbalance in the population-water resources equation. The demand side is determined by the need to meet the municipal, recreational and industrial needs of the country under consideration, and the need to grow food. The demand for water extends to meet the requirements of other purposes such as environmental needs, transportation and power generation. Analysis done by the author for water demand[iii], excluding power generation and transportation, adopted the categories of countries as defined by the World Bank[i] and showed that the water needs per capita in each income category increase as the income decreases. This is because of the water conveyance, distribution and use efficiencies that the various categories can afford to attain. More recent research by the author updates that analysis and accounts for the agricultural demands and the demand for industrial and municipal needs[iv]. The analysis projects the demand based on a "virtual environment" in which a given country is capable of producing the goods and commodities it needs. Such demand is shown in Table 1 below for the four income categories of world countries. In this updated analysis, the agricultural water need increases as the income decreases, and the M&I (municipal and industrial) water need is proportional to the income. The amount of water needed to meet the requirement per capita is hereby referred to as the "water threshold". The demand level shown in Table 1 for the various countries represents the water threshold in each.
Table 1: Water demand per capita, or water threshold (m3/year).

Category       M+I               Agricultural   Water Threshold
High Income    100 + 260 = 360    940           1300
Upper Middle    85 + 165 = 250   1250           1500
Lower Middle    75 + 125 = 200   1500           1700
Low Income      55 +  65 = 120   1780           1900
A society is strained when water demand outstrips supply, or when supply is below the water threshold. Water strain, defined as the ratio between the water deficit and the water threshold, is eliminated when supply matches demand. The imbalance in the population-water resources equation is best displayed by the water strain sustained by each country. Negative strains indicate a water surplus. Table 2 shows the water strain in the countries of the region; water availability takes into account surface and ground water, and the soil water that exists in agricultural soils and pastures. Downward adjustments are made to the water threshold to account for the input of fisheries from the seas.
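As a minimal numerical sketch (not from the paper) of the definitions above, the strain and shadow-water computations reduce to a few lines; the figures below reproduce the Bahrain row of Table 2:

def water_strain(threshold: float, availability: float) -> float:
    """Strain = (threshold - availability) / threshold; negative means surplus."""
    return (threshold - availability) / threshold

def shadow_water(threshold: float, availability: float) -> float:
    """Deficit to be closed by imports (zero when supply meets the threshold)."""
    return max(threshold - availability, 0.0)

# Bahrain row of Table 2 (m^3/year per capita):
print(round(water_strain(1300, 282), 3))   # -> 0.783
print(shadow_water(1300, 282))             # -> 1018.0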
3. THE ROLE OF SHADOW WATER IN WATER MANAGEMENT
By virtue of the supply-demand situation, water prices should be high, because demand outstrips supply by wide margins in the majority of the countries of the Middle East (see water strain in Table 2). Despite the gross imbalance in the population-water resources equation, one does not see crisis-level situations in the countries of the Middle East. The credit goes to the import of food commodities, which closes the deficit in agricultural water. Municipal and industrial water deficits are managed through rationing, and subsidies mask the seriousness of the situation. In the oil-rich countries, the needs for municipal and industrial water are met through augmentation with desalinated water. The imbalance in the water equation is practically shifted to a parallel imbalance in the trade of commodities, and the municipal and industrial water shortage results in less sanitary conditions, social stress and a reduction in domestic industrial production. The agricultural and industrial water deficit that is closed by food and industrial imports is hereby termed "shadow water". Since weather conditions and technological capabilities in any given country would not be suitable for the production of all the necessary food and industrial commodities, one would have to create a virtual environment that allows the domestic production of all the needed commodities. The term "virtual water" has been used since 1996 to refer to water "embedded" in food imports, a notion that points to the water needed in the exporting country to grow the exported food. The term has been contested in the water literature and the author prefers to use the term "shadow water", which is defined as the indigenous water resources that would have to be used (in a virtual environment) to produce the imported commodities[v]. Quantities of shadow water do not necessarily equal the quantities of exogenous water used in the exporting country; rather, shadow water is the reflection, or the shadow, of that exogenous water.
Only one country in the region, Iraq, can attain strain-free status. However, all the countries, including Iraq, are witnessing deficits in food trade, indicating that they should improve their water-use efficiency and the management of their rain-fed cultivable land. The same countries face a deficit in the trade of industrial products as well. While this is understandable, its impact on foreign exchange expenditure has been the reason behind regulating industrial imports through various tools inconsistent with free trade. As the world enters an era of globalization and free trade, it becomes more important for the countries of the region to maximize the productivity of their water resources and compete on the world markets to earn foreign currency. In this regard, shadow water proves to be a strong competitor to indigenous water, even when the social gains from indigenous water utilization are added. Sound water management requires that shadow water be added to the water stock of these countries, and that all options of water allocation be assessed and weighed when domestic water allocation and reallocation decisions are made. Shadow water is an important cornerstone in the water policy of all the countries of the Middle East, and is a mechanism for the globalization of water resources.
3.1 DISTORTIONS IN THE WATER TARIFF
Water projects in the region have been implemented and operated by the government or by public institutions under its umbrella. Agricultural landowners are issued permits to drill tube wells and extract ground water for irrigation purposes. The developer, government or private, pays for the cost of development, operates the project and carries the operation and maintenance costs. To help implement the projects, subsidized credit is usually extended to the landowners by domestic institutions, and concessionary loans/credits are extended to the governments of Lower Middle and Low Income countries by international lending institutions. Despite conditions stipulated in international loans and credits, the operation and maintenance cost is not fully recovered, nor is the capital cost, with very few exceptions[vii]. This distortion in the water market is not conducive to attaining high efficiencies on the part of the users. However, in some countries of the region, consumption is curtailed through water rationing for lack of resources. Examples are clear all over Jordan and Palestine. Major cities in Yemen face the same problem. Damascus is witnessing municipal water shortages. In Jordan, rationing extends to the agricultural water in the Jordan Valley. However, even with the rationing systems, subsidies rendered by the government treasury are market distortions and should be eliminated to assure the performance of free markets. The subsidies to the water sector are not peculiar to the countries of the region. They are spread all over the world, especially in advanced economies. While municipal and industrial water tariffs may be set to recover actual cost, agricultural water tariffs are not. The reasons are social, political and strategic considerations.
Table 2: Water threshold, water availability, water strain and shadow water in the countries of the region (m3/year per capita).

Country    Water Threshold(a)   Water Availability   Water Strain   Shadow Water(b)
Bahrain    1300                 282                  0.783          1018
[rows for the other countries of the region are illegible in this reproduction]

Source: note (v) above. (a) Adjusted downward to account for the contribution of fisheries from the salty seas. (b) Adjusted for the availability of desalinated water.
Even where water is virtually free, as in the case of rain-fed agriculture, farmers are subsidized in countries of the advanced economies. Due to the population densities in urban areas, and the tendencies on the part of water managers to favor supply management in lieu of demand management, more and more water resources have had to be made available for use in urban areas. Freshwater used in agriculture close to urban areas was diverted for urban use, with adverse environmental and social impacts. More remote sources were tapped and brought in to meet the increasing demands of urban areas. Since water is a generator of wealth, the diversion of these remote resources entailed a parallel skewed distribution of wealth. Examples exist in the case of Amman, Irbid, Karak and Tafileh in Jordan; Aleppo in Syria; and Abha, Riyadh, Breidah and Hayel in Saudi Arabia, among others[viii]. The real cost of the water supply from remote areas is high compared to the per capita income in all of these countries. If a portion of the annual income of 2% is set for domestic water, as per the recommendation of the World Bank, then the cost that a Jordanian[ix], for example, can afford to pay for water is about $30 equivalent per year. The cost of supplying water to Amman from the Jordan Valley averages $0.80 per cubic meter, which means that a resident of Amman can afford to pay for 37.5 cubic meters per year before distribution in the city. With 55% unaccounted-for water, and a $0.2 per cubic meter distribution cost, the rate of consumption within the affordability limit would be 13.5 cubic meters per capita per year, a very modest amount indeed. When the spectrum of income distribution is
examined, the share that the lower-income people can afford to pay is much less. The situation in the Palestinian territories is much harder. The method that is followed to enable the lower-income population to afford water and wastewater charges is to build a tariff structure that employs cross-subsidies and sets them to help the lower consumption brackets only, as is being done in Jordan. Another way is to improve operational and distribution efficiencies by replacing the old networks, responding to repair calls, and minimizing the manpower per unit flow. One way of achieving that is to adopt management contracts whereby the operation and maintenance chores are contracted out to private specialized companies. However, with all these measures, the cost of delivery of municipal water, both capital and operation and maintenance, is beyond the affordability limit of Jordanians with the current strength of the economy. Higher rates of economic growth should be achieved to improve the affordability level of the countries of the Low and Lower Middle Income categories.
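A minimal sketch of the affordability arithmetic above, using only figures quoted in the text (the treatment of unaccounted-for water as a simple multiplicative loss is an assumption):

income = 1500.0                      # $/capita/year (note ix)
budget = 0.02 * income               # 2% of income -> $30/year for domestic water

supply_cost = 0.80                   # $/m^3, Jordan Valley to Amman
dist_cost = 0.20                     # $/m^3, in-city distribution
ufw = 0.55                           # unaccounted-for water fraction

print(budget / supply_cost)                              # 37.5 m^3/year before distribution
print(budget / (supply_cost + dist_cost) * (1 - ufw))    # 13.5 m^3/year within the limit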
4. ADDITIONAL CHALLENGES
4.1 LINKAGE BETWEEN WATER, ENERGY AND THE ENVIRONMENT
Water is strongly linked to energy and the environment. Waterfalls have been utilized to generate energy since ancient times. Energy is utilized in modern times to generate sweet water from salty water through various methods. As a matter of fact, rainfall, the source of all renewable naturally occurring water, originates in the evaporation from the sea by solar energy, and the clouds thus formed are driven to the land by wind energy. Today, it is hardly possible to use water without the need for energy to drive the pumps. The slogan "Water for Energy", and its inverse, "Energy for Water", is true and valid. The water-energy linkage has environmental concerns. Water has to be protected against environmental degradation, and it is needed to maintain a clean environment. The slogan "Water for the Environment" is true, and so is its inverse, "Environment for the Water". Dams built to retain water and generate power have adverse impacts on people and fish; the desalination of salty water produces brines of high salt content that must be safely disposed of. After all, no life is possible without water, and life forms are the essence of environmental concerns. There is no life without energy either; water and food provide the energy for the cells of life to operate, and water washes the cells of human and animal bodies. Water managers have to pay close attention to the linkages with energy and the environment for the sake of efficient water service, the protection of water resources, and the mitigation of the negative impacts of water projects.
4.2 MANAGEMENT OF GROUND WATER AQUIFERS
Groundwater aquifers are mismanaged all over the Middle East. There is hardly an aquifer that is not over-pumped. Several aquifers are contaminated by return flows of agricultural drainage water from farms developed within the recharge areas of aquifers. Two things could be pursued:
a) Enacting proper legislation to regulate the pumping from aquifers, and implementing the legislation faithfully. Cases of users exceeding the pumping rates specified in their permits are abundant in Jordan, Syria and Yemen.
b) Licensing of well drilling and surveillance of drilling rigs and operations. Rigs drilling wells under cover of night, in defiance of legislation, roam around freely in countries such as Yemen and Syria. Jordan is putting the squeeze on them.
4.3 TECHNOLOGY TRANSFER
Modern technology improving the infrastructure and its operations should be pursued and imported to maximize water-use efficiency. Agricultural technologies should be transferred as well, to improve the productivity per unit flow of water. Know-how to improve the quality of products and to exercise quality control needs to be mastered to improve the competitiveness of indigenous products in their domestic and export markets.
4.4 EDUCATION AND TRAINING
While the slogan "Water for the People" is true and valid everywhere, the reverse slogan, "People for the Water", is equally true and valid. Skilled people are needed to manage water resources and to operate the water systems competently. Modern technology puts new equipment on the market: control facilities, water-saving devices, and the like, and water managers and operators, including users, have to be made aware of these developments. This can only be done through carefully designed educational packages and extension programs. Continuous education and on-the-job training for water managers and users are crucial in the Middle East environment, more so than in other places where aridity is not so much of a problem.
4.5 PROTECTION AGAINST SABOTAGE
Water is a popular solvent. It has to be purified of contaminants before it is used for domestic purposes. Water systems are vulnerable to sabotage, either by blasting to incur damage to the systems, or by harming the consumers through the injection of materials harmful to human and animal health. Surveillance systems for the security of the water systems and the safety of the water for its intended use should be set up, more so these days than before in light of the war on terrorism. It is important to note that aquifers can be poisoned by throwing poisonous substances into a tube well, which could also be the case with surface water. Detection field equipment and operators should be assigned to test the outflow from water resources.

CONCLUSION

Distortions in the water marketplace constitute the major challenge in water management in arid and semi-arid regions. Shadow water, the reflection in the importing
country of the exogenous water used in commodity-exporting countries to produce the exported commodities, plays a significant role in attaining equilibrium between water demand and supply in the arid and semi-arid countries. Soil water, responsible for the support of rain-fed agriculture and natural pastures, is a significant component of water resources availability and should be counted on the supply side of the water equation. The water situation in the countries of the Middle East is analyzed and their water thresholds defined. All of these countries, with the exceptions of Iraq and Syria, rely on shadow water for their water equilibrium. Industrial and agricultural shadow water has been calculated for each of these countries, and the water strain in each defined.
NOTES
(i) Israel, Qatar, the United Arab Emirates, Bahrain, and Kuwait belong to the High Income category; Saudi Arabia, Oman, and Lebanon belong to the Upper Middle Income category; Jordan, Syria, Iraq, Egypt, and Palestine belong to the Lower Middle Income category; and Yemen belongs to the Low Income category. Visit http://www.worldbank.org/data/countryclass/classgroups.htm
(ii) Munther J. Haddadin, "Water Issues in the Middle East - Challenges and Opportunities," Water Policy Journal, Volume 4, 2002, pp. 205-222.
(iii) Ibid.
(iv) Munther J. Haddadin, "Exogenous Water: A Conduit to Globalization of Water Resources," Proceedings of the International Expert Meeting on Virtual Water Trade, Delft, The Netherlands, 12-13 December 2002, Research Report Series No. 12, edited by A.Y. Hoekstra, IHE Delft, February 2003, pp. 159-169.
(v) Munther J. Haddadin, "Shadow Water in Lieu of Virtual Water," a paper to be submitted to the Prince Sultan Bin Abdul Aziz Prize, September 2003.
(vi) Munther J. Haddadin, "Shadow Water: Significance and Quantification," a paper submitted for publication in Water International, March 2003.
(vii) Jordan levies water tariffs that recover the operation and maintenance cost and part of the capital cost.
(viii) Water is pumped to Amman from the Jordan Valley over a total dynamic head of 1350 meters, and to Irbid from the Jordan Valley over 650 meters. Municipal water is pumped to Aleppo from the Euphrates. It is pumped to the Abha-Khamees Msheit towns on the mountains of Asir from a desalination plant on the Red Sea coast in Tihama. Riyadh gets water from a desalination plant on the Gulf, and so on.
(ix) The per capita income in Jordan is around $1500 equivalent per year.
SEAWATER INTRUSION INTO THE GAZA COASTAL AQUIFER AS AN EXAMPLE FOR WATER AND ENVIRONMENT INTER-LINKED ACTIONS
S. SOREK
Ben-Gurion University of the Negev, J. Blaustein Institutes for Desert Research, Institute for Water Sciences and Technologies, Environmental Hydrology & Microbiology; Mechanical Engineering, Pearlstone Center for Aeronautical Studies, Beer Sheva, Israel
V. BORISOV
Ben-Gurion University of the Negev, J. Blaustein Institutes for Desert Research, Institute for Water Sciences and Technologies, Environmental Hydrology & Microbiology, Israel
A. YAKIREVICH
Ben-Gurion University of the Negev, J. Blaustein Institutes for Desert Research, Institute for Water Sciences and Technologies, Environmental Hydrology & Microbiology, Israel
A. MELLOUL
Water Commission, Hydrological Service, Jerusalem, Israel
S. SHAATH
Water Resources Consultant, Gaza City, Gaza Strip

ABSTRACT

The Gaza Strip coastal aquifer is under severe hydrological stress due to over-exploitation. Excessive pumping during the past decades in the Gaza region has caused a significant lowering of groundwater levels, altering, in some regions, the normal transport of salts into the sea and reversing the gradient of groundwater flow. The sharp increase in chloride concentrations in groundwater indicates intrusion of seawater and/or brines from the western part of the aquifer near the sea. Simulations over cross-sections and horizontal planes were conducted concerning the problem of saltwater intrusion in the Khan Yunis portion of the Gaza Strip phreatic coastal aquifer. The latter simulation approach is of particular interest when assessing the effect of different regional pumping scenarios on the groundwater level and its quality. After calibrating the models for aquifer parameters and boundary conditions, we investigated predictions resulting from various pumping scenarios using the actual pumping intensity from the year 1985 and extrapolating on the basis of 3.8% annual population growth. Results show a considerable depletion of the groundwater level and intrusion of seawater due to excessive pumping. The saltwater intrusion due to excessive pumping is only one aspect to consider in the inter-linked actions associated with water and environmental disciplines. We show a
block diagram depicting the inter-relations between water and environmental disciplines that can be investigated in terms of optimizing the decision-making processes.

MODELING OF THE VARIABLE DENSITY FLOW REGIME

INTRODUCTION

During the last decades numerous models have been elaborated to simulate density-dependent water flow and solute transport in porous media. These concern sharp-interface models (e.g. Bear and Dagan 1964; Bear 1972; Bear and Verruijt 1987) and miscible advective-dispersive solute transport models (see survey in Bear et al. 1999). Two-dimensional (2-D) vertical cross-section models (e.g., Voss, 1984; Sanford and Konikow, 1985) do not simulate the effect of different pumping/recharge scenarios distributed over the horizontal plane. Three-dimensional (3-D) models (e.g. Huyakorn et al., 1987; Sauter et al., 1993; Gambolati et al., 1999) allow the most rigorous formulation of saltwater intrusion problems. However, some difficulties can be encountered when dealing with the simulation of large-scale aquifers. These are, for example: the data availability to perform model calibration; the numerical problems associated with oscillations and numerical dispersion when solving an advective-dispersive equation; and the excessive computer time consumption arising from the large number of grid nodes or blocks. Our objective when developing the 2-D horizontal plane variable density flow and transport model (Sorek et al. 2001) was to account for the hydrodynamic dispersion and yet not be subject to the aforementioned problems associated with 3-D models and, unlike the 2-D cross-section, to allow areal simulation for regional water management. The MEL2DSLT code (Sorek et al. 1999, 2000) was based on our previous development (Bear et al. 1997) concerning the Modified Eulerian-Lagrangian (MEL) concepts and on averaging the 3-D governing equations, along the vertical direction, over the saturated zone (Sorek et al. 2001).

APPLICATION TO THE GAZA STRIP COASTAL AQUIFER

We first verified the performance of the MEL2DSLT code for a 2-D horizontal phreatic aquifer against the SUTRA code (Voss, 1984), which simulates the saltwater intrusion problem for an unsaturated-saturated cross-section (i.e. averaged along the coastal line direction). To overcome the differences between the two configurations, we imposed flow boundary conditions that generated a 1-D (horizontal, inland direction) flow and transport (Sorek et al. 1999, 2001). To enable the comparison for a phreatic aquifer, we used the pressure distribution obtained by SUTRA and found h, the groundwater level, at the elevation where the pressure vanishes (p = 0). We also averaged the concentrations obtained by SUTRA over the thickness of the saturated zone. Figure 1a demonstrates a good concordance between the groundwater levels simulated by the SUTRA and MEL2DSLT codes. The averaged concentration profiles produced by MEL2DSLT also prove to have very little deviation from those obtained by SUTRA (Figure 2). As demonstrated by the solution obtained with SUTRA (Figure 2) for a vertical cross-section, a significant gradient of concentration is noticed. We note that the estimated saltwater depth follows the 0.5 iso-concentration line.
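As an illustrative sketch of this vertical post-processing (synthetic arrays, not SUTRA output), the water table can be located where the pressure profile vanishes, and the concentration averaged over the saturated thickness:

import numpy as np

def water_table_elevation(z: np.ndarray, p: np.ndarray) -> float:
    """Elevation h at which the pressure profile crosses p = 0.

    Assumes z is ascending and p decreases monotonically with z, so the
    reversed arrays are ascending in p as np.interp requires."""
    return float(np.interp(0.0, p[::-1], z[::-1]))

def depth_averaged_concentration(z: np.ndarray, c: np.ndarray, h: float) -> float:
    """Average of c over the saturated zone (z <= h), trapezoidal rule."""
    sat = z <= h
    zs, cs = z[sat], c[sat]
    return float(np.trapz(cs, zs) / (zs[-1] - zs[0]))

# Hypothetical vertical profile: hydrostatic pressure with the water table at
# z = 1.2 m and a salty layer below z = -40 m (relative concentration units).
z = np.linspace(-80.0, 5.0, 200)
p = 1000.0 * 9.81 * (1.2 - z)
c = np.where(z < -40.0, 1.0, 0.1)

h = water_table_elevation(z, p)
print(h, depth_averaged_concentration(z, c, h))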
Figure 1. Comparison between simulations with SUTRA and with MEL2DSLT after a time period of 50 years: (a) the flow simulation and (b) the transport simulation, for boundary groundwater levels of h = 0.5 m and h = 1.5 m at y = 500 m.

Figure 2. Concentration profiles obtained by SUTRA in a vertical cross-section after 50 years, and the MEL2DSLT estimate of the saltwater depth.
We then investigated the seawater intrusion problem in the coastal aquifer of the Gaza Strip, focusing on the Khan Yunis region (Figure 3). The spatial distribution of the pumping wells in the Khan Yunis area is delineated in Figure 4. The model was calibrated using hydrogeological information for the years 1985 to 1991, obtained from a rectangular domain, limited from the North-West by the coastal line from strips 83 to 86 (Figure 3) and stretching 8 km inland to the North-East. Actually, Figures 3b and 4 demonstrate the characteristics for which we developed the MEL2DSLT code, i.e., to enable a 2-D horizontal plane assessment of different regional stress scenarios. In the case of a multi-layered aquifer, we address the possible discontinuities in fluid velocities (and steep gradients of the aquifer parameters) by the MEL algorithm. Note, however, that with the current MEL2DSLT code, such discontinuities are accounted for only through the 2-D horizontal plane.
Figure 3. General view and a typical hydro-geological cross-section: (a) the study area, with hydrologic cells, the density of wells per square km in some hydrologic cells, and the line A-A' of the hydrogeologic section; (b) the hydrogeological section at strip nr. 85 (N-W to S-E), marking clay and shale, wells and sub-aquifers against the distance from the sea coast (km).

Figure 4. Stipulated management and boundary conditions at the Khan Yunis region: the distribution of wells (distance from sea, km; y = 0 corresponds to strip 83), distinguishing pumping wells from pumping wells stipulated as injection wells after the year 1996.

We chose to calibrate the model using information obtained from a rectangular area, limited from the North-West by the coastal line from strips 83 to 86 (Figure 3a) and stretching 8 km inland to the North-East. This area (8 km × 8 km) was divided into rectangular cells of 200 m × 200 m each, altogether 41 × 41 = 1681 nodes. Information regarding concentration values was scarce and not reliable. We therefore calibrated the model using mainly measured groundwater levels. These exist for the years 1985 to 1991, as well as values of pumping and natural replenishment. These data were not provided for all wells and were not observed at the same time. To overcome this, we chose a time step of one month to represent the diversity
of observations at different times. Altogether 12 (months) × 7 (years) = 84 files were completed. These can be considered as "slices" of a random field of groundwater levels at 84 time points. Information concerning the random field of the groundwater levels was thus assembled into 84 files, each representing the distribution of the groundwater levels in a particular month of the years 1985 to 1991. We eliminated an inherent spatial plane distribution of a linear time trend in the random distribution of the groundwater levels (e.g., due to the anthropogenic effects prevailing in the study area); a minimal sketch of this detrending step is given after Figure 5. We then evaluated the spatial distributions of the mean of the initial groundwater levels and of the chloride (Cl) concentrations at the beginning of the year 1985. Dirichlet conditions for the hydraulic head and for the Cl concentration were assigned at the boundaries. These were the evaluated mean values based on observations in time along the boundaries. For the sake of obtaining the estimated groundwater levels, the simulation was conducted with a specific yield of 0.3 and a porosity of 0.35. For the averaged specific discharge we used a longitudinal dispersivity of 25 m, a transversal dispersivity of 0.5 m, and an apparent dispersivity (Sorek et al. 2001) of 2.7 m. We also accounted for an observed regional average annual Cl concentration of 110 mg/kg associated with natural replenishment. The comparison between the observed groundwater level values for 1985 and 1991 and the simulated ones is depicted in Figure 5. The resulting concentration distribution is compared to the observed one (the latter is available only from 800 m inland) and delineated in Figure 6.

Figure 5. Samples of calibration maps of groundwater levels: (a) Khan Yunis, December 1985; (b) Khan Yunis, December 1991; each panel plots observed mean head values and the simulated first moment for head against the distance from the sea (m).

Initial distributions of groundwater levels and concentrations were estimated by a steady-state simulation without source/sink stresses. With the obtained initial conditions, we implemented various pumping scenarios (Figure 7) based on the actual pumping intensity from the year 1985, extrapolated on the basis of 3.8% annual population growth.
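A minimal sketch of the detrending step mentioned above, assuming a hypothetical array of 84 monthly 41 × 41 groundwater-level grids (illustrative code, not the authors'):

import numpy as np

def detrend_linear(levels: np.ndarray) -> np.ndarray:
    """Remove a least-squares linear trend in time at every grid node.

    levels: array of shape (n_months, ny, nx) of groundwater-level grids."""
    t = np.arange(levels.shape[0], dtype=float)
    flat = levels.reshape(levels.shape[0], -1)      # (84, 1681) for a 41 x 41 grid
    slope, intercept = np.polyfit(t, flat, deg=1)   # each of shape (1681,)
    trend = np.outer(t, slope) + intercept
    return (flat - trend).reshape(levels.shape)

# Hypothetical stand-in for the 84 monthly maps (1985-1991) on the 41 x 41 grid:
rng = np.random.default_rng(0)
levels = rng.normal(size=(84, 41, 41)).cumsum(axis=0)

detrended = detrend_linear(levels)
mean_initial_field = detrended[0:12].mean(axis=0)   # e.g., a mean field for 1985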
The stipulated pumping scenarios were the driving forces for the predicted regional distributions of groundwater levels (Figure 8a) and chloride concentrations (Figure 8b). We note that groundwater levels during the next decade will be considerably depleted. This will induce further seawater intrusion into the coastal aquifer.

Figure 6. Observed distribution of concentration values compared with those simulated (Khan Yunis, year 1991; plotted against the distance from the sea, m).

Figure 7. Scenarios of the total pumping intensities for the Khan Yunis region.

Figure 8. Predicted (a) groundwater levels and (b) chloride concentrations (mg/kg) resulting from different pumping scenarios (distance from the sea, m; y = 0 corresponds to strip 83): (A) pumping increases during 1950 to 2006 at a rate of 3.8%/year; (B) pumping increases during 1950 to 1996 at a rate of 3.8%/year and remains constant during 1997 to 2006; (C) pumping increases during 1950 to 1996 at a rate of 3.8%/year and remains constant during 1997 to 2006 everywhere, except at a 2 km strip from the sea, where it is shut; (D) pumping as in case (C), with additional injection of fresh water at a 400-600 m strip from the sea during 1997 to 2006.
RANKING AND SELECTION OF WATER AND ENVIRONMENT INTER-LINKED ACTIONS

The problem of saltwater intrusion and its effect on groundwater and the environment is only one aspect of the general inter-linked water and environment issues.
Figure 9. Map of Water and Environment Inter-linked Disciplines.

The map of water and environment inter-linked disciplines, as depicted in Figure 9, actually presents a multi-criteria environment. In view of the ongoing process of depletion of fresh water resources, we maintain that the hydrology of the next millennium should rely on the central theme of optimal water management (see Figure 9) in terms of quantity and quality.
The map delivers the notion that different water-related development alternatives affect the environment. The choice of action scenarios, in view of different criteria, is a decision task at different hierarchy levels. Decision-making processes rely in certain cases on non-unique definitions of the subject to be evaluated. Such a situation is termed "fuzzy" (Saaty, 1978), i.e., a non-specific question may generate a fuzzy set of data that can mislead the decision-maker. A possible approach to minimizing the fuzziness in the system when evaluating alternatives (Saaty, 1981) determines the system by constructing hierarchical branching criteria common to all tested options. Other approaches for ranking alternatives in a multi-criteria situation rely, e.g., on summing grades using relative weights associated with a hierarchical structure of the criteria (Hwang and Yoon, 1981) or on adopting criteria ranking without using quantitative values (Hwang and Yoon, 1981; Crama and Hansen, 1982). Determination of the importance and grades of the criteria can be achieved by: 1) using the minimum and maximum values as the basis of selecting a grading scale; 2) fixing an alternative as the one with the optimal criterion and comparing the others in relation to it; 3) comparing alternatives and criteria pairs involving graded scales and eigenvectors (Saaty, 1974; Hwang and Yoon, 1981; a numerical sketch of this method follows the conclusion below); and 4) adopting a grading approach for each given criterion, which will lead to a normalized scheme composed of several ranking methods. Once the ranking of the different alternatives is established, one may generate different optimal paths (i.e. choosing the order of a set of blocks as in Figure 10) from an initial development activity to a prescribed activity goal.

CONCLUSION

Simulation of the seawater intrusion problem in the Khan Yunis section of the Gaza Strip coastal aquifer was executed with an areal variable density model, enabling the evaluation of different regional water management scenarios. Simulations were based on the actual pumping intensity from the year 1985, extrapolated on the basis of 3.8% annual population growth. Results show a considerable depletion of the groundwater level and intrusion of seawater due to excessive pumping. A block diagram was introduced to present the inter-relations between water and environmental disciplines. This describes the various actions facing the decision-maker when choosing an alternative for water development scenarios in a multi-criteria situation. A discussion was presented on some ranking possibilities aiming at quantitative decision criteria. After establishing the relative ranking of the different water development scenarios, one may proceed by developing an integrated decision support program that can yield an optimal path from a prescribed initial activity toward a determined activity goal.
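As a minimal sketch of ranking method 3 above (pairwise comparisons with eigenvectors, after Saaty), the priority weights of three hypothetical alternatives can be taken as the principal eigenvector of a reciprocal comparison matrix; the matrix entries below are illustrative, not from the paper:

import numpy as np

# Reciprocal pairwise-comparison matrix for three hypothetical alternatives
# (illustrative values on a 1-9 judgment scale).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))          # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                              # normalized priority weights
print(w)                                  # approx. [0.65, 0.23, 0.12]

# Consistency index, CI = (lambda_max - n) / (n - 1); values near zero
# indicate nearly consistent judgments.
n = A.shape[0]
print((eigvals.real[k] - n) / (n - 1))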
REFERENCES
1. Bear, J. (1972), Dynamics of Fluids in Porous Media, Amer. Elsevier Publishing Company, New York/London/Amsterdam.
2. Bear, J. and Dagan, G. (1964), Moving interface in coastal aquifers, ASCE Journal of the Hydraulics Division, 90(HY4), 193-216.
3. Bear, J. and Verruijt, A. (1987), Modeling Groundwater Flow and Pollution, 414 p., D. Reidel Publishing Company, Dordrecht.
4. Bear, J., Sorek, S. and Borisov, V. (1997), On the Eulerian-Lagrangian formulation of balance equations in porous media, Numerical Methods for Partial Differential Equations, 13(5), 505-530.
5. Crama, Y. and Hansen (1982), An Introduction to the Electre Research Program, Essays and Surveys on Multiple Criteria Decision Making, Université de Liège, Belgium, Springer Verlag, 31-41.
6. Gambolati, G., Putti, M., and Paniconi, C. (1999), Three-dimensional model of coupled density-dependent flow and miscible salt transport in groundwater, in: Bear, J., Cheng, A.H.-D., Sorek, S., Herrera, I. and Ouazar, D. (eds.), Sea Water Intrusion in Coastal Aquifers - Concepts, Methods and Practices, Kluwer Academic Publishers, Dordrecht/Boston/London.
7. Huyakorn, P.S., Andersen, P.E., Mercer, J.W., and White, H.O., Jr. (1987), Saltwater intrusion in aquifers: Development and testing of a three-dimensional finite element model, Water Resour. Res., 23(2), 292-312.
8. Hwang, C.L. and Yoon, K. (1981), Multiple Decision Making Methods and Applications, Springer Verlag, Berlin.
9. Sanford, W.E. and Konikow, L.F. (1985), A two-constituent solute transport model for ground water having variable density, USGS Water-Resources Investigations Report 85-4279, 41 p.
10. Saaty, T.L. (1974), Measuring the fuzziness of sets, J. Cybernetics, 4(4), 53-61.
11. Saaty, T.L. (1978), Exploring the interface between hierarchies, multiple objectives and fuzzy sets, Fuzzy Sets and Systems, 1, 57-68, North Holland Publishing.
12. Saaty, T.L. (1981), The Analytic Hierarchy Process, McGraw-Hill.
13. Sauter, F.J., Leijnse, A. and Beusen, A.H.W. (1993), METROPOL User's Guide, Report nr. 725205.003, National Institute of Public Health and Environment Protection, Bilthoven, The Netherlands.
14. Sorek, S., Borisov, V., and Yakirevich, A. (1999), Modified Eulerian-Lagrangian (MEL) method for density dependent miscible transport, in: Bear, J., Cheng, A.H.-D., Sorek, S., Herrera, I. and Ouazar, D. (eds.), Sea Water Intrusion in Coastal Aquifers - Concepts, Methods and Practices, Kluwer Academic Publishers, Dordrecht/Boston/London.
15. Sorek, S., Borisov, V., and Yakirevich, A. (2000), Numerical modeling of coupled hydrological phenomena using the Modified Eulerian-Lagrangian method, in: Theory, Modeling and Field Investigation in Hydrogeology: A Special Volume in Honor of Shlomo P. Neuman's 60th Birthday, edited by D. Zhang and C.L. Winter, Geological Society of America, Special Paper 348, 151-160.
16. Sorek, S., Borisov, V., and Yakirevich, A. (2001), Two-dimensional areal model for density dependent flow regime, Transport in Porous Media, 43, 87-105.
17. Voss, C.I. (1984), A finite element simulation model for saturated-unsaturated, fluid-density-dependent groundwater flow with energy transport or chemically-reactive single species solute transport, Water Resources Investigations Report 84-4369, US Geological Survey, 409 p.
10. THE PLANETARY EMERGENCIES: ITALIAN CIVIL PROTECTION
11. THE CULTURAL PLANETARY EMERGENCY FOCUS ON TERRORISM: MOTIVATIONS
REPORT OF THE OPEN FORUM DEBATE ON TERRORISM

AHMAD KAMAL
Senior Fellow, United Nations Institute of Training and Research, New York, USA
1. At the 30th Session of the Erice International Seminars, the Open Forum Debate among the participants on the different aspects of Terrorism reiterated the Conclusions and Recommendations of the 29th Session, held in May 2003, which had focused its attention more specifically on this global scourge.
2. The participants felt that Terrorism had to be seen in a holistic manner in all of its sub-aspects of Motivations, Tools and Counter-Measures, and World-Wide Stability. While all three of these aspects have to be duly addressed, the problem appears to really lie in the relative weight that should be assigned to each, and in removing any inconsistencies that may arise because of greater attention being devoted to any one over the others.
3. In the past, a lack of education and poverty have been cited as significant root causes. An inconsistency appears to exist between the desire to improve educational access for students from developing countries and a policy of restricting student visas. At the same time, it appears that poverty in terms of per capita income as such is not as important a direct contributing factor as was initially believed; rather, it is dwindling expectations that are creating the conditions in which Terrorism flourishes.
4. While the great effort put into developing and refining Tools and Counter-Measures has perhaps contained the danger of a rapid surge in terrorist incidents, the financial, social, and psychological costs of those Tools and Counter-Measures have assumed unexpectedly large proportions. The costs of the impediments to commerce and travel, and of the restrictions on civil liberties, have yet to be adequately assessed.
5. There is a general conviction that Tools and Counter-Measures to stop Terrorism cannot succeed without understanding and addressing Motivations. These include known political and historical root causes, perceived injustices, and ideological aspirations, all of which render otherwise normal human beings susceptible to the message and methods of Terrorism.
6. Meanwhile, in developing Tools and Counter-Measures, many essential civilizational values and principles have had to be suspended or abandoned under an understandable sense of emergency. It is to be hoped that this is a temporary phenomenon, and not a retrogressive erasure of principles and values developed over centuries of human endeavour.
7. Other than the obvious root causes of political, educational, and social frustrations and despair, an essential new element that has been identified is the need for cultural tolerance and understanding. Unfortunately, the opposite is visible in the noticeable growth of cultural intolerance. Even more disturbing is the fact that this cultural intolerance is gaining respectability even as it replicates itself around the world. This vicious cycle is feeding on itself, and unless the trend is speedily reversed, it will provide fertile ground for the further spread of Terrorism.
8. Another disturbing tendency is to oversimplify the complexity of the root causes of Terrorism by passing judgment on Islam. Islam is currently undergoing an internal debate among its own adherents about the importance of moderation and the rejection of extremism, and this is a debate that it will have to resolve itself. Forcing this debate by military means can only lead to a clash of cultures, which is counter-productive.
9. While statistical data on incidents of Terrorism, and on the development and reliability of Tools and Counter-Measures, are widely available, scientific data on the political and territorial root causes, and on the psychological perceptions of frustration and despair, remain meagre. This is a specific gap that the World Federation of Scientists could try to bridge.
10. By its very nature, the World Federation of Scientists can also promote inter-faith and inter-cultural dialogue and understanding, as has always been the special historic mission of scientists. The participants look forward to further initiatives in Erice in this field.
12. ENERGY
ENERGY IN DEVELOPING COUNTRIES: IS IT A SPECIAL CASE?
DR. HISHAM KHATIB, FIEEE
Honorary Vice Chairman, World Energy Council, Amman, Jordan

Energy in developing countries is a special case. Its performance statistics and problems differ from those of industrialized countries and have to be treated differently. However, "developing countries" (DCs) covers a wide spectrum of countries with many divergent circumstances. This paper starts by defining what we mean by a DC and then proceeds to tackle the energy and sustainable development issues facing these countries.
WHAT DO WE MEAN BY A DEVELOPING COUNTRY?
It is not easy to categorize countries into industrialized and developing countries. Although all OECD (Organization for Economic Co-operation and Development) countries (probably with one exception) can be termed industrialized countries (ICs), not all countries outside the OECD, East Europe and the former Soviet Union (FSU) states can be termed developing countries. A few countries, particularly in the Pacific region of Asia, have been able to achieve rapid growth and attain the status of some OECD countries in terms of economic advancement. These newly industrialized countries (NICs) include Taiwan, Korea, Hong Kong and Singapore. Such countries can no longer be termed DCs. Many international attempts have been made to categorize countries in relation to their national per capita income and human development. The Global Environmental Facility (GEF), which was formed in 1990 by the World Bank and United Nations agencies and was intended to assist DCs in overcoming environmental problems with regional and global implications, chose to restrict its assistance to countries with an annual per capita income of no more than $4,000 at the end of 1989. By this criterion, developing countries include all countries of Latin America, Africa and Asia (excluding East Europe and the rich oil-exporting countries of the Arabian Peninsula, although in many respects some of these are still developing countries). Still, this is a wide spectrum. For the purposes of this paper we shall use the World Bank criteria for countries with low income and lower middle income. These are shown in the table below (ref. 1):
Gross national income (GNI), 2000

  Income group (GNI per capita)    Economies   GNI ($ billions)   Population (millions)   GNI per capita ($)
  Low ($755 or less)                   63              997                2,460                    410
  Lower middle ($756-2,995)            54            2,324                2,048                  1,130
  Upper middle ($2,996-9,265)          38            3,001                  647                  4,640
  High ($9,266 or more)                52           24,994                  903                 27,680
  World                               207           31,315                6,057                  5,170

Source: World Bank Atlas.
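For readers who want to apply the same categorization, the thresholds above translate directly into a simple classification rule. The following Python sketch is illustrative only; the function name and the sample figures fed to it are ours, not the World Bank's:

    def income_group(gni_per_capita):
        """World Bank income group for a given GNI per capita (2000 US$)."""
        if gni_per_capita <= 755:
            return "Low"
        elif gni_per_capita <= 2995:
            return "Lower middle"
        elif gni_per_capita <= 9265:
            return "Upper middle"
        return "High"

    # Hypothetical per capita figures, chosen to match the group averages above:
    for gni in (410, 1130, 4640, 27680):
        print(gni, "->", income_group(gni))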
FEATURES OF DEVELOPING COUNTRIES
All DCs share, to some degree, basic common features: incomes barely meeting basic needs, with serious financial problems in their attempts to raise the standard of living of their populations. They mostly lack foreign currency and investment funds, and have to borrow or attract foreign investment. They have to import advanced technology and capital goods, and most of them also have high birth rates. However, even these features vary from region to region and from one country to another, depending on income and national endowments. These differences will not be highlighted in this article; they are dealt with elsewhere, and developing countries will be treated as one group. The main economic and demographic characteristics of developing countries are presented in Table 1. From the Table it is clear that DCs, with around 4,450 million people (year 2000), constitute slightly more than three-quarters of the world's population, living in more than 120 countries. However, their contribution to the world's generated income is only 13%, against 84% for the 900 million people of the high-income industrialized countries, mainly OECD members. This demonstrates the huge gap (32:1) in per capita income between the affluent countries of the North and the very modest ones of the South. When these differences are looked at in purchasing power parity (a more practical way of measuring real income), the gap in per capita income between North and South is reduced to about 7:1, which is still very high. This huge gap is also very much reflected in electricity and energy utilization. This article commences by looking at the global energy scene, the value of energy to DCs in their pursuit of sustainable development, and their energy requirements. The paper then looks into energy issues in DCs: inaccessibility of electricity, environmental issues, shortage of capital and how to deal with it.

ENERGY UTILIZATION IN DCs
The intention is not to go into detailed energy statistics here; these are detailed in Table 2. It is enough, however, to mention that all the DCs together consumed 4,100 million tons of oil equivalent in 2002. They also produced 5,100 TWh of electricity, compared to a world production of 15,700 TWh in 2001, as detailed in Table 3 (refs. 4, 5).
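The 32:1 nominal income gap quoted above can be verified from the income shares and populations given in the text. A minimal Python check follows (the figures are those quoted above; the PPP-adjusted 7:1 ratio requires purchasing power data not reproduced here):

    # North-South per capita income gap, using shares of world income
    # and populations quoted in the text.
    north_share, north_pop = 0.84, 900      # high-income countries, millions of people
    south_share, south_pop = 0.13, 4450     # developing countries, millions of people
    gap = (north_share / north_pop) / (south_share / south_pop)
    print(f"Nominal per capita income gap: {gap:.0f}:1")   # ~32:1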
Table 2. Global Energy Consumption (MTOE)
[The regional breakdown of this table could not be recovered from the source. Recoverable totals: commercial energy, 9,400 MTOE; biomass and other traditional energy sources (estimated), 1,000 MTOE; total global energy use, 10,400 MTOE. MTOE = million tons of oil equivalent.]
Table 3. Global Electricity Statistics

Global Electricity Balance (2001)

               Population   Capacity   Production   Share of     kWh per   Primary energy   Elec. input   Elec. share of
               (million)      (GW)       (TWh)      world (%)    capita     (m.t.o.e.)      (m.t.o.e.)    primary (%)
  OECD            1,020       2,000       9,000         57        8,800        5,000           1,970           39
  East Europe       400         420       1,600         10        4,000        1,070             420           39
  DCs             4,580       1,160       5,100         33        1,100        3,430           1,060           31
  World           6,000       3,580      15,700        100        2,600        9,500           3,450           37

Fuels for electricity generation (world)

                 GW       TWh     % of electricity   Input (m.t.o.e.)   % of input
  Nuclear         360     2,500          16                 650              18
  Hydro           830     2,900          18                 250               7
  Thermal:      2,330    10,000          64               2,560              72
    Solids      1,100     6,100          39               1,570              44
    Oil           430     1,350           9                 290               8
    Gas           800     2,550          16                 700              20
  Other            60       300           2                 110               3
  Total         3,580    15,700         100               3,570             100

Sources of electricity generation (2001)

               Nuclear (TWh)   Hydro (TWh)   Thermal & Other (TWh)   Total (TWh)
  OECD             2,000          1,300              5,700               9,000
  East Europe        240            290              1,070               1,600
  DCs                 60          1,310              3,730               5,100
  World            2,500          2,900             10,300              15,700

Sources: IAEA: Energy, Electricity and Nuclear Power Estimates for the Period up to 2020 (July 2001). IEA: World Energy Outlook (2000).
NOTES:
1. Generation refers to gross generation.
2. Capacity refers to net capacity.
3. Fuel required for the production of electricity is estimated at 255 grams of oil equivalent per kWh for high-income countries and 270 g/kWh for East Europe and DCs.
4. Fuel equivalent of hydro generation is equal to the energy content of the electricity generated.
5. m.t.o.e. = million tons of oil equivalent.
6. Terawatt-hour (TWh) = 1,000 million kWh; gigawatt (GW) = million kW.
7. Global electricity figures are not well documented. The above is a collection of data from many sources, with some rounded approximations and estimations.
8. There may be some slight discrepancies between the energy consumption figures of Tables 2 and 3.
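Notes 3 and 4 above define how the electricity-related fuel input (in m.t.o.e.) is estimated from generation. A short sketch of that conversion follows; the 86 g o.e./kWh figure for hydro is simply the energy content of a kWh (3.6 MJ against roughly 42 MJ per kg of oil equivalent), and, since the table's figures are rounded, the result only approximates the tabulated value:

    def fuel_input_mtoe(thermal_nuclear_twh, hydro_twh, g_per_kwh):
        """Fuel input in million tons of oil equivalent (notes 3-4 above)."""
        # 1 TWh = 1e9 kWh; grams -> million tons: divide by 1e12
        thermal = thermal_nuclear_twh * 1e9 * g_per_kwh / 1e12
        hydro = hydro_twh * 1e9 * 86.0 / 1e12    # hydro counted at energy content
        return thermal + hydro

    # DCs row: (60 nuclear + 3,730 thermal & other) TWh at 270 g/kWh, plus 1,310 TWh hydro
    print(fuel_input_mtoe(60 + 3730, 1310, 270))   # ~1,136 vs ~1,060 tabulated (rounding)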
THE VALUE OF ENERGY TO DCs
The continuing inaccessibility of commercial forms of energy in DCs is no doubt delaying their development and adversely affecting their local environment and their health. In this age of information, the presence of electricity in homes is essential to keep up with the times and to keep track of the world. The fact that almost one third of the world population has no access to a dependable supply of electricity is astonishing, and is of course a cause (as well as a result) of their poverty and of the present global divide between the industrialized countries (18% of world population) and the remaining two thirds of the world population living in poverty or semi-poverty. Electricity in particular has become an important ingredient in human life, essential for modern living and business. Electricity is versatile, clean to use and easy to distribute and control. Equally important, it is now established that electricity has better productivity in many applications than most other forms of energy.
It is not easy to establish exactly how many people have no access to electricity. Estimates vary from 1.7 billion to 2 billion (ref. 6). There is probably another one billion whose supply is not reliable enough, so that they have to rely, besides the network, on back-up generation, which is terribly expensive. Reliability of the electricity supply is of paramount importance. Interruptions (even transient supply problems) can cause serious financial, economic and social losses. In any country, electricity shortages have two effects: they handicap productive activities and damage consumer welfare. From the productivity standpoint, electricity shortages discourage investors by affecting production, increasing its cost and requiring more investment for on-site electricity production or standby supplies. For small investors, the cost of operation is increased, since electricity from small private generators is more expensive than public national supplies. Electricity interruptions at home also cause consumers great inconvenience, irritation and loss of time and welfare.
What is most worrying is that the number of those without access to energy is increasing. At present 2 billion are without electricity, and this number is growing by 1.4% annually, i.e. at least 28 million people are added every year to those already suffering from lack of access. Therefore at least 30 million of those without access need to be connected annually just so that their numbers will not increase. This means connecting at least 6 million new homes annually in DCs (besides continuing the provision of services to the growing population already connected). In fact, in order to eradicate inaccessibility over the very long term, there is a need to provide at least 40-50 million people (i.e. almost 7 million new homes) with electricity every year. This is not an easy task; it is a major challenge to global sustainability, which will be dealt with later.

ENERGY AND ENVIRONMENTAL OUTLOOK IN DCs
It has to be emphasized that DCs are more concerned with day-to-day living and survival than with anything else. Their main environmental concerns are their existing local environmental problems rather than global environmental issues. Therefore issues like global warming do not rank high among their priorities. In any case, DCs are only minor contributors to carbon emissions.
Most low-income DCs are rich in coal (mostly low-quality coal), and these countries will continue to use coal as the main source of electricity production, irrespective of any other consideration. China's coal utilization in 2002 increased by almost 27%.
The health and environmental hazards of a lack of electricity and of reliance on non-commercial fuels are well known. When non-commercial energy sources are used in homes and workplaces, total suspended particulates (TSP), SO2, NO2 and CO are commonly emitted. In rural areas of many developing countries, emissions of TSP and of total volatile organic compounds (TVOCs) at twice the permissible limits have been recorded. It is mainly the absence of electricity and the reliance on low-quality and inefficient non-commercial sources of energy that cause health risks to people in developing countries, particularly children.
Most of the people in developing countries live in rural areas, but the number living in urban areas is increasing rapidly. Until recently, only 15% of the world's population lived in urban areas; according to the United Nations, half of the world's population will live in cities by the year 2007. Urbanization will, of course, facilitate the supply of services to people in low-income countries. This urbanization, however, is taking place through the creation of large informal settlements on the fringes of most urban areas. The proximity of such settlements to urban areas allows people limited access to commercial sources of energy, but not to electricity. The availability of electricity is most effective in minimizing health and environmental risks; moreover, its efficiency is much higher than that of non-commercial fuels. However, few of the low-income countries can afford the investments necessary to develop electricity generation and networks to supply the poor.

EFFICIENCY, SOCIAL AND PRODUCTIVITY IMPACTS OF ENERGY SHORTAGES IN DCs
Most of these low-income DCs depend on biomass in the form of firewood, animal dung, agricultural crops and garbage, plus paraffin and coal if available, as their energy sources for cooking and heating. These activities are carried out in a very inefficient way, and the utilization of the heat content of these primitive non-commercial fuels is therefore very low. When an open fire is used for cooking, only 5-10% of the heat content of the fuels is utilized, and the health and environmental hazards are large (ref. 7). In comparison, when gas stoves and ovens are used for cooking and heating, an efficiency of 50% is attained. Because of the high birth rate and the limited area of renewable forests in many developing countries, the amount of available firewood is gradually dwindling, which is creating even more social and environmental strains.
Non-availability of electricity is a handicap for the social, cultural and economic development of nations. Electricity in homes is essential for lighting, cooking and heating, for the preservation of food, for access to the media and for entertainment through television. Commercial activities are also severely impeded by the lack of electricity, particularly where the use of computing facilities is concerned. Provision of electricity in developing countries greatly enhances the quality of life. It raises people's expectations and motivations, thus assisting in the elimination of poverty. It also improves health standards and assists in education. Most importantly, it helps retard the migration of people from rural areas to cities and enhances the opportunities for income and employment in those areas. It also assists in the preservation of forests and trees, which are currently being cut down because of the lack of electricity and other energy sources.
Electricity is essential for sustainable development; however, little research has so far been conducted on the effect of the provision of electricity on social development. More work has been done on costing the unreliability of electricity supplies and its economic and social consequences. It has been estimated that the losses due to unreliability of electricity supplies may be as high as 4% of the gross domestic product (GDP) in the short term. For India the cost of power shortages to the industrial sector has been estimated at 1.5% of GDP, and for Pakistan at 1.8%.

THE FUTURE
Naturally, things should not be allowed to continue as they are. The present energy shortages and inaccessibility are poor DCs' major impediment to sustainable development and contribute to the global divide between the haves and the have-nots. Poverty somewhere is a threat to prosperity elsewhere. Energy provision in DCs can be dealt with on two fronts: technology and investment.

TECHNOLOGY
Modern technology assists in the provision of energy: it is yielding new means and sources that are more efficient, more flexible and, most importantly, cheaper to invest in. This paper will not go into the detailed technologies of supplying energy to DCs, but will refer to two new facilities: liquefied natural gas for cooking (and heating), and PV and similar renewables for the provision of electricity.
LNG is most suitable for providing modern commercial forms of energy to poor DCs. It can be efficiently utilized for cooking, heating homes and occasionally for transport. It is relatively cheap, highly efficient and can be handled on an individual basis, with the involvement of the private sector for bottling and distribution. LNG can be obtained locally or imported. It is most suitable for private sector investment in bottling and distribution; the investment and technologies involved are modest.
Orthodox methods for the provision of electricity supplies, i.e. the erection of a central power station with a transmission and distribution network, which are ideal for industrialized countries and urban areas, may not be the most economic means of providing electricity in developing countries, particularly in rural areas, where the electricity demand per consumer is only a small fraction of a kilowatt. Unorthodox, local or individual techniques have to be investigated in order to reach the target of ensuring electricity supply and to enhance human development. In rural areas, small diesel units could be used for many purposes, but these require a continuous supply of fuel and spare parts; maintenance of these sets is also required, but manpower that can provide it is not always available. Long interruptions due to broken diesel units are not uncommon in many countries. It would also be important to exploit mini and micro hydroelectric resources in developing countries where such resources are available.
Utilization of photovoltaic (PV) solar cells in households or by small communities is a very important recent development that warrants further attention. Facilities that utilize solar energy need no fuel and practically no maintenance, only a small storage battery. The price of such a facility (10-30 W) for small rural houses, including wiring, is approximately US$600, and costs are still decreasing. It has been claimed that 25% of the rural population without access to electricity could be economically supplied in this way. The same concepts can be applied in rural areas of most low-income developing countries in Sub-Saharan Africa, South Asia and Latin America.
Supplying electricity to the 2,000 million people who have no access to it is a major challenge facing every government and electrical utility in developing countries. Large funds are being made available by governments for the
development of the electrical system. However, local communities and non-governmental organizations should also take part in this effort. It would be costly and time-consuming to wait until the national grid reaches every village and population centre. There is a need for innovative technologies, and also for integrated resource planning at the national level, to include electricity supplies in development plans in an optimal way.
Most cities in developing countries already have an electricity supply of some sort. Therefore, the most important task in these countries is to supply electricity to rural areas. This poses mainly two types of problems: technical problems and problems connected with the capital required. On the technical side, the best and most economic strategies for providing electricity supplies in rural areas have to be found. No doubt the best technical solution would be to connect these areas to the national network (or to the system of a nearby city if there is no national network). This may, however, not be economically feasible or technically possible. Therefore, for each country, the specific circumstances and conditions, as well as the existing network configuration, have to be studied, utilizing least-cost techniques (a simple sketch of this comparison follows below). As mentioned above, innovative techniques also have to be considered.
In certain areas, connection to the national network may not be possible because of large distances and technical limitations. A promising first step would then be to develop local systems, i.e. to promote local electricity generation. The next step may be to connect these local systems to the national grid. However, the problems with generation systems (sometimes based on diesel engines) in rural areas are the difficulties of supplying fuel and spare parts and the availability of the skilled manpower needed to run and maintain the power stations. Provisions must therefore be made to solve these problems. One way of facilitating the task is to build hybrid systems with diesel engines supplemented by other local resources, such as wind energy or mini hydroelectric resources. Also, central maintenance teams for localized systems could be organized to make efficient use of local manpower.
In low-income developing countries, for example many countries in central Africa, it may not be economically possible to adopt central or local systems. In these cases, the provision of individual small supply facilities based on PV modules, as mentioned above, may be the only practical way of providing electricity, in a very modest way, to rural areas. Such modules could be installed in a village centre to provide electricity to a clinic, a water pump or a telecommunications system. Alternatively, small PV modules could be used in houses for lighting and other basic needs. Batteries with a single charging centre could also be used. Such activities could be commercialized in rural areas of developing countries, with the participation of private persons and local entrepreneurs. Groups of local entrepreneurs could also start their own local diesel generating systems in villages and sell limited amounts of electricity to the people at a profit, thus filling a gap until it becomes possible to establish a national network.
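The choice among grid extension, local diesel generation and household PV described above is, in essence, a least-cost comparison. The sketch below illustrates the logic only; every cost coefficient in it is a hypothetical placeholder (a real appraisal would use country-specific costs), with only the ~US$600 figure for a 10-30 W PV system taken from the text:

    def cheapest_option(distance_km, village_demand_kw):
        """Illustrative least-cost choice for electrifying one village."""
        # All coefficients below are hypothetical placeholders, not data from the paper.
        grid = 10_000 * distance_km + 200 * village_demand_kw    # line + connections
        diesel = 50_000 + 1_500 * village_demand_kw              # genset + running costs
        pv = 600 * (village_demand_kw * 1000 / 20)               # ~$600 per ~20 W system
        options = {"grid extension": grid, "local diesel": diesel, "household PV": pv}
        return min(options, key=options.get)

    print(cheapest_option(distance_km=80, village_demand_kw=5))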
One of the means to facilitate rural electrification is to develop economic network configurations, using local materials if possible, for instance concrete or wooden poles, and to produce network materials, particularly accessories. Standardization of rural networks, with local production of some of the materials, would greatly help reduce the cost of rural electrification. In developing countries, there are other basic needs aside from rural electrification (provision of education, public health, welfare, construction of roads).
The role of integrated resource planning is to ensure the optimum allocation of the limited capital funds and resources among the different competing uses. The circumstances and means differ in every country; for integrated resource planning at the national level, it is necessary to find the proper level of priority for electrification.
Of course, energy accessibility demands funds and investments that only a few DCs can mobilize. Assistance from the international community and development funds is essential. Without it, electrification and accessibility will become secondary to day-to-day considerations of providing (more!) essential services and dealing with poverty. International assistance has to take the form of donations: DCs cannot bear further loans and service them. The bulk of aid to these countries should take the form of outright grants, not loans, and DC governments must be careful not to fall into debt, which will only trap their development.
The amounts of money involved are not huge. Let us aim at a target of reducing the number of people without electricity by, say, 40 million per year. That means electrifying 6 million new homes in DCs annually. The cost of such electrification per household will be around $1,000 (this includes some funds for community services), i.e. an investment of around $6 billion annually. If we double this figure to allow for the provision of other forms of commercial energy (e.g. LNG for cooking), we are aiming at an aid allocation of $12 billion annually. By world standards these are modest sums. The aim of the world aid programmes is to allocate 0.7% of global GNI to development aid in DCs, which amounts to about $220 billion annually. What is actually allocated is less than half this amount; the gap between what should be and what is actually allocated is almost $120 billion. What we are asking for (improved access to energy) is only 10% of this gap.
Due to lack of access to energy, about half of the world is presently outside the information age and will remain so unless there is a genuine effort to close this global energy divide. Efforts are needed at both the national and the global level. At the national level, governments of the DCs should develop and exhibit the political will to install electricity and facilitate access to commercial forms of energy for their populations. Political will is most important in this regard; without it the results (if any) will be mediocre. Access to modern forms of energy, particularly electricity, is a prerequisite for national development. Any sincere DC government must display political will and muster resources to make this possible. Markets are not interested in energy for the poor: costs per unit of energy are high and risks are above average, so private investors and markets will not go into this type of investment. This is mainly a government responsibility. DC governments must understand this, and its value to their development, and muster the political will to make it possible.
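The aid arithmetic in the preceding paragraph can be checked in a few lines. All figures are those quoted in the text and in the World Bank table above; the halving of the 0.7% target reflects the text's statement that actual aid is under half that target:

    homes_per_year = 6e6                  # new homes electrified annually (text)
    cost_per_home = 1_000                 # US$, including some community services
    energy_aid = 2 * homes_per_year * cost_per_home    # doubled for other energy forms

    global_gni = 31_315e9                 # US$ (World Bank Atlas, 2000, table above)
    oda_target = 0.007 * global_gni       # 0.7% of GNI, ~$220 billion
    aid_gap = oda_target / 2              # actual aid is under half the target

    print(f"Energy aid sought: ${energy_aid/1e9:.0f} billion/yr")    # ~12
    print(f"0.7% GNI target:   ${oda_target/1e9:.0f} billion/yr")    # ~219
    print(f"Share of aid gap:  {energy_aid/aid_gap:.0%}")            # ~11%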
REFERENCES
1. World Bank Atlas (2002).
2. Hisham Khatib, "Electricity in Developing Countries", IEE Power Engineering Journal, August 1998.
3. Hisham Khatib, "Energy Issues in Developing Countries", World Energy Council, 1992.
4. Hisham Khatib, "Economic Evaluation of Projects in the Electricity Supply Industry", IEE Book, 2003, UK.
5. World Energy Assessment, "Energy and the Challenge of Sustainability", WEC/UNDP/UN-DESA, 2000.
6. IEA, "World Energy Outlook", Paris, 2002.
7. IAEA, Conference on "Electricity, Health and the Environment: Comparative Assessment in Support of Decision Making", Vienna, 1996.
SOME PERSPECTIVES ON THE PROSPECTS OF NUCLEAR ENERGY IN THE DEVELOPING WORLD AND ASIA
BOB VAN DER ZWAAN
Energy Research Centre of the Netherlands (ECN), Amsterdam
Belfer Center for Science and International Affairs (BCSIA), Harvard University, Boston, USA

ABSTRACT
The current use of nuclear energy cannot be called sustainable in terms of achieving sustainable development, and of establishing sustainable energy systems in particular. Nuclear energy may, however, remain a necessary component for the moment. Moreover, given its dynamic nature, nuclear power could be made into a sustainable energy option in the longer term. Were that to happen, developing countries could opt for its inclusion in their national energy policies. In thinking about energy generation and its effective use in the developing world, however, one must be careful not to blindly copy existing models from industrialised countries. In particular, the high costs involved in the development of nuclear power should be given careful consideration when designing long-term sustainable and affordable energy infrastructures in developing countries that aspire to a path of steady economic growth and higher levels of social welfare. If these countries decide to develop nuclear energy, it is argued, they should not choose a nuclear fuel cycle based on reprocessing, but adopt a once-through fuel cycle instead.
1. INTRODUCTION
Nuclear energy remains a controversial issue for public policies on energy and the environment because of arguments concerning radioactive waste, reactor accidents, nuclear proliferation and economic competitiveness. The issues of climate change and supply security have provided a new rationale for its reappearance on the international political agenda. Recent national policy directions in some countries show that such a potential comeback of nuclear energy is not just wishful thinking on the part of the nuclear establishment. Because, anno 2003, nuclear energy faces stagnation, it is unrealistic to consider it a serious option today for reducing carbon emissions. On the other hand, it would be a mistake to exclude any potential option for reducing such emissions, nuclear power included, at this time. Whether or not nuclear energy will play a significant role in the long-term future, all energy technologies - including nuclear ones - ought to be considered in terms of their potential to contribute to the goals of sustainable development, including issues related to environmental, economic and social risks, and to climate change prevention and supply security in particular.
It is widely recognised that, in addition to other factors, the availability and use of electricity can improve the standard of living in developing countries, and may in fact be an indispensable driver behind economic and social progress. Economic growth towards higher levels of wealth in these countries is intimately linked to an increase in per capita production and utilisation of energy. However, in attempting to establish effective, efficient, affordable and sustainable energy production and consumption in the developing world, one must be careful not to automatically transpose to developing countries the energy technologies, infrastructures and modi operandi currently employed in developed countries. The resource,
technological, cultural, legal and societal differences between developing and developed countries should be carefully considered and integrated in the design of energy systems in the developing world. The solution to the energy problems of the latter, considered by many to be one of the world's most prominent emergencies, lies in an intelligent, and probably decentralised, approach that takes into account local and regional needs, capabilities and customs.
Given the above, the uppermost question is what role nuclear energy will play in addressing the energy challenges of the 21st century in the developing world. This article briefly reviews some of the main issues concerning the prospects for nuclear energy in developing countries and, in particular, presents some perspectives on nuclear development in Asia. In countries such as China and India, the expertise required for nuclear expansion is available and government planning is decisive in efforts to enlarge the deployment of nuclear power capacity. It is discussed whether the contribution of nuclear energy to domestic power generation ought to be increased in these countries, and how the share of national nuclear power production is likely to develop. Some recommendations are given as to how potential nuclear development could best be pursued. Below, section 2 puts nuclear energy into a perspective of sustainability. Section 3 describes some elements relevant to the economics of nuclear energy, the issue of reprocessing in particular. Section 4 analyses the potential role of nuclear power in the developing world, and goes into slightly more detail regarding two cases: China and India. Section 5 concludes.

2. NUCLEAR ENERGY AND SUSTAINABILITY
Only recently has nuclear energy been subjected to detailed studies in terms of its potential contribution to establishing sustainable development (see, for example, NEA, 2000 and Rogner, 2001). Most analysts confirm that nuclear energy does not, at present, meet some essential requirements for constituting a sustainable energy resource (e.g. Bruggink and van der Zwaan, 2002), and that, in particular, the current use of LWR technology cannot be qualified as sustainable (Rothwell and van der Zwaan, 2003). Arguments concerning radioactive waste, reactor accidents, nuclear proliferation and terrorism, and economic competitiveness all play a role in the discussion of the sustainability of nuclear energy. It has likewise been pointed out, however, that it is hard to claim that any of the present so-called "renewables" meet all the criteria of a sustainable energy resource (Bruggink and van der Zwaan, 2002). One of the major reasons is that renewables have, so far, not been applied on a large (global) scale, so that the risks involved in their usage cannot yet be fully apparent. Fundamental issues determining the (un)sustainability of renewables relate to land usage, materials use, waste production and environmental impact.
Today, although it is not a sustainable energy resource, nuclear energy - along with other presently available energy options - could play a transitional role towards establishing sustainable energy systems. Although changes in energy infrastructures, nuclear ones in particular, generally occur relatively slowly, nuclear energy should still be viewed in a dynamic way. During a transitional phase with some role for nuclear power, some of the more problematic aspects of nuclear energy might be rendered significantly more sustainable.
Recent technological developments in the nuclear field have been considerable, and are likely to continue, e.g. with respect to increasing reactor safety or building more proliferation-resistant reactors. This could give nuclear energy a potential role beyond the aforementioned transition period. To some extent, depending on perspectives of both time and
location, nuclear energy could therefore contribute to establishing paths towards sustainable energy systems and thereby to achieving sustainable development.
An important reason for developing a domestic nuclear energy capacity in the past was its potential to greatly enhance national energy independence, mainly since nuclear fuel (uranium) is widely available, inexpensive to acquire and easy to store. Arguments of energy supply security will continue to motivate countries, including those in the developing world with currently modest or absent shares of nuclear energy in electricity production, to develop domestic nuclear power facilities. Since climate change mitigation has been recognised as one of the largest present global challenges, nuclear energy has received renewed consideration. Whereas even a massive expansion of nuclear energy worldwide would not be a panacea for the problem of global warming, its potential share in controlling atmospheric temperature increases could be significant (Sailor et al., 2000, and van der Zwaan, 2002). Given the size of the global change problem, nuclear energy might indeed deserve increased attention. Nuclear power might need to be expanded on a global scale, or at least should not be left out of the current energy mix, and developing countries could in principle play a role in a continued, perhaps intensified, employment of nuclear fission.

3. NUCLEAR ECONOMICS
Even more so for developing countries than for countries in the industrialised world, the costs and capital intensity of electricity generation alternatives are essential determinants of energy policy decisions. Whereas nuclear energy has proved capable of competing with other (fossil) alternatives, it has never done so in a convincing way. In the current context of liberalising electricity markets, the capital intensity of nuclear power constitutes an increasing economic disadvantage. The intrinsic uncertainties and liabilities of nuclear power generation (related to e.g. radioactive waste and reactor safety) also render nuclear energy economically unattractive. On the other hand, once nuclear power plants are fully depreciated - typically after some 30 years of operation - their low fuel costs imply that reactors become competitive on a marginal cost basis, even in a deregulated environment. Another aspect in favour of nuclear energy is that its negative environmental externalities have been more extensively included in electricity costs than is the case for its fossil-based counterparts. Like most renewable energy systems, and unlike fossil fuel energy systems, the external costs of nuclear energy are small (Rabl, 2001). The proper internalisation of negative externalities for all energy resources would reinforce the competitiveness of nuclear energy.
Since the earliest days of the nuclear era, an unfavourable aspect of nuclear power has been the need to prevent nuclear (military) proliferation, and the costs related thereto, e.g. those required for maintaining international institutions such as the IAEA, designed to guarantee the exclusively civil use of nuclear energy. The corresponding costs are not accounted for in electricity prices or external costs. Since the 9/11 attacks on New York and Washington DC, public and political fear has been expressed regarding the use by terrorists of nuclear fission or radiological devices. Among potential radiological threats are those involving material or facilities related to the civil nuclear power industry.
Terrorist risks regarding nuclear power plants and spent-fuel cooling ponds may be considered especially high (Alvarez et al., 2003, and van der Zwaan, 2003), and the costs of enhancing security against terrorist attacks should be taken into account in - and are unfavourable for - the economics of nuclear energy.
If a country decides to develop a civil nuclear power programme, it needs to make a careful cost-benefit analysis of the various options available, especially with regard to which nuclear fuel cycle to adopt, that is, an open or a closed one. There is general agreement that, with today's low uranium and enrichment prices, the reprocessing and recycling option (which closes the nuclear fuel cycle) is more expensive than the alternative of direct disposal of spent fuel (implying an open fuel cycle). Arguments exist, however, over the magnitude of the difference, and over how long this difference is likely to hold. Advocates of reprocessing often argue that the extra cost of reprocessing is small today, and might soon disappear as uranium supplies become scarce and their price rises. In some of the most recent studies, by contrast, it is demonstrated that the margin between the cost of the closed fuel cycle and that of the direct disposal option is wide, and is likely to persist for many decades to come, if not longer (Bunn et al., 2003). For example, with central estimates for key fuel cycle parameters, reprocessing and recycling plutonium in existing LWRs will be more expensive than direct disposal of spent fuel until the uranium price reaches over $360/kgU. This price is not likely to be seen for many decades, as current uranium prices are about an order of magnitude smaller (typically some $40/kgU). With the reprocessing and recycling of plutonium, electricity costs would be increased by some 1.3 mills/kWh, compared to a total back-end cost for direct disposal of about 1.5 mills/kWh. With central estimates for key fuel cycle parameters, reprocessing and recycling plutonium in FBRs (involving an additional capital cost, compared to new LWRs, of $200/kWe) will not be economically competitive with a once-through cycle in LWRs until the price of uranium reaches some $340/kgU. Electricity from a plutonium-recycling FBR would cost over 7 mills/kWh more than electricity produced with a once-through LWR. The economics of reprocessing are an increasingly important issue, since some countries, in both the industrialised and the developing world, face major decisions about the future management of their spent fuel. Especially for developing countries like China and India, the high costs involved in reprocessing and recycling must be given careful consideration; it is too soon to take important decisions regarding the possible construction of large commercial reprocessing facilities in these countries.
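The breakeven logic described above can be made concrete with a toy comparison. The round numbers are those quoted from Bunn et al. (2003); the linear scaling of the cost premium with the uranium price is our own illustrative simplification, not the study's model:

    URANIUM_PRICE_NOW = 40.0     # $/kgU, order of magnitude quoted above
    BREAKEVEN_PRICE = 360.0      # $/kgU, LWR recycle vs. direct disposal (text)

    def reprocessing_premium(uranium_price):
        """Extra cost of reprocessing vs. direct disposal, in mills/kWh.
        Positive values mean reprocessing is more expensive; assumes the
        ~1.3 mills/kWh premium shrinks linearly to zero at breakeven."""
        scale = (BREAKEVEN_PRICE - uranium_price) / (BREAKEVEN_PRICE - URANIUM_PRICE_NOW)
        return 1.3 * scale

    for price in (40, 100, 200, 360):
        print(f"U at ${price}/kgU -> premium {reprocessing_premium(price):+.2f} mills/kWh")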
4. NUCLEAR POWER IN THE DEVELOPING WORLD
Today, only eight developing countries (excluding those with economies in transition in Central and Eastern Europe and the former Soviet Union) possess nuclear power or are in the process of building one or more nuclear reactors: Argentina, Brazil, China (including Taiwan), India, Iran, North Korea, Pakistan and South Africa (IAEA, 2002). Of these, five are in Asia (Iran is counted among the developing Asian countries). Almost the entire current expansion of nuclear power capacity in developing countries is taking place in Asia: Argentina is the only non-Asian developing country with a reactor currently under construction, and Pakistan is the only Asian developing country with no power plant under construction at present (the other four Asian developing countries with nuclear aspirations have at least one unit under construction). Even from a global perspective, nearly all nuclear power expansion is currently taking place in Asia. The two countries most actively pursuing an expansion of their nuclear power capacity today, i.e. with the largest numbers of reactors under construction worldwide, are China, with 6 units under construction totalling a net capacity of almost 5 GWe, and India, with 8 units under construction totalling a net capacity of more than 2.5 GWe (IAEA, 2002). After having tested a range of different reactor types, largely of Canadian, French, Russian and domestic design, China is now moving towards standardisation and self-reliance in design, manufacturing, construction and operation. India has for some time occupied a leading place among Asian nations in the indigenous design, development, construction and operation of nuclear power reactors. Because of their leading roles in the nuclear field in developing Asia, their ambitious nuclear expansion plans, and their future potential to become major exporters of nuclear technologies, the decisions of China and India regarding nuclear energy will be decisive for its development in Asia and may have a sizeable effect on its evolution worldwide.

4.1 China
Today, China has five large nuclear reactors in operation, representing a net capacity of close to 4 GWe. By the second half of this decade, the six reactors now under construction should all be in operation too, so that the total available net nuclear power capacity will then amount to some 8.5 GWe. China's official plans for the further expansion of its nuclear capacity are ambitious, reaching as high as an installed 20 GWe by 2010 and 40 GWe by 2020. It is likely, however, that these official goals will not be met. Still, China may well have installed a nuclear power capacity of about 20 GWe by the year 2020. Nuclear energy's contribution to electricity generation in China is at present a little over 1%, and will most likely remain below about 3% until at least the year 2020.
In order to realise and support the long-term expansion of its nuclear power programme, China plans to reprocess the spent nuclear fuel it produces, and to recycle the resulting plutonium in MOX fuel for both LWRs and FBRs. China already possesses a small, operational, civilian pilot reprocessing plant with a capacity of 50 tons of spent fuel per year, and has started the construction of an experimental fast reactor with a capacity of 25 MWe. Decisions are pending as to whether or not to build a large commercial reprocessing plant with an annual capacity of 800 tons of spent fuel, as well as a 300 MWe breeder reactor. A major reason that China wants to operate a closed nuclear fuel cycle, rather than an open cycle in which spent fuel, once discharged from the reactor, is considered as waste and stored as such, is energy security. Indeed, under its present nuclear programme, based on a once-through fuel cycle, the currently proven domestic uranium reserves would probably be used up within a few decades (Zhang, 2001).

4.2 India
Unlike China, India possesses at present a rather large number of (small) reactors. Today, it has 14 nuclear reactors in operation (with a cumulative capacity lower than that of the 5 reactors in China), representing a net capacity of around 2.5 GWe. By the second half of this decade, the 8 reactors now under construction should all be in operation, so that the total available net nuclear power capacity will then amount to over 5 GWe. The Indian central government's official plans for further expansion of its national nuclear power capacity are ambitious. It is realistic to assume that India will have installed a nuclear capacity of some 15 GWe by the year 2020. The current contribution of nuclear energy to electricity generation in India is close to 4%.
With the domestic nuclear capacity increase as projected, this contribution could amount to some 8% by the year 2020.
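The capacity-to-share projections above follow from straightforward arithmetic. In the sketch below, the capacity factor and the projected national generation totals are our illustrative assumptions, not figures from this paper; they are chosen only so that the outputs land near the 3% and 8% shares quoted:

    HOURS_PER_YEAR = 8760

    def nuclear_share(capacity_gwe, capacity_factor, national_generation_twh):
        """Share of national generation supplied by a given nuclear capacity."""
        nuclear_twh = capacity_gwe * capacity_factor * HOURS_PER_YEAR / 1000
        return nuclear_twh / national_generation_twh

    # Hypothetical 2020 cases, loosely matching the text's projections:
    print(f"China: {nuclear_share(20, 0.8, 4500):.1%}")   # ~3% of generation
    print(f"India: {nuclear_share(15, 0.8, 1300):.1%}")   # ~8% of generation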
Like China, India has chosen the closed fuel cycle for realising the long-term expansion of its nuclear power programme, and thus plans to reprocess the spent fuel generated by its nuclear power plants. The Indian nuclear power programme is based on a three-stage plan, eventually to make use of India's abundant domestic resources of thorium through the use of FBRs. In this plan, the first stage involves the construction of mainly PHWRs for electricity generation, with the production of plutonium as a by-product. In the second stage, FBRs are built, fuelled by this plutonium and depleted uranium, to produce uranium-233 in their thorium-loaded blankets. In the third stage, FBRs are fuelled with thorium and the uranium-233 initially produced in the second stage. A small 14 MWt test FBR has been successfully operated for over a decade now, while a detailed design of a 500 MWe prototype FBR has been completed and a construction site approved. If current construction plans are realised, India's first large FBR could be commissioned by the end of this decade. As for China, a major reason for India to choose the closed nuclear fuel cycle is energy security. Indeed, under India's first-stage (largely PHWR) nuclear power programme, the domestic uranium reserves are expected to represent some 400 GWe-yrs worth of electricity, which would be consumed within a few decades under current nuclear power expansion plans (Gopalakrishnan, 2002).

4.3 Is reprocessing the right way to proceed for Asia?
A recent MIT study concludes that, over at least the next 50 years, the best choice to meet nuclear energy's challenges is the open, once-through fuel cycle (MIT, 2003). It judges that adequate uranium resources are available at reasonable cost to support this choice under a global growth scenario for nuclear power. China and India have arrived at a critical point and must soon decide whether or not to develop large-scale reprocessing and breeder programmes. For both countries, energy security is the main argument for not opting for the once-through nuclear fuel cycle, which obviates the need to reprocess and recycle spent nuclear fuel. Below, it is argued that energy security in China and India constitutes insufficient grounds for the elevated costs that the establishment of a closed nuclear fuel cycle would entail. This statement can be backed by at least six arguments.
First, advocates of a large nuclear energy programme, including a plutonium economy, claim that this would reduce national dependence on foreign energy resources such as oil. This is partly true, of course, since through nuclear energy the dependency on oil for electricity production decreases. It is true only to a limited extent, however, since nuclear energy and oil are today largely complementary energy resources rather than substitutes for each other. Oil's largest application is currently in the transport field, for which nuclear energy, e.g. through the production of hydrogen, is still mostly unsuitable.
Second, in the two scenarios depicted above for China and India, respectively 3% and 8% are possible (and probable) electricity shares for nuclear energy in 2020 (much higher shares by that time are deemed unlikely). These shares correspond to a share in national energy demand (rather than electricity) of a few percent at most.
In view of the larger scheme of energy security, these numbers are too small to make any significant difference in the energy dependence of either of these countries, and thus it matters little how the projected nuclear energy is produced, through a reprocessing cycle or not. Third, if China’s and India’s energy infrastructures become more dependent on nuclear energy beyond the forthcoming couple of decades, while domestic fissile resources are rapidly used up, energy security could become an issue of concern.
However, with the existence of large global reserves of uranium, the possibility of enlarging these by factors when exploration at higher costs is allowed, the likely presence of large, as yet undiscovered uranium resources, and the vast amounts of uranium available in the oceans, presumably recoverable at competitive prices, a depletion of global uranium reserves is unlikely for a long time to come. With well-established commercial world markets for uranium, the supply of nuclear fuel does not seem easily endangered for a long time to come.
Fourth, several countries, among them South Korea, have shown that a large nuclear power programme can be realised, operated and maintained without possessing large domestic uranium resources, extensive local enrichment facilities or national nuclear fuel production. The reason is that the existing global market for uranium products is hardly subject to fluctuations in supply or price, and has, so far, not displayed the volatility that industries relying on the supply of petroleum experience in global oil markets.
Fifth, uranium suppliers in the world are diverse, both geographically and politically, and are unlikely - quite contrary to common practice in the global oil market - to collude to raise prices dramatically or limit supplies substantially. In this respect, too, the differences are large in comparison with the behaviour of global oil markets, where even private consumers, dependent on petrol for transportation, regularly experience the fluctuations occurring in crude oil prices.
Sixth, even if domestic uranium supply security were to become a matter of concern at some point in the future, a strategic reserve of uranium fuel could easily be established - surely more easily than in the case of strategic oil reserves - since uranium is inexpensive to buy, simple to handle and easy to store.
Furthermore, the separation of plutonium increases the risk of theft by states or non-state or sub-national entities wishing to acquire nuclear weapons, as well as by terrorist groups attempting to develop nuclear fission devices. This implies increased costs and an enhanced burden of safeguards and physical protection. The policies of China and India regarding reprocessing could strongly influence the attitudes of the international community, and the posture of powerful nuclear and economic actors in particular. Indeed, the civil use of plutonium in these countries could serve as an encouragement or excuse for its use by other nations, especially those with an interest in using plutonium for military purposes. If China and India were to decide not to develop civilian reprocessing, a good example would be set for other countries in the region that contemplate the reprocessing and recycling of plutonium.
5. CONCLUSIONS
One of the main challenges facing mankind during the 21st century will be to ensure adequate, affordable and reliable energy services in a sustainable manner. Energy is critical for social and economic development. With energy use in many developing economies in Asia still at a low level, with about 60% of the world's 2 billion people without access to modern energy services living in this part of the world, and with the global increase in energy demand that accompanies expected economic growth taking place largely in this region, it is important that the right (sustainable) energy choices are made (Saha, 2003). In developing countries the supply of energy should not become a constraint on economic growth. Security of that supply does not seem to be a major issue for at least decades to come, if not longer, either in developing countries or elsewhere in the world. Rather, the question
is whether we can afford the current patterns of energy production and consumption to continue rapidly deteriorating the health of our common environment. Changing the current unsustainable patterns of energy use is one of the main challenges for both developed and developing countries. In attempting to establish sustainable energy systems worldwide, it is important not to simply impose the model in use in developed nations on countries in the developing world, given the latter's distinct past societal and economic evolution and different current social and cultural characteristics.
Nuclear energy is an option that probably should (and will) be pursued by China and India, as well as other Asian countries, and perhaps by the developing world at large. Nuclear energy can complement other options on the path to clean and affordable energy production and consumption. It is important, however, that the right choices are made in the development of nuclear energy, if it is decided to include this option in national energy programmes. Based on the arguments made above, China and India, or other developing countries for that matter, do not need to pursue a reprocessing programme in the foreseeable future. It is much cheaper to choose the direct disposal of spent fuel than to opt for the reprocessing and recycling alternative. Substantial savings can be realised when the direct disposal option is chosen, and the resulting financial means, especially badly needed in developing countries, can be usefully employed in other domains, for example in the development of other clean energy options. The decision whether eventually to adopt a nuclear reprocessing economy should therefore be postponed, for at least the next couple of decades, but probably longer. In the meantime, and while new and more advanced nuclear technologies may emerge, the interim storage of spent nuclear fuel from a once-through cycle can be effectively and safely employed. Since the expert discussion about reprocessing is likely to continue for years, countries like China and India could, for the time being, better employ the once-through nuclear cycle, until the reprocessing question has been resolved and more technological clarity has been achieved as to which nuclear technology should be employed in the future.

REFERENCES
1. Alvarez, R., J. Beyea, K. Janberg, J. Kang, E. Lyman, A. Macfarlane, G. Thompson and F. von Hippel, 2003, Science and Global Security, 11, 1-51.
2. Bruggink, J.J.C. and B.C.C. van der Zwaan, 2002, "The role of nuclear energy in establishing sustainable energy paths", International Journal of Global Energy Issues, 18, 2/3/4.
3. Bunn, M., S. Fetter, J.P. Holdren and B.C.C. van der Zwaan, 2003, The Economics of Reprocessing vs. Direct Disposal of Spent Nuclear Fuel, Project on Managing the Atom, BCSIA, John F. Kennedy School of Government, Harvard University, forthcoming, Fall 2003.
4. Gopalakrishnan, A., 2002, "Evolution of the Indian nuclear power program", Annual Review of Energy and the Environment, 27, 369-95.
5. IAEA, 2002, "Nuclear power status around the world", IAEA Bulletin, 44, 2 (December 2002), Vienna, Austria.
6. MIT, 2003, The Future of Nuclear Power - an Interdisciplinary MIT Study, MIT, Cambridge MA, USA.
7. NEA, 2000, Nuclear Energy in a Sustainable Development Perspective, Nuclear Energy Agency, OECD, Paris.
8. Rabl, A., 2001, "The importance of external costs for the competitiveness of renewable energies", International Journal of Global Energy Issues, 15, 1/2.
9. Rogner, H.H., 2001, "Nuclear power and sustainable energy development", Journal of Energy and Development, 26, 2, 235-258.
10. Rothwell, G. and B.C.C. van der Zwaan, 2003, "Are light water reactor systems sustainable?", Journal of Energy and Development, forthcoming.
11. Saha, P.C., 2003, "Sustainable energy development: a challenge for Asia and the Pacific region in the 21st century", Energy Policy, 31, 1051-1059.
12. Sailor, W.C., D. Bodansky, C. Braun, S. Fetter and B.C.C. van der Zwaan, 2000, "A Nuclear Solution to Climate Change?", Science, 288, 19 May 2000, 1177-1178.
13. van der Zwaan, B.C.C., 2002, "Nuclear Energy: Tenfold Expansion or Phaseout?", Technological Forecasting and Social Change, 69, 287-307.
14. van der Zwaan, B.C.C., 2003, "Nuclear Materials and the Threat of Radiological Weapons: The Vulnerability to Terrorist Attacks of Nuclear Power Plants and Spent Fuel Cooling Ponds", paper presented at the 53rd Pugwash Conference, Halifax, Nova Scotia, Canada, 17-21 July 2003.
15. Zhang, H., 2001, "Economic aspects of civilian reprocessing in China", Proceedings of the 42nd Annual Meeting of the Institute for Nuclear Materials Management, Indian Wells (July 15-19, 2001).
APPLICATIONS OF BIOTECHNOLOGY TO MITIGATION OF GREENHOUSE WARMING: Report on the St. Michaels Workshop, April 13-15, 2003
NORMAN J. ROSENBERG AND R. CESAR IZAURRALDE Joint Global Change Research Institute, Pacific Northwest National Laboratory and the University of Maryland, College Park, USA
F. BLAINE METTING
Pacific Northwest National Laboratory, Richland, USA

As a result of the combustion of fossil fuels and land use change, atmospheric greenhouse gas (GHG) concentrations are increasing: CO2 has increased from about 280 vppm (parts per million by volume) around 1880 to over 370 vppm in 2002 (IPCC, 2001). Nitrous oxide rose from 270 vppb (parts per billion by volume) in the 19th century to 314 vppb in 1998. Similarly, over the same period CH4 concentrations increased from 750 to 1745 vppb (IPCC, 2001). Scientists largely agree that climatic change will occur if GHG emissions continue at current rates, and even more so at the rates implied by the population growth and increasing affluence predicted for this century. There is considerable evidence (summarized by Parmesan and Yohe, 2003) that the signal of such change is already being detected in climatological and biological phenomena.

Carbon dioxide was the GHG of primary concern at the second St. Michaels workshop, devoted to the study of Applications of Biotechnology to Mitigation of Greenhouse Warming (papers presented, authors and commentators are listed in Appendix I). On average, 23 billion tonnes* of that gas (containing about 6.3 billion tonnes of carbon) were emitted annually to the atmosphere during the last decade. The primary source is combustion of fossil fuels - essentially the oxidation of carbon stored in coal, petroleum and natural gas that has accumulated in geologic depositories over eons. Land use change, primarily deforestation in the tropics, is a secondary source and contributes between 0.6 and 1.0 billion tonnes of C per year (Houghton and Hackler, 2002). However, the amount of C accumulating in the atmosphere is 3.2 billion tonnes per annum, and the net C flux from the atmosphere to the ocean is about 1.7 billion tonnes per annum. By difference, it is estimated that between 1.3 and 3.1 billion tonnes of C are captured annually in the terrestrial biosphere. This residual sink has been attributed mostly to the regrowth of northern hemisphere forests and to an overall global increase in photosynthesis stimulated by the so-called 'CO2-fertilization effect.'

Carbon emissions to the atmosphere are projected to increase even more in this century. This trend can be altered in a number of ways: (1) the demand for fossil fuels can be reduced through improvements in energy end-use efficiency and by increased use of non-carbon-emitting wind, solar, or nuclear energy systems; (2) carbon can be captured at the smokestack or tailpipe and sequestered in geologic strata or the ocean; (3) carbon can be withdrawn from the atmosphere by photosynthesis and sequestered in standing biomass, in wood products, and in soils; or (4) carbon can be fixed in crops grown for biomass subsequently used to produce fuels, plastics and other products.
* One billion tonnes = 1 petagram (Pg) = 1 gigaton (Gt).
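The carbon budget quoted above can be restated as a simple mass balance. The following is a sketch in our own notation (the symbols are not from the workshop papers), using the mid-range land-use value:

```latex
S_{\text{land}} = (E_{\text{fossil}} + E_{\text{land use}}) - \Delta C_{\text{atm}} - F_{\text{ocean}}
        \approx (6.3 + 0.8) - 3.2 - 1.7 \approx 2.2~\text{Gt C yr}^{-1}
```

which lies within the reported range of 1.3 to 3.1 Gt C per year for the residual terrestrial sink once the uncertainty in each term is carried through.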
When fuels derived from biomass are burned, the carbon they contain is recycled to the atmosphere with no net increase in the atmosphere's carbon burden.

'St. Michaels II' was devoted to the analysis of the opportunities for biotechnology to contribute to GHG mitigation. What do we need to know to make progress on these options? Can biotechnology contribute to mitigation of climate change? What environmental and societal issues might be associated with its use? Can the yields and quality of biomass crops be enhanced by genetic engineering? If biomass is to be produced in quantities that make a difference, large areas of land must be dedicated to this end. Competition for land with traditional crops, forest and range will inevitably result. Can yields of traditional agricultural crops, forests and range be increased sufficiently by technological advances, including genetic engineering, to offset the loss of land to biomass production?

It may be an exaggeration to suggest that a bale of switchgrass has about the same energy content as a lump of coal. But the point is made that great volumes of biomass will have to be transported to power plants or refineries, and that transportation costs may limit its use as a substitute for fossil fuels. Might it be possible to concentrate the energy content of biomass on or near the farm so that a much smaller volume of product need be shipped to the power plant or refinery?

Biomass can be used alone as a boiler fuel or it can be co-fired with other fuels, which is today economically preferable. Biomass can also be used as a feedstock for the production of ethanol, which can be used, as it is today, as an additive to gasoline, or as a total substitute for gasoline. Ethanol is already produced from starch extracted from grain - in the U.S. mostly from corn. But advances in enzymatic processes now make it possible to produce ethanol from the cellulosic fractions of biomass, such as straw from small grains and corn. Microbial and enzyme biotechnology also holds promise for production of H2 and methanol from biomass. Will it be possible to produce enough liquid fuel from biomass to significantly reduce the global demand for fossil fuels? How will biotechnology contribute to improving or altering the conversion process? When all energy inputs required in the production of power, products and fuels from lignocellulosic materials are accounted for, will the product represent a net loss or gain in energy? And will ethanol for transportation be economically competitive? These are among the questions posed to the authors, commentators and participants at the workshop regarding biorefineries of the future.

An earlier workshop, held in St. Michaels, MD in December of 1998, was devoted to the exploration of opportunities for carbon sequestration in soils. Among the possibilities raised by Metting et al. (1999) was the application of biotechnology to facilitate carbon sequestration in soils or to develop new microorganisms for that purpose. A deeper examination of this notion seemed appropriate for the purposes of St. Michaels II. The subject was expanded to consider, as well, how plants might be modified to deposit organic materials more readily converted to long-lasting forms of soil organic matter. The paper on microbial biotechnology described a suite of possibilities ranging from direct energy production to carbon capture and enhanced efficiency across industrial sectors.
These include applications that could have a large impact over the coming decades, including microbial H2 production and the fixation of CO2 and nitrogen. Also discussed were enhanced-efficiency applications, ranging from fossil fuel and industrial
biotechnology to waste treatment, that could collectively impact GHG abatement. Questions centered on the relative potential of these various approaches, while making the point that they will be pervasive across most sectors of the industrial, agricultural and biomass economies. The final paper presented at the workshop addressed three issues: the implications for developing countries of biotechnology per se and of its applications to mitigation of greenhouse warming; public perceptions of biotechnology; and ethical considerations related to the use and environmental consequences of biotechnology. Proceedings of the workshop will be published by Battelle Press in the fall of this year. What follows here is a preliminary set of key findings and insights drawn from the white papers, commentaries and discussions held at the St. Michaels II Workshop. These are organized according to the main topics of the workshop.
KEY WORKSHOP FINDINGS:

Biomass

Economic modeling shows a potentially important role for biomass to substitute in part for the fossil fuels responsible for most of the emission of CO2 to the atmosphere. Biomass can be used as a boiler fuel and/or as a substrate for the production of liquid transportation fuels and plastics.

The genomes of the woody and herbaceous species Populus and Panicum virgatum (poplar and switchgrass) are being sequenced with the objective of accelerating their domestication to impart traits that can increase yields and alter their quality for use as boiler fuels or as substrates for conversion to liquid fuels such as ethanol. The ideal poplar would be relatively short with a large stem diameter and a crown geometry allowing efficient use of space in the planted forest. The tree would be sterile, conserving energy for vegetative growth and eliminating the threat of gene flow to native stands. The ideal switchgrass would produce high dry matter yields from large numbers of tall, thick tillers. The plant would be sterile in order to increase allocation of energy to cell wall production and to prevent spread of modified pollen or seed to native populations. In both poplar and switchgrass, cell wall composition might be altered: higher polysaccharides and lower lignin for biochemical conversion to liquid fuels, and the opposite for biomass used in direct combustion or soil carbon sequestration.

Candidate genes controlling the traits described above have been mapped in Arabidopsis, tobacco, maize, tomato, rice, wheat and pea. These genes impart such traits as control of lateral branching, apical dominance, height growth, dormancy, stem thickness, branching angle and leaf shape.

Prospects for woody and herbaceous biomass will depend upon (1) a reliable supply of product at competitive prices, (2) proof that the environmental consequences of biomass monocultures are not detrimental, and (3) increased productivity of food, feed and fiber crops to make up for production lost from land diverted to biomass. There appear to be no insurmountable biological obstacles to increasing yields of biomass crops - by genetic engineering, traditional plant breeding and improved management practices - if public sector research remains committed and/or if
private sector research is engaged by evidence that a long-term, large-scale market will develop for biomass-based energy. Major contributions to increased biomass crop productivity will follow in the near term from changes in plant morphology or 'architecture'. Genetic modification (GM) techniques for establishing herbicide, insect and disease resistance in agricultural crops can be readily applied to biomass crops as well. In the longer term, it may also be possible, by GM and other breeding techniques, to enhance photosynthetic efficiency in biomass crops.

Environmental consequences of genetically engineered biomass crops will likely be similar to those of agricultural crops. Herbicide tolerance (HT) and Bacillus thuringiensis (Bt) traits reduce pesticide use in agricultural crops and thus reduce the costs of production and the exposure of humans and wildlife to toxic chemicals. Weeds that survive herbicide applications in HT fields could convey their resistance to subsequent generations of 'super-weeds'. Target insects exposed to Bt-engineered crops could develop resistance and convey it to future generations. Biodiversity could be lessened as populations of both targeted and untargeted insects decrease and others increase in numbers. Biodiversity is (by definition) diminished by monoculture of any kind, and will be no less so in biomass than in traditional fields. Transgenes from GM plants might be transmitted to 'by-stander' plants and incorporated into their genomes. The most serious risk - exportation of transgenes in the pollen or seeds of genetically engineered biomass cultivars to related species or non-GM cultivars - would be alleviated were there assurance that Populus, Panicum or other likely biomass crops can be made totally infertile and/or can be bred so that they are unable to survive outside of their appointed environments. The former mechanism seems more likely to be achieved than the latter.

A 'carbon tax' on fossil fuels or direct subsidies to producers may be required to make biomass competitive in the energy market.
Soils

Soils play a fundamental role in the biogeochemical cycling of carbon and nitrogen, as well as in the production or consumption of greenhouse gases such as CO2, N2O and CH4. At 60 Pg C per year, soil respiration releases roughly 10 times more C to the atmosphere annually than fossil fuel combustion does (consistent with the fossil emission figure of about 6.3 Pg C per year cited earlier in this report). Soils can be managed to behave as sources of or sinks for atmospheric CO2, through the conversion of forests and grasslands to agriculture or through the application of improved agricultural practices leading to soil carbon sequestration.

Soil carbon dynamics are determined by biological, chemical and physical controls such as net primary productivity, soil microbial activity, substrate quality, and soil structure. Rhizosphere processes affect soil carbon sequestration: rhizodeposition stimulates fungal and bacterial growth, while mycorrhizae contribute to carbon translocation throughout the soil matrix.

Genetically modified biomass has the potential to alter biogeochemical cycles. It may be possible to manipulate the microbial community to affect carbon and
nutrient cycles. But GM crop residues could also have negative effects on bacterial communities. Soil carbon sequestration could be promoted by manipulation of the soil microbial biomass, either through biostimulation or bioaugmentation. But there are concerns regarding the introduction of GM microorganisms into soil, the importance of microbial diversity, and the potential for gene exchange from GM organisms to other members of the soil community. At least for the next few decades, indirect manipulation of the soil microbial community by the use of GM crops appears to offer greater opportunity for enhancing soil C sequestration than does the introduction to soil of genetically engineered microbes.

Biorefineries

The vision is the development of a technical and commercial infrastructure analogous to the current oil refinery. In the biorefinery, renewable biomass would be "cracked" into useful components for integrated physico-chemical conversion and bioconversion to gaseous or liquid fuels, power (direct combustion), and chemical and food products. A very large and constant biomass supply is required to sustain the biorefinery concept. Biorefineries would use high-value plant materials, as grains are used today, as well as agricultural, forest and municipal wastes and dedicated biomass crops. Biomass from various sources has roughly the same major components. Of these, lignocellulosic materials and their efficient conversion to platform chemicals or fuels pose a major technological challenge that biotechnology can address.

The history of the corn wet-milling industry may be a good example of how the biorefinery of the future might evolve. Initially, starch was the only product, but this has changed so that today corn is used to produce starch, ethanol, organic acids, sweeteners and other products. Biorefineries will become more complex and efficient over time, beginning with the production of higher-value products and evolving to co-production of lower-value fuels and power. The microbial world offers a significant and broad-based set of bioconversion possibilities to be exploited by the biorefinery of the future; filamentous fungi are an excellent example of a largely untapped resource. On-site (e.g. on-farm) pre-conversion of biomass to enable efficient transport is a possibility. In any case, innovative concepts for handling and moving materials to the biorefinery will have to be developed.

Microbial Biotechnology

The microbial world (i.e., archaea, bacteria, fungi, microalgae) represents well over 99% of the genetic and biochemical diversity of life on Earth. It is believed that less than 1% of all existing microorganisms have been isolated and cultivated under controlled laboratory conditions. Many microorganisms live under environmentally extreme conditions that cannot support plant or animal life. Examples are growth below the freezing point of water and at temperatures approaching 120°C, and growth under pressures such as those at the
bottom of the ocean, at pHs near 0 and 14, in the deep terrestrial subsurface (in the complete absence of O2 and sunlight), in rocks in cold and hot deserts, and in the saltiest bodies of water. This means that microorganisms have evolved to be able to extract energy and C for growth and reproduction from nearly every habitat yet explored. Modern genetic, genomic and biotechnology tools will enable the exploitation of the rich diversity inherent in the microbial world.

Microbial and enzyme biotechnology will become pervasive in the coming decades and will impact nearly every sector of the industrial and energy economy. Microbial biotechnologies will vary greatly; some will conceivably have a very large impact on GHG mitigation. These include microbial CO2 and N2 fixation and H2 production. Microalgae and photosynthetic bacteria can capture and fix CO2 directly from industrial flue gases or the atmosphere. Coupled CO2 and N2 fixation from the treatment of wastes and wastewaters could significantly impact global fertilizer production and use, with a concomitant reduction in energy use. There are six major metabolic routes to microbial H2 production, including direct and indirect coupling of H2 evolution to photosynthesis, and dark fermentative production from fossil fuels or biomass.

Microbial biotechnology will significantly impact the fossil fuel and chemical industries. Although their impact on GHG abatement during the first decades of this century could be small, these applications of biotechnology provide the knowledge and experience base on which direct applications of microbes and enzymes to energy production and GHG mitigation will be built.

Developing Countries

The economies of most developing countries tend to be heavily reliant on agriculture, especially in the tropics. Global warming is expected to affect crop production more seriously in the tropics than in other climatic zones, because many important crops are grown now at or near their upper temperature limits and because these countries often lack the scientific and management capabilities needed for adaptation to change.

Fuel is often in short supply in rural areas of these countries. Manures and other farm wastes are currently used for fuel, but biomass crops could satisfy the energy needs of farmers and rural populations more effectively. Modern biomass crops, while contributing to a reduction in fossil fuel use, also offer a new source of income generation for both large and small farmers.

Genetic engineering can increase the productivity of food, feed and fiber crops in the tropics as it now does in the temperate zones. The productivity and quality of biomass crops can also be improved by genetic engineering. Except for insect-resistant cotton, adoption of GM crops has been very limited in most developing countries. Adoption is constrained by the lack of trained personnel and research infrastructure to support an efficient and timely regulatory approval process. In addition, consumer opposition, in Europe and elsewhere, to the importation of GM crops engenders fear in some developing countries that their
agricultural products will be barred from 'first world' markets. Fear that seed supplies, technology and knowledge will be controlled by large multi-national corporations, leaving developing countries at their mercy, contributes to reluctance to adopt GM crop varieties. Removal of these constraints requires that developing-country scientists be adequately trained to conduct field trials with GM crops, and that regulators be adequately informed of the state of the science and of regulatory experience in other countries.

The benefits of new agricultural technologies applied in developing countries must be equitably distributed. Knowledge gained from the 'green revolution' experience indicates that new technology must be 'scale-neutral', in the sense that it can be profitably applied to farms of all sizes, and that all farmers must have access to credit, farm inputs and markets and receive essentially the same prices for their products. Secure land tenancy or ownership rights, and policies that do not discriminate against small farmers and landless laborers, are also essential.
Public Perceptions of Biotechnology and Ethical Issues

Public attitudes, as influenced by the mass media, and ethical concerns will determine the acceptability of biotechnology, in and of itself and as a tool for combating global climatic change. Issues perceived as 'ethical' strongly condition public perceptions and attitudes on this subject. The breadth of definitions applied to the term 'biotechnology' causes confusion among the public; most non-specialists today equate biotechnology with genetic engineering and cloning.

There already exists a social movement seriously opposed to agricultural applications of biotechnology. Ethical issues underlying this opposition are related to religious, environmental, health and control-of-technology concerns. The prospect that biotechnology will shift power relations in the global food system is also of ethical concern. The possibility that biotechnologies can aid developing-country agriculture provides an ethical argument in their favor; the possibility that these technologies can be applied to mitigation of global climate change offers another.

The general public lacks the 'experiential knowledge' that would qualify it to judge the merits of biotechnology. Neither does 'public wisdom' (a society's implicit knowledge and common sense) provide a basis for scientific judgment of the issue. But in many societies public wisdom does provide a strong emotional basis for the notion that manipulation of genes is a perversion of 'natural order'. The mass media fill the informational void created by this lack of direct experiential knowledge and public wisdom on the matter of biotechnology and its applications. But, depending on the type of media outlet, the competence of its reporting staff and its agenda, the media contribute as much to confusion as to enlightenment on the issues of biotechnology. Analyses show that the public is concerned for the environment, espouses varying levels of acceptance toward applications of biotechnology, and often carries conflicting views where these issues overlap.
The potential for using biotechnology to alleviate hunger and poverty and to mitigate climate change, or to aid in adaptation to its effects, provides a strong presumptive argument for doing so. But there are ethically based challenges that might reverse this judgment. The following arguments apply. Deployment of biotechnology to mitigate climate change is not justified unless the expected benefits exceed the expected costs in terms of food safety risk, environmental impacts and economic losses due to structural adjustments in the farm sector. Uncertainty with regard to both the costs and the benefits associated with the deployment of biotechnology makes consensus on the ethical justifiability of using biotechnology difficult to achieve. Ethical objections to biotechnology stem also from perceptions of risk when scientists 'play God' and manipulate natural organisms in 'unnatural' ways. Ethical concerns also stem from the perception (or reality) that the public, farmers and other stakeholders are not involved in formulating policy and decision making with regard to the creation of transgenic plants and their deployment, and that resources devoted to biotechnological approaches are drawn from a pool that could support improvements, such as organic farming, that are deemed to be more ecologically benign.

The perceptual and ethical concerns enumerated above argue for the importance of a process for planning all stages of technology development in a manner that is open to engaging the broadest possible constituency. Because of the developing world's particular weaknesses and vulnerability to the impacts of climatic change, the selection of research priorities for biotechnology should always be informed by socio-economic models that predict impacts on rural populations and resource-poor farmers.

ACKNOWLEDGMENTS

We thank the authors, commentators, breakout group chairs and reporters, and all others who participated in the workshop, for their efforts and insights. Thanks, too, to Ms. Laura Green and Ms. Paulette Wright for their efficient discharge of all logistical duties associated with planning and conduct of the workshop. We thank the Office of Biological and Environmental Research, U.S. Department of Energy, and the Electric Power Research Institute for financial and intellectual support of the workshop.

REFERENCES
1. Houghton, R.A. and J.L. Hackler, 2002. Carbon Flux to the Atmosphere from Land-Use Changes. In Trends: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tenn., U.S.A.
2. IPCC, 2001. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., Y. Ding, D.J. Griggs, M. Noguer, P.J. van der Linden, X. Dai, K. Maskell and C.A. Johnson (eds.)]. Cambridge Univ. Press, Cambridge, United Kingdom and New York, NY, USA. 881 pp.
3. Metting, F.B., J.L. Smith and J.S. Amthor, 1999. Science needs and new technology for soil carbon sequestration. Chapter 1 in N.J. Rosenberg, R.C. Izaurralde and E.M. Malone, eds., Carbon Sequestration in Soils: Science, Monitoring and Beyond. Proceedings of the St. Michaels Workshop, December 1998. Battelle Press, Columbus.
4. Parmesan, C. and G. Yohe, 2003. A globally coherent fingerprint of climate change impacts across natural systems. Nature 421:37-42.
APPENDIX I

PAPERS PRESENTED AT ST. MICHAELS II

Paper 1. Biotechnology and Climate Change. Jae Edmonds and John Clarke, Pacific Northwest National Laboratory, Joint Global Change Research Institute.
Paper 2. Genomes to Life: A Revolution in Biological Science and the Opportunities it Affords. Michael Knotek, Consultant to DOE.
Paper 3. Mitigation of Greenhouse Warming, Biomass-based Energy Supply Systems and Accelerated Domestication of Energy Crops. Gerald Tuskan, Stanley Wullschleger, Janet Cushman, Robin Graham, Oak Ridge National Laboratory, and Stephen Thomas, National Renewable Energy Laboratory.
Paper 4. A Role for Genetically Modified Organisms in Soil Carbon Sequestration. Charles W. Rice, Kansas State University, and Scott Angle, University of Maryland.
Paper 5. Bioconversion and Biorefineries of the Future. Linda Lasure, Pacific Northwest National Laboratory, and Min Zhang, National Renewable Energy Laboratory.
Paper 6. Microbial and Enzyme Biotechnology: 21st Century Opportunities for Greenhouse Gas Mitigation. F. Blaine Metting, Pacific Northwest National Laboratory, John Benemann, IEA Greenhouse Gas Program Microalgae Project, Elias Greenbaum, Oak Ridge National Laboratory, Michael Seibert, National Renewable Energy Laboratory, Alfred Spormann, Stanford University, Hideki Yukawa, Research Institute of Innovative Technology for the Earth, and John Houghton, U.S. Department of Energy.
Paper 7. The Socio-Economic Context. Paul Thompson, Purdue University, Joel Cohen, International Service for National Agricultural Research, and Toby Ten Eyck, Michigan State University.

Dr. Craig Venter, President of the Institute for Biological Energy Alternatives, spoke on the topic of "Genomics: Potential Solution of Our Energy Issues" during lunch on Monday, April 14.
COMMENTATORS

Paper 3: Pierre Crosson, Resources for the Future; Maurice Ku, Washington State University; Ian Noble, World Bank; Norman Rosenberg, JGCRI.
Paper 4: John Bennett, Saskatchewan; R. Cesar Izaurralde, JGCRI; Michael Miller, Argonne National Laboratory; Andrew Ogram, University of Florida.
Paper 5: James Hettenhaus, Consultant; Karl Sanford, Genencor; Roy Doi, University of California, Davis.
Paper 6: Jason Rupp, Bio; James Brainard, Los Alamos National Laboratory; Roger Prince, Exxon Mobil.
Paper 7: Ralph Hardy, Cornell University; Eric Lichtenberg, University of Maryland; Clive Spash, University of Aberdeen; Michael Taylor, Resources for the Future.
13. PERMANENT MONITORING PANEL MEETINGS AND REPORTS
MOTHER AND CHILD PERMANENT MONITORING PANEL
Activities 2002-2003

NATHALIE CHARPAK
Pediatrician, Bogotá, Colombia
This PMP was created in 2002 with the specific mission of decreasing the mortality and morbidity of both mother and infant (of less than one year) through an efficient and effective network with the international scientific community in general and the World Federation of Scientists in particular.

ACTIVITIES 2002-2003

During this period, we worked on a very specific target: the Low Birth Weight Infant (LBWI). Eighteen million LBWI are born each year around the world, 90% in developing countries and 50% in the South East Asia region. The map of the distribution of LBWI can be superimposed on the map of poverty in the world.

The Kangaroo Mother Care (KMC) method is a system of alternative care for these LBWI that allows a better and more rational use of the available human and technological resources in the neonatology or pediatric units of developing countries, without jeopardizing the survival and the quality of life of the LBWI. KMC was created in 1978 in Colombia by a pediatrician, but was applied in an empirical way that prevented diffusion of the method. The World Laboratory was the first to acclaim, promote and support formal and rigorous scientific evaluations from 1989 to 1996, and then to promote and support the worldwide diffusion of the KMC method up to now. Representatives of more than 25 countries came to Bogotá for training in KMC in a spirit of South-South collaboration, and the WHO published the KMC Guidelines 2 months ago. We can almost say that the 'kangaroo-sation' of the world is beginning. The first scientific evaluations and the ensuing publications in international journals have been the cornerstone of this diffusion, because they were the only way to convince colleagues from developing countries that KMC was not the 'poor man's' alternative.

The MCPMP of the WFS decided to include in its activities the monitoring of the impact of KMC on the health of the LBWI, nationally and internationally. In Erice, in August 2002, during a small workshop grouping KMC professionals from Asia, Europe and South America, a universal KMC database was designed.
Our idea was to give professionals working in KMC worldwide a practical tool for collecting their data and for evaluating their own practices. During the year, the database was corrected, translated and built into a free-access programme that is easy to use: EpiData, a Danish software program that can be downloaded legally and free of charge. The next step was the mailing of the database, with short instructions, to 10 teams in Africa, 11 teams in East Europe and Asia, and numerous teams in the USA, Europe and South America. We are still teaching the use of the database by mail. We plan to meet in 2004 and to begin the analysis of these data. Our goal is to try to define the best way to practice KMC, according to the setting, in order to obtain the best results for the health of these fragile babies we call premature, or LBW, infants.

The 2003 meeting of the WFS will be the occasion for a joint workshop with the Infectious Diseases PMP on a specific subject: Ethics, HIV and Mother-to-Child Transmission. Recommendations will be made and submitted to an international pediatric journal for publication. Prof. Guy de Thé will present the summary of the joint meeting.
LIMITS OF DEVELOPMENT PERMANENT MONITORING PANEL REPORT

HILTMAR SCHUBERT (Member and Chair)
Fraunhofer Institute for Chemical Technology, Pfinztal, Germany

JUAN MANUEL BORTHAGARAY (Member)
University of Buenos Aires, Buenos Aires, Argentina

GERALDO G. SERRA (Member and Meeting Coordinator)
University of São Paulo, São Paulo, Brazil
K.C. SIVARAMAKRISHNAN (Member)
Centre for Policy Research, New Delhi, India

ALBERTO GONZÁLEZ-POZO (Associate Member)
Universidad Autónoma Metropolitana, Xochimilco, Mexico
JIN FENGJUN (Invited Speaker)

CHRISTOPHER D. ELLIS (Member and Report Editor)
Texas A&M University, College Station, Texas, USA

A meeting of the Permanent Monitoring Panel was held on August 19, 2003 at the Chalonge Museum - Patrick M.S. Blackett Institute. In addition to the authors listed above, the following were in attendance. Members: Margaret S. Petersen (Hydrology & Water Resources, University of Arizona, Tucson, Arizona, USA). Associate Members: Bertil Galland (Writer and Historian, Buxy, France), Mbareck Diop (Science & Technology Advisor, Dakar, Senegal), Zenonas Rudzikas (Lithuanian Academy of Sciences, Vilnius, Lithuania).

SCOPE OF THE MEETING: URBAN MOBILITY

The theme of this year's meeting was urban mobility, understood as the general set of facilities and issues related to the movement of people and goods. Like biological organisms, megacities depend on a circulatory system that enables the mobility of people for education, healthcare, employment, and social and cultural interaction, the delivery of food and resources, the movement of emergency and maintenance service personnel, and the removal of exports and wastes. Breakdowns in this circulation system can result in diminished human health, safety, and quality of life. In large urban agglomerations, increased distances between residents and productive jobs, resources and activities, and waste disposal sites are putting greater demands on the circulation system. Rigid zoning and development regulations, while intended to protect health and safety, tend to
exacerbate problems of movement between points on the urban net. Increases in the size and complexity of urban agglomerations make traffic and transportation one of the most difficult problems confronting city managers today. In megacities, millions of unproductive hours are spent each day in traffic jams and poorly managed transit systems. Human respiratory illnesses and global climate change are directly affected by pollution emanating from motorcycles, cars, trucks, and buses. The effects of management strategies are often poorly understood and largely experimental. Unacceptable increases in traffic accidents, leading to injury and death, call for further research into the safe design and management of transportation systems. At the same time, many communities seem unable to muster enough financing and political will to implement reforms and improvements, at least at an effective rate.

DEFINING THE PRIORITIES TO BE PURSUED IN RELATION TO THE MOBILITY EMERGENCY

This PMP has been working on the hypothesis that the growth of megacities around the world constitutes a planetary emergency. Urban mobility, along with water availability and supply, sewage systems, water pollution, and solid waste collection and disposal, constitutes the framework of that emergency. Here, mobility is examined as a critical component in the functioning of megacities. The PMP members, distinguished by their international representation of diverse megacities, determined that the critical situation of traffic and transportation should be characterized and quantified in order to evaluate the emergency. Hence, we identified a list of priorities for research in urban science and technology related to mobility. Although mobility could be understood as a social category, this meeting concentrated on the general issues related to accessibility within large urban agglomerations. In any case, accessibility can be evaluated in terms of certain costs, such as health problems, environmental impacts, productive time lost, and financial resources spent. The urban transportation system is a part of the urban system as a whole, which includes other aspects such as population structure and dynamics, socio-economic characteristics, environment and, in particular, land use. Therefore, the consideration of an urban transportation system cannot ignore these systems with which it is interrelated.

In preparation for discussions at this year's PMP meeting, seven papers were written and shared among the panel and invited presenters. The megacities represented in these papers include Buenos Aires, São Paulo, Mexico City, Beijing, London and the European Union, Mumbai and other cities in India, and Los Angeles. Below is a summary of findings from these case studies.

FINDINGS FROM SEVEN MEGACITIES

Beijing, China

The city of Beijing and other cities in China are experiencing an accelerated rate of growth. In the past decade, the growth rate rose from 26.2 percent to 36.1 percent. Simultaneously, growth in the national economy (9.3%) has led to an increase in automobile ownership (13% annually). Fueling this growth in ownership is a perception
that automobiles symbolize prosperity. One of every twelve residents now owns a car in Beijing. The rapid growth in automobile ownership and use has placed a heavy burden on the cities' infrastructure. Increases in highway capacity lag behind increases in automobile use and have caused an increase in congestion. Conflicts between motorized and non-motorized traffic on the same roadways add to the congestion problem. The supply of parking facilities at destinations is currently inadequate to meet demand. Facility design and management strategies are being developed to (1) increase highway capacity, (2) separate motorized and non-motorized traffic, and (3) expand public transit. At present, however, a comprehensive system-wide management approach does not appear to have been implemented.

The environmental impacts attributed to transportation are high, but advances in emissions control show signs of improvement. The primary pollutants include high levels of carbon monoxide and hydrocarbons. It is unclear whether the pollution from an increased volume of private automobile use will offset these reductions in per-unit emissions. Several socioeconomic factors are related to issues affecting urban mobility, including the adoption of the suburban living model that has evolved around western cities. The building of radial highways outward from the city core and the movement of industries to the city perimeter enabled this model to thrive and are now producing conditions of urban sprawl. Commuting patterns from these suburbs put additional demand on highway capacity by increasing the total vehicle miles traveled.

Mumbai and cities in India

About 28% of India's 1.27 billion people live in cities. Of them, 285 million (40%) are in 35 cities with populations of over 1 million people. In four of the largest cities, with populations ranging from 5.6 to 16.3 million, the total number of vehicles increased between 116% and 371% over 15 years. In Delhi, 1 of every 13 residents owns a car and 1 of every 6 owns a two-wheeler. The growth of two-stroke engine scooters and motorcycles has been remarkable: India currently produces about 400,000 of these vehicles per month, of which only 4% are exported. Convenience, low cost, and the decline of public transport are responsible for this proliferation.

Congestion in most Indian cities results in part from the minimal space devoted to roadways (as low as 6.5% in Calcutta and 16% in Delhi). Evidence of the severity of congestion problems can be seen in the high number of accidents and fatalities (around 80,000 in the year 2000). While public transit options might alleviate congestion, there is hardly any public transport in Bangalore, the fastest growing city. Commuter trains, buses, and limited metro rail service account for 50 to 70% of the trips in other cities, but the limited number of vehicles in the public transport fleet has resulted in acute overcrowding. Plans to expand transit options in some cities have begun with increasing public support, but the systems are expensive and have been slow to develop for various administrative reasons. Capital funding, whether government grants or private loans, is biased toward private transport vehicles and infrastructure; much less is available for public transport options, adding further complexity to the problem.

The rise in vehicular use has led to serious air quality problems. Suspended particulate matter (SPM) levels exceed 200 micrograms per cubic meter in the 5 cities
reported. Civil society groups have documented claims that respiratory diseases now affect 30% of children in the city of Bangalore. Similar data for other cities and active lobbying from public groups have been necessary to overcome administrative fragmentation. However, recent legal direction by the Supreme Court has resulted in a ban on leaded fuels and in an accelerated adoption of public buses in Delhi that burn compressed natural gas as fuel. In addition, anticipated gains in vehicular fuel efficiency are in some measure expected to offset the increased pollution resulting from the rise in private vehicle use.

Institutional fragmentation of responsibilities for road and rail transit infrastructure, fleet additions and usage, traffic management, energy pricing, etc., has been a major obstacle in addressing problems of mobility. The policy of promoting uncontrolled growth of private vehicle infrastructure - sometimes disregarding norms of energy efficiency - at the expense of public transport operation has been a significant policy failure in Indian megacities.

Mexico City, Mexico

One of the largest of the megacities, Mexico City is home to 8.5 million residents within the city core, 18 million within the Valley of Mexico, and a total of 23 million within the larger megalopolitan area. Recent data show that daily trips per resident increased from 1.35 to 2 between 1983 and 1994. Travel times average between 30 minutes and 2 hours for about 50% of the population. There was also an increase in automobile ownership from 1 vehicle per 6 persons to 1 per 4 over a ten-year period. The NAFTA free trade agreement will eliminate high taxes on vehicles and is expected to result in an additional spike in private auto sales.

The transportation infrastructure occupies 27.5% of the land surface, of which 6% are main highways. Surface conditions are well maintained on the main highways but deteriorated on the secondary network. Traffic signaling is generally absent, leading to congestion that can slow traffic to 5 or 10 km/hour, or even complete gridlock. Efforts to bridge over the most congested intersections failed to produce adequate results due to congested conditions at either end of the ramps.

Public transportation accounts for 81.2 percent of all trips, with collective taxis and microbuses accounting for 54% of the total, followed by metro rail at 13.9% and buses at 10.3%. Automobile trips average 17.4 percent, with bicycles at 0.8 percent. Inconsistencies in bus transportation policy over time have led to an increase in privately owned collective taxis and minibuses. Recent efforts to implement full-size public buses have yet to yield tangible results. Metro lines were first introduced in 1969 and enjoyed consistent expansion through 1999. A rapid increase in ridership began to stabilize in the 1990s, with a slight drop in 1994.

The average vehicle age was 13 years in 1995 and is related to problems of high fuel emissions and consumption. In 1996, lead-free fuels were introduced, leading to better fuel efficiency. However, the impacts of transportation on air quality are severe, accounting for 75% of all emissions. The Metropolitan Index of Air Quality (IMECA) rates air quality on a scale from 0 to 500, with a score over 100 considered unsatisfactory. A score over 200 has serious health implications for children and the elderly; beyond 300, services and activities within the city begin to shut down. Tied to this scale are regulations for automobile use, such as daily driving bans, which become stricter as the score rises.
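As a minimal sketch of the scale's logic, assuming only the thresholds stated above (the function name and advisory labels are ours, not official IMECA terminology):

```python
def imeca_advisory(score: float) -> str:
    """Map an IMECA air-quality score (0-500 scale) to the advisory
    level described in the text. Labels are illustrative only."""
    if not 0 <= score <= 500:
        raise ValueError("IMECA is defined on a 0-500 scale")
    if score <= 100:
        return "satisfactory"
    if score <= 200:
        return "unsatisfactory"  # over 100 is considered unsatisfactory
    if score <= 300:
        return "health alert"    # serious implications for children and the elderly
    return "contingency"         # beyond 300, services and activities shut down
```

In practice the index is coupled to graduated restrictions, such as the daily driving bans that tighten as the score rises.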
Buenos Aires, Argentina

The city and metropolitan area of Buenos Aires together hold a population of approximately 12 million people. Recent changes in the political climate of Argentina have left the future direction of the city's transportation plans in flux. Historically, management of the system has shifted between public and private initiative, primarily due to ineffectiveness on the part of public managers. Private initiatives tend to favor wealthier residents. Infrastructure investments currently target improvements in support of automobiles, in part because private ownership has increased to an average of 2.2 persons per vehicle in the City of Buenos Aires (3.8 metropolitan average). The public/private split changed from 66/17 percent in 1970 to 43/37 percent in 1997. Radial highways have enabled the growth of gated and country club suburban communities within a 60 km commuting distance of the city, increasing the total daily vehicle miles traveled.

The primary losses in public transport have been in bus ridership (15,000 metropolitan area buses). Railway use, however, has increased (subway use declined but has now stabilized). Taxis and remises (35,000 in total) play an important and growing role as a relatively cheap and safe alternative to public or private transport (up to 8% of total trips in 1997). Many of these vehicles operate on liquid natural gas. Bicycle use is insignificant, while motorcycles are used mostly for messenger and delivery services.

Buenos Aires is the main container freight port in Argentina and moves imports primarily by truck rather than rail (only a 5% share). Little organization is provided for intermodal operation or cargo consolidation and splitting, with the exception of the internal organization of the largest private companies. Scarce or no regulations exist to coordinate the movement or size of freight vehicles during peak travel times, exacerbating congestion.
São Paulo, Brazil

São Paulo is a city of 12 million residents, with 18 million living in the metropolitan area. The ratio of cars to residents ranges from 1 in 2 in the city to 1 in 3.5 in the metropolitan region. These ratios have stabilized over time. Highway safety issues appear to have been successfully addressed, as recent traffic accident death rates have decreased by nearly 50 percent. The primary strategy for expanding road services is building underground metro lines, but the financial costs are considerably higher than the alternatives. Congestion management in the city includes banning certain vehicles on designated days.

Despite high private vehicle ownership, public transport is the dominant mode choice within the city. Bus travel is the primary mode choice, but its share is declining. The Metro system is successful and expanding; a suspected link exists between increases in Metro use and the decline of bus ridership. Freight truck use has also increased while freight train use has declined. Congestion due to the increased freight load on highways has been partially offset by the construction of ring roads that tend to be used more frequently by trucks.
Contributions to air pollution are attributed primarily to automobiles, as the metro trains run on electric power, and buses and taxis are adopting compressed natural gas and other alternative power choices. Stricter emissions standards and the adoption of cleaner automobile fuels, such as grain alcohol, continue to show promising results in improving air quality. As older vehicles are decommissioned, further reductions in emissions are expected.

Forecasts show increases in private car trips, trip costs, travel times, and carbon monoxide, and decreases in public mode share, traffic speeds, and low-income access to goods and services. Plans to expand highways, railways, light rail, and designated lanes have been made to address demand forecasts. However, financing is not expected to be available to fund all planned projects. Despite forecasts and planned investments in infrastructure, the number of trips per person decreased from 2.06 in 1987 to 1.87 in 1997, indicating a reduction in mobility. This trend is not well understood and deserves to be researched in greater detail, as it may indicate that some form of development threshold has been reached.

London and the European Union

London is a city of over 7 million inhabitants. Each day, 250,000 vehicles carrying 320,000 people converge on the downtown. To manage daily congestion, London has implemented a 'congestion charge' initiative that is enforced using imaging technology which can read and match license plates to approved lists at 23 locations. Average vehicle speed is expected to increase from 13 to 18 km/h, while volume is expected to drop 20%. Special exceptions are made for individuals with physical disabilities. Since the start of this program, an estimated 60% of retail outlets report a direct loss in sales; however, one forecast estimates a loss of 10 billion pounds per year simply due to congestion. Revenues from the congestion charges will be used for related transportation management research.

In addition to private vehicle traffic, approximately 1 million people enter the city by rail between 7 and 10 am daily. About 85% of the total volume of traffic is carried on public facilities, although these facilities are reported to have exceeded their design life and to be in need of repair. There have been no expansions to the existing subways since 1945. About 9,000 buses serve 240 million passengers annually, with plans to upgrade 70 key routes. A program establishing step-free access to 100 train stations and a large number of buses is being implemented to better serve disabled individuals.
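The enforcement step described above, matching camera-read plates against approved lists, can be sketched as follows. This is a hypothetical illustration only; the data structures and normalisation are our assumptions, not a description of London's actual systems:

```python
def plate_is_liable(plate: str, paid_today: set[str], exempt: set[str]) -> bool:
    """Return True if a camera-read plate should incur a penalty:
    it neither paid today's congestion charge nor holds an exemption
    (e.g., registered drivers with physical disabilities)."""
    normalised = plate.strip().upper().replace(" ", "")  # normalise the OCR'd registration
    return normalised not in paid_today and normalised not in exempt
```

The engineering burden in such a scheme lies less in this comparison than in reading plates reliably at the 23 enforcement locations and keeping the paid and exempt lists current.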
Los Angeles, CA, USA

Transportation in the United States is big business. Over $247 billion (US) is earmarked by the Federal government alone for surface transportation projects over the next 6 years. Los Angeles, a city of about 3.7 million people (9.8 million in Los Angeles County), spends nearly 4 billion (US) per year on transportation, combining federal (17%), state (12%), and local (71%) funds. Over 6 million vehicles were registered in LA County in 1998, while only 5.4 million people were licensed to drive. Fifty percent of the highways and arterial intersections operate at 'overloaded' conditions during the morning and afternoon commute hours.

In response, the LA Department of Transportation (LADOT) renewed in 2002 a Congestion Management Plan (first adopted in 1992) that includes provisions for expanded bus service, light rail, High-Occupancy Vehicle (HOV) lanes, signal coordination, and the promotion of ridesharing, walking, bicycle riding, and telecommuting. A short-range plan integrates proposed actions with progress monitoring, funding, and annual plan evaluations to enable flexibility during implementation. LADOT operates a transit system of 3,000 buses that serves 1.5 million passenger boardings per day. These vehicles operate on compressed natural gas, propane, and electric-hybrid power. Thirty-eight percent of the annual transportation budget is allocated to bus operations. Transit service improvements over the past 10 years account for up to 10% of mobility increases. LADOT plans to purchase vehicles with no-step accessibility for the physically disabled and currently offers dial-a-ride service for these individuals.

Daily system management includes Intelligent Transportation Systems (ITS), a management control center, freeway service call boxes, and education programs. ITS uses computer technology to optimize signal timing, provide signal priority to transit vehicles, provide real-time transit dispatch management, and provide real-time congestion and accident information for the traveling public. The management control center links pavement sensors and closed-circuit cameras that monitor the transportation network to an integrated, multi-agency center for real-time, comprehensive management. Freeway call boxes provide public security and enable faster emergency services. Education programs promote highway safety and alternative mode opportunities.

CONCLUSIONS

Urban populations rely on systems of mobility to support their basic daily needs. These include access to healthcare, education, human services, employment, food, and other essential human needs. Adequate provision must also be made for the movement of goods, emergency and maintenance services, and waste removal vehicles. Economic development is dependent on transport systems for moving imports and exports. Seven members of the PMP studied global megacities ranging from developing to developed countries. These were evaluated for sustainable urban mobility in terms of:

- Health problems (respiratory sicknesses, safety),
- Environmental impacts (air, land, and water quality, noise, climate change),
- Service levels (equitable development of public and private infrastructure),
- Congestion (productive time lost to low speeds and high travel times), and
- Financial priorities (maintenance, equitability).

Through these case studies, a number of important issues were identified that are currently affecting the sustainability of urban transport around the world. The most important of these has to do with the rapid increase of private vehicle ownership. Uncontrolled growth of private vehicle ownership in developed and developing countries is overloading transportation systems beyond traffic management controls. Ineffective management strategies and inequitable investments in infrastructure have led to highly congested roadway conditions with few or no alternative modes of travel to turn to for relief. The PMP members have agreed on the following prescriptions for improving the effectiveness of urban transport systems:
- Reduce PRIVATE vehicle use.
- Set priorities for PUBLIC transport: in central cities; in safety and reliability of service; in road space use (e.g., create separate bus lanes); in development spending; in taxation.
- Improve urban planning: by implementing transportation-land use integration strategies; by separating mixed cargo transport; by developing multi-modal policies for pedestrians, bicycles, buses and trains, for high-occupancy private vehicles, and for river and coastal area transport; and by improving traffic control and management.
- Improve air, land, and water environmental quality: by reducing toxic emissions, including ozone, particulate matter, carbon monoxide, nitrogen oxides, and volatile organic compounds; and by promoting clean energy consumption alternatives.

Urban mobility was found to be a global problem affecting cities, metropolitan areas, megacities, and megalopolitan areas. Immediate action on these matters is needed.
URBAN MOBILITY IN THE MEXICAN METROPOLIS
DR. ALBERTO GONZALEZ POZO
Universidad Autonoma Metropolitana Xochimilco, Mexico

Mexico is a country of approximately 100 million inhabitants in the year 2000, with three main metropolitan complexes: the Megalopolis of Central Mexico and the Metropolitan Areas of Guadalajara and Monterrey. The Megalopolis of Central Mexico, with 23 million people, comprises five metropolitan areas around a sixth, huge one: the Metropolitan Area of the Valley of Mexico (MZVM), with 18 million inhabitants. The MZVM consists of a central city (Mexico City or Federal District) with 8.5 million people, surrounded by another 9.5 million living in fifty-five municipalities of the neighbouring State of Mexico and one municipality in the State of Hidalgo.

[Figure: Total, urban, megalopolitan, metropolitan and capital population in Mexico, 2000. Total population of Mexico: 97,361,711 inhabitants; Metropolitan Area of the Valley of Mexico: 17,948,313; Mexico (capital) City: 8,591,309.]
The distances in this huge human settlement of 154,710 hectares are considerable: the largest about 70 km north-south and northwest-southeast, the shortest about 30 km east-west. The distances from downtown to the periphery range from 20 to 40 km. This affects the average travel time within Mexico City and in the Metropolitan Area as well. According to different forecasts, by 2020 the Mexican nation will have between 122 and 130 million people; the Megalopolis of Central Mexico between 34 and 38 million; the Metropolitan Area of the Valley of Mexico somewhere between 22 and 36 million inhabitants; and Mexico (the capital) City will stay between 9 and 10 million. The urban area will grow accordingly, to a surface of 185,000 hectares.¹ Thus, the problems of urban mobility in this huge megacity are growing as fast as its population, its increasing urban extension and the lack of appropriate public transportation policies.

INCREASING MOBILITY PATTERNS

Mobility of the metropolitan population increased from 19 million person-trips daily in 1983 to 30.7 million person-trips daily in 1994. The daily index per person also increased in the same period, from 1.35 to 2.0 trips/person/day, according to a city document of 1996.² Three types of movements are identified within the metropolitan territory: 56.3% stay within the limits of the Central City or Federal District, 23.1% stay within the limits of the surrounding municipalities of the neighbouring State of Mexico, 20.3% represent movements between both territories, and only 0.3% are movements to the rest of the Megalopolis of Central Mexico, as studied by Islas Rivera.³
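As a rough consistency check of these figures (our own arithmetic, not a calculation given in the source), the 1994 values imply a travelling population of about

$$\frac{30.7\times 10^{6}\ \text{trips/day}}{2.0\ \text{trips/person/day}} \approx 15.4\times 10^{6}\ \text{persons},$$

which is broadly in line with the metropolitan population of the mid-1990s.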
Studies by the same author show how the different means of transportation take care of all these movements. Public transportation systems are responsible for 81.2% of the movements, private cars cover only 17.4%, bicycles and motorcycles 0.8%, and
other means (on foot, mostly) 0.6%. The 81.2% of movements made on public transportation is divided among several subsystems, as represented in the following table.

Travels/Person/Day in Public Transportation Subsystems (simplified per Islas Rivera, "5.5...", op. cit.)

TYPE OF PUBLIC TRANSPORTATION            | travels/person/day (millions) | %
Collective Transportation System (Metro) | 3.234  | 13.90
Trams, trolleybuses                      | 0.131  | 0.60
Urban and suburban buses                 | 2.368  | 10.30
Collective taxis and microbuses          | 12.510 | 54.00
Taxis                                    | 0.568  | 2.40
TOTAL PUBLIC TRANSPORTATION              | 18.811 | 81.2
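The percentages in this table are shares of ALL daily person-trips, not of public transport alone, which is why they sum to 81.2% rather than 100%. A minimal sketch (ours; the figures are hard-coded from the table above) makes the arithmetic explicit and approximately reproduces the printed percentages:

```python
# Daily trips by public-transport subsystem, in millions (from the table).
trips_millions = {
    "Metro": 3.234,
    "Trams, trolleybuses": 0.131,
    "Urban and suburban buses": 2.368,
    "Collective taxis and microbuses": 12.510,
    "Taxis": 0.568,
}

public_total = sum(trips_millions.values())   # about 18.81 million trips/day
all_modes_total = public_total / 0.812        # about 23.2 million trips/day in all modes

for mode, trips in trips_millions.items():
    print(f"{mode}: {100 * trips / all_modes_total:.2f}% of all trips")
print(f"Public transportation total: {100 * public_total / all_modes_total:.1f}%")
```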
THE MULTIPLE REASONS AND OPTIONS FOR MOVEMENT IN THE MEGACITY

A survey conducted in 1996 by Navarro and Guevara shows that urban mobility in the megacity cannot be reduced to abstract notions of spatial displacement between origin and destination points. It is, rather, a multifactorial phenomenon that includes economic, social and urban processes, where the work and study schedules of the different members of a family, their respective roles and their income are of great importance in the complex pattern of urban transportation. The reasons for travel of the head of family, wife (or husband) and sons differ, as shown in the following table:

MAIN REASONS FOR URBAN TRANSPORTATION IN THE MEXICAN MEGACITY (simplified per Navarro and Guevara, op. cit.)

FIRST REASON OF TRAVEL                           | FAMILY HEAD | WIFE (OR HUSBAND) | SONS
Work (and second work)                           | 78.23%      | 29.22%            | 40.38%
Study                                            | 0.61%       | 0.89%             | 50.92%
To leave/pick up children at/from several places | 1.21%       | 9.79%             | 0.54%
Shopping, services, recreation, other            | 8.73%       | 29.34%            | 4.69%
None                                             | 11.16%      | 28.25%            | 3.47%
The study also identifies the relative importance of public versus private transportation systems in the Megacity. The family head tends to use the car more than the rest of the family.

[Table: Types of transportation chosen by each family member in the megacity (simplified per Navarro and Guevara, op. cit.), comparing the family head's trips with those of the rest of the family across nine modes: private car, taxi, collective taxi or minibus, bus (urban and suburban), tram/trolleybus, Metro, private (company or school) bus, bicycle and motorcycle, and on foot. Among the clearly legible values, the private car accounts for 40.70% of the family head's trips against 19.70% for the rest of the family, while the collective taxi or minibus accounts for 57.07% of the trips of the rest of the family; the remaining legible shares, each below 25%, cannot be unambiguously assigned.]
The rest of the survey yields many more interesting details, among them:
- The peak hours are 06:00 to 08:00, 13:00 to 15:00, and 17:00 to 19:00 (with differences between the family head and the rest of the family members according to their schedules);
- The average travel time in a typical day is between 30 minutes and two hours for 50% of the population (with 3% of people spending more than two hours daily in transportation);
- More than half of the population surveyed uses public transportation only five days a week;
- The weekly public transportation cost per family head is lower in the Central City (between 4.00 and 13.60 US$) and higher in the metropolitan periphery in the State of Mexico (between 6.00 and 23.70 US$). The cost is lower for the rest of the family members, both in the Central City and in the metropolitan periphery.⁵

A LARGE AND FAST-GROWING MOTOR-VEHICLE FLEET

As already shown, the private car has low transportation efficiency, but it plays a significant role in the size of the vehicular fleet in the Mexican Megacity. The vehicle fleet of the Metropolitan Area of the Valley of Mexico in 1998 was approximately 4 million vehicles and represented 35% of the country's total. Only five years
later, in 2003, the fleet had grown to 5.4 million vehicles.⁷ Assuming there were 4.5 million vehicles in 2000, this gives an average of one vehicle for every four inhabitants. Ten years earlier, in 1990, the average was one for every six people. During most of the XXth century, automotive vehicles were a luxury for the great majority of the Mexican population. The price of vehicles was almost double that of vehicles sold in the USA or Europe, because of high import taxes and registration fees. Since Mexico joined the North American Free Trade Agreement (NAFTA) in 1994, import taxes are expected to disappear next year, thus stimulating the car market to a greater extent. The distribution of vehicles according to their function is as follows (according to Islas Rivera, op. cit.):

[Figure: Vehicle fleet distribution (%); among the legible shares, lorries and freight vehicles account for 7%.]
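The motorization ratio quoted above follows directly (our own restatement of the source's arithmetic, nothing more):

$$\frac{4.5\times10^{6}\ \text{vehicles}}{18\times10^{6}\ \text{inhabitants}} = 0.25 \approx 1\ \text{vehicle per}\ 4\ \text{inhabitants}.$$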
AVERAGE AGE OF THE VEHICLE FLEET

The average age of the vehicle fleet is evolving. In 1995, the average age of private cars was 13 years, and thousands of them were more than 40 years old. A large number of taxis and other public transportation vehicles were 20 years old or more.⁸ Lorries and freight vehicles were in the same situation. But from 1998 to the present the situation has improved. The acquisition of new vehicles has been encouraged by market facilities (more credit incentives at fixed prices) and by the help the City Government gives to the owners of public transportation vehicles. The average age of cars now correlates with the income status of their owners. Brand-new vehicles circulate and stay mostly in high- or medium-high-income areas of the Megacity, while older models are frequently seen in popular areas or even in squatters' settlements.
FUEL CONSUMPTION AND COST

In 1997, daily fuel consumption in the Metropolitan Area of the Valley of Mexico (MAVM) reached 43.3 million litres, distributed as follows: transportation 54.23%, industrial 25.74%, electricity generation 10.20%, and domestic 9.83%.
The quality of fuel for transportation has improved since 1996, when lead-free fuel was introduced on the market. At the same time, there have been improvements in motor and fuel efficiency. The car fleet has grown, but consumption was lower in 1997 than in 1994.

[Figure: MAVM fuel consumption, in millions of litres per day, 1991-1997; the vertical axis runs from 18.00 to 26.00.]
There are three main fuel categories, and their respective prices are presently as follows:

TYPE OF FUEL                 | PRICE (US$/litre)
Diesel (low sulphur)         | 0.53
Fuel of 80 octane (unleaded) | 0.57
Fuel of 95 octane (unleaded) | 0.63
THE ERRATIC POLICY TOWARDS BUSES AND MINIBUSES

Since their beginnings in 1930, the bus systems were in the hands of licensed private entities. But between 1970 and 1990 the city government made a strong effort to control the whole bus system. The experiment failed, and this gave rise to the proliferation of small owners of collective cabs, which began to operate in 1950: the so-called "peseros" (because the fare was equivalent to one Mexican peso, the official currency). Many of them evolved gradually from 5-passenger cabs to adapted VW Combis with 10 passengers, and finally to minibuses of 20 to 25 passengers. At the start of the XXIst century, this inefficient system plays a big role in the movement of city dwellers of low and middle income. Many of the traffic accidents and much of the insecurity of public transportation are attributed to minibuses. Most of them are old, and they are also responsible for atmospheric pollution. Only recently has the City Government begun to try to reshape the system around public buses of 40 to 50 passengers. The situation in the municipalities of the surrounding metropolitan periphery is even worse.⁹
Buses - Subsystem in the Central City, 1997 (simplified according to Islas Rivera, "5.5...", op. cit.)

FEATURES ANALYZED           | 1997
Routes                      | 176
Length (km)                 | 5,934 (4,071 in 1994)
Vehicular fleet             | --
Vehicles in operation       | 2,780
Daily passengers (millions) | 1.9
THE METRO AND OTHER "CLEAN" SYSTEMS IN THE MEGACITY

The Collective Transportation System-Metro has evolved over the last three decades. Between 1969 and 1972, the first three underground lines totaled 41.4 km. A second stage, between 1978 and 1982, added two more lines and some extensions with another 38.0 km, for a total of 79.4 km. The third stage added three more lines and further extensions with a length of another 61 km. A fourth stage, between 1991 and 1994, added another line and 37 km, for a total of 177.41 km of lines in service. The last effort was made in 1999, with a new line of 23.7 km. The complete system now consists of 11 lines, 167 stations and 189.41 km.¹⁰ Besides these, there are two complementary lines of the so-called "light tram", a tramway system on exclusive lanes, each line about 10 km in length.
The number of passengers on the Metro system increased steadily between 1972 and 1988, from 400 million passengers/year to 1,500 million passengers/year. From 1990 onwards demand stabilized and even started to decrease, with 1,422 million passengers in 1994. Other "clean" collective transportation systems, such as electric trams and trolleybuses, now play a minor role in the effort to move millions of people. It is sad to note that Mexico City had a very efficient and advanced tram system until the first half of the XXth century. Then the tendency shifted to bus transportation, and the dense tramway network was dismantled during the 1960s and 1970s.

THE STREET SYSTEM: LIKE THE NEVER-ENDING PENELOPE FABRIC

The street system for a metropolis of 155,000 hectares uses 27.5% of that surface, that is, nearly 42,600 hectares that must be paved and maintained. Expressed in terms of length, the street network extends about 10,437 km, 89% located in the Central City and 11% in the metropolitan periphery. In the Central City, 6% of the street network consists of main streets, as shown in the following table:¹¹
Network of Main Streets in the Central City (simplified, according to Laboratorio de la Ciudad de Mexico, 2000)

TYPE AND NUMBER OF MAIN STREETS             | LENGTH (km)
9 main rapid-transit expressways            | 164
23 "Ejes viales" or orthogonal main avenues | 332
10 other "historical" main avenues          | 105
Total main streets                          | 602
The road paving in the main streets is fair, but in the secondary network it shows great differences that reflect the social status prevailing in each of the city's neighbourhoods. In many places it is rough, full of holes and bumps that force experienced drivers to reduce their speed and damage the cars of careless drivers. In the rainy season there are also unpaved streets that flood and can become dangerous traps. The road system reflects the different attitudes towards vehicular transit throughout the XXth century: most of the "historical" main avenues are in fact suburban streets laid out in the XIXth century as promenades. Others were opened in the first half of the XXth century, or are prolongations of other streets. Most of them have cross-sections of between 30 and 40 meters. They allow average speeds of between 40 and 60 km/hour (except in the rush hour).
The orthogonal system of "Ejes Viales" (Transit Axes) was adapted between 1980 and 1982, using wide sections of the existing network or widening it where necessary to cross-sections of between 30 and 50 meters. These axes function very well when the traffic lights are properly synchronized, with average speeds of 50 to 70 km/hour, descending to 20-40 km/hour downtown or in the peak hours. The rapid-transit expressways have cross-sections of between 50 and 70 meters, most of them with separate central and lateral lanes. They have few signalized intersections and allow a transit speed of 60-80 km/hour,¹² except in peak hours, when some segments scarcely allow very slow movement of 5 to 10 km/hour, or even force a complete standstill for some minutes.
Recently, the City Government decided to start a program of "Second Storeys" on one of these expressways, the Anillo Periférico (Peripheral Ring). A segment of only 5 kilometers was built around an important intersection, called "el Distribuidor San Antonio" (the San Antonio interchange). It was an impressive and costly undertaking, with elevated lanes that climb rapidly to 35 meters, but the results have been rather disappointing: once a vehicle is back at ground level, it must face the same congestion it tried to avoid high above the ground.
The Mexican experience with these gigantic works includes a long history of urban disasters. Bad road engineering is responsible for huge interchanges and bridges like the one built in the seventies in the historic center of Tacuba, one of the ancient Aztec settlements that flourished on the shore of the great Lake of Mexico between 1300 and 1521 A.D.

TRANSIT INFRACTIONS AND ACCIDENTS AND THEIR RELATIONSHIP TO INCREASING SPEEDS

Traffic offenses are frequent, even if not all of them are reported. A 1992 study of the Mexico City Government cited by Islas Rivera¹³ reports more than 1.3 million infractions, as shown in the following table:
Transit Infractions in the Central City, 1991 (according to Islas Rivera, "5.5...", 2000)

TYPE OF INFRACTION                | QUANTITY  | %
Speeding                          | 35,506    | 2.71
Parking in a prohibited place     | 840,995   | 64.23
Not stopping at a red light       | 215,638   | 16.47
Driving in the opposite direction | 73,711    | 5.63
Not keeping to the right lane     | 10,235    | 0.79
Other                             | 133,204   | 10.17
TOTAL                             | 1,309,289 | 100.00
In 1994, a total of 27,264 transit accidents were reported in the Central City, 1,455 of them with a fatal outcome. They affect not only the occupants of vehicles but pedestrians too.

ENVIRONMENTAL IMPACTS OF METROPOLITAN TRANSPORTATION SYSTEMS

The atmosphere of the Mexican Megacity is considered one of the most polluted in the world. Estimates for 1989 and 1994 identified emissions of 4.3 and 4 million tons of pollutant substances expelled annually into the urban air. The last official inventory, from 1996, showed an annual emission of 2.7 million tons of pollutants.¹⁴ Since then, the episodes of serious pollution contingencies of the eighties and early nineties have diminished dramatically. But notwithstanding the lower quantities, the scale of the pollution is still a matter of serious concern for the federal and local authorities.
Inventory of Pollutant Emissions in the Megacity (tons/year), 1996 (according to Lezama, "6.4...", op. cit. 2000)

TYPE OF POLLUTANT      | Industry | Services | Transport | Vegetation and soils | Total
Carbon monoxide (CO)   | 10,345   | 4,526    | 2,086,938 | 0                    | 2,101,809
Nitrogen dioxide (NO2) | 29,448   | 11,006   | 117,928   | 500                  | 158,882
Hydrocarbons (HC)      | 17,693   | 235,173  | 68,298    | 31,390               | 452,554
TOTAL (all pollutants) | 81,926   | 254,647  | 2,388,423 | 49,962               | 2,774,958

[The rows for total suspended particles (PST), breathable-fraction particles (PM10) and sulphur dioxide (SO2) are only partially legible in the source; the surviving values are 7,619; 355; 16,821; 3,587; 5,752; 7,974; 9,497; 18,072; 27,569 and 26,170 tons/year.]
The table shows that carbon monoxide represents about 75% of the total amount of pollutants, and that 99% of it comes from transportation. Nitrogen dioxide, too, plays a significant role in the pollution due to transportation. One of the measures taken from the late eighties onwards is the daily monitoring and public dissemination of the Metropolitan Index of Air Quality (IMECA, for its initials in Spanish). The index has a scale from 0 to 500 units. If it stays under 100, the air quality is considered satisfactory, without consequences to health; between 100 and 200 it is considered unsatisfactory, with minor consequences to health. If the scale climbs to between 200 and 300 units, the air quality is considered bad, with possible serious consequences for the health of children or senior citizens. Finally, if the range is between 300 and 500 units, the air quality is considered very bad, with harmful consequences even for healthy people. For each range there are appropriate countermeasures that must be automatically applied: between 0 and 100, old cars without catalytic converters must stay at home one day each week; between 100 and 200 they must stay two days, and new cars one day, plus some measures affecting industry. Higher ranges trigger emergencies with serious consequences for cars, industries and people's behaviour (children cannot attend school, for instance). Besides, each car must be checked and have its emissions certified twice a year, with severe penalties for those who do not comply with the procedure.
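As an illustration only (this is our own sketch, not an official algorithm or code from the source), the IMECA ranges and their qualitative readings map naturally onto a small classifier:

```python
def imeca_category(index: int) -> str:
    """Qualitative air-quality reading for an IMECA value (0-500 scale)."""
    if index <= 100:
        return "satisfactory: no consequences to health"
    if index <= 200:
        return "unsatisfactory: minor consequences to health"
    if index <= 300:
        return "bad: possible serious consequences for children and the elderly"
    return "very bad: harmful consequences even for healthy people"

for reading in (80, 150, 250, 350):
    print(reading, "->", imeca_category(reading))
```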
SOME CONCLUSIONS, LOOKING AHEAD

The next two decades will see the transition from the Mexican Megacity of 18 million people to the Megalopolis of Central Mexico, a vast urbanized region with 36 to 38 million inhabitants. Mobility in that huge system will face challenges of another scale, but the present problems must be solved first:

- The public, non-polluting transportation systems must be brought up to date and extended: the Metro system first of all, but the tram and trolleybus subsystems can be improved too. The next step is the construction of fast suburban railroads connecting the different metropolitan zones within the Megalopolis.
- A serious effort must be made to organize buses and minibuses. Minibuses can be used for short segments in the periphery and in old quarters with narrow streets or steep roads. Buses with advanced technology may still be useful if confined to special lanes on main avenues. Both buses and minibuses must be converted to natural gas. Serious management practices, including weekly tickets and free transfers between different means of transportation, may give public transportation systems a better chance.
- The idea of building second storeys over the main expressways must be carefully reconsidered. They favor a small segment of car drivers, while many other improvements to the road network are still missing: better intersection designs (including bridges and pedestrian bridges), better synchronization of traffic lights, better and more durable pavements. Besides, it is necessary to start a master plan of new roads in the periphery, especially roads that keep vehicles that merely pass through the metropolitan area towards another destination from entering the central city unnecessarily.
- Facilities to encourage local or short-distance movements that can be made by bicycle or motorcycle (such as special lanes) must be introduced throughout the metropolitan area. Pedestrians also need more dedicated paths and bridges, accessible to impaired people.
NOTES:
1. Demographic data for 2000 are from Garza, Gustavo, "4.2 Ámbitos de expansión territorial" and Porras, Agustín, "10.1 Proyección demográfica al año 2020", both in Garza, Gustavo (coord.), La Ciudad de México en el fin del segundo milenio, Gobierno del Distrito Federal / El Colegio de México, Mexico, 2000. Some numbers have been rounded.
2. Ciudad de México, Programa General de Desarrollo Urbano del Distrito Federal, Versión 1996, Mexico.
3. Islas Rivera, Víctor, "5.4 Red vial", in Garza, Gustavo, op. cit., 2000.
4. Navarro Benítez, Bernardo and Guevara González, Iris, Área Metropolitana de la Ciudad de México: Prácticas de desplazamiento y horarios laborales, Universidad Autónoma Metropolitana / Universidad Nacional Autónoma de México / Massachusetts Institute of Technology, Mexico, 2000.
5. The cost of the ticket for Metro, buses, trams and trolleys is about 0.20 US$ in 2003. The tickets for collective taxis and minibuses start from the same price and rise according to distance. Only in the Metro system is it possible to transfer from one line to another without buying a new ticket.
6. Islas Rivera, Víctor, "5.4...", op. cit., 2000.
7. Interview with Bernardo Navarro and Juan José Santibáñez, "Requiere el transporte público respuestas inmediatas", in Semanario de la UAM, vol. IX, No. 42, July 2003, Mexico.
8. Islas Rivera, Víctor, "5.4...", op. cit., 2000.
9. Islas Rivera, Víctor, "5.5 Transporte metropolitano de pasajeros", in Garza, Gustavo, op. cit., 2000.
10. Navarro Benítez, Bernardo and Bacelis Roldán, Sandra, "5.6 El Metro como sistema de transportación masiva", in Garza, Gustavo, op. cit.
11. Laboratorio de la Ciudad de México, ZMVM, Mexico, 2000.
12. The maximum speed allowed in Mexico City is 80 km/hour.
13. Islas Rivera, "5.4...", op. cit., 2000.
14. Lezama, José Luis, "6.4 Contaminación del aire", in Garza, op. cit., 2000.
MOBILITY IN MEGACITIES: INDIAN SCENARIO
PROFESSOR K. C. SIVARAMAKRISHNAN
Centre for Policy Research, New Delhi, India

INTRODUCTION

India's total population according to the 2001 Census is 1.027 billion. The urban share of this population is about 285 million, or about 27.78%. This is about 21% of Asia's and 10% of the world's urban population. However, these simple averages do not convey an adequate picture of the country's urban scene. Much of the urban growth is concentrated in large cities. There are 35 cities of more than one million people, with a total population of about 108 million, representing nearly 40% of the urban population. These million-plus, or 'metropolitan cities' as they are usually referred to, have been growing steadily: at the time of the 1991 Census there were only 24; now there are 35. Those with a population of more than five million are commonly referred to as 'megacities'. There are 6 of these at present, namely Greater Bombay (recently renamed Mumbai), Calcutta (renamed Kolkata), Delhi, Madras (renamed Chennai), Bangalore and Hyderabad. The populations of these 6 megacities since 1981 are given in the table below. It is emphasized that these figures follow the areas defined by the Census, which may not cover the full extent of the metropolitan agglomerations.

Growth Rate of Metropolitan Cities
Sl. No | Metropolitan Cities/UAs | Population 1981 | Population 1991 | Population 2001 | Exponential Growth Rate 1981-91 | 1991-01
1      | Greater Mumbai          | 8,243,405       | 12,596,243      | 16,368,084      | 4.22 | 2.62
2      | Kolkata                 | 9,194,018       | 11,021,918      | 13,216,546      | 1.72 | 1.82
3      | Delhi                   | 5,729,283       | 8,419,084       | 12,791,458      | 3.80 | 4.18
4      | Chennai                 | 4,289,347       | 5,421,985       | 6,424,624       | 2.23 | 1.70
5      | Bangalore               | 2,921,751       | 4,130,288       | 5,686,844       | 3.36 | 3.20
6      | Hyderabad               | 2,545,836       | 4,344,437       | 5,533,640       | 5.20 | 2.42
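The "exponential growth rate" columns are consistent with the standard census formula r = 100 · ln(P2/P1)/t. A minimal sketch (ours; the function name and worked example are illustrative, not from the source):

```python
from math import log

def exp_growth_rate(p1: int, p2: int, years: float = 10.0) -> float:
    """Annual exponential growth rate, in percent, between two census counts."""
    return 100.0 * log(p2 / p1) / years

# Greater Mumbai, 1991 -> 2001, reproduces the table's 2.62:
print(round(exp_growth_rate(12_596_243, 16_368_084), 2))
```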
VEHICULAR GROWTH

The growth of motorized vehicles in most of the metropolitan cities has been dramatically high. In the 15-year period between 1980 and 1995, the number of vehicles increased by 334% in Delhi, 229% in Calcutta, 116% in Bombay and 371% in Bangalore. By comparison, the population increases were 67, 25, 29 and 60% respectively. In the metropolitan cities as a whole, the number of motorized vehicles per thousand population is about 40. However, in some cities this ratio is much higher. The table below indicates the current levels as well as growth in recent years.

Details of vehicles registered in major cities
[City names are not separately legible in the source; the five blocks below cover the megacities discussed in the text, and the third block tallies with Delhi, whose figures are quoted below.]

Date       | Two-wheelers | Autos/tempos | Cars/cabs | Buses  | Goods carriages | Tractors | Total
01.04.1985 | 195,210      | 12,375       | 58,971    | 3,812  | 12,217          | 5,881    | 288,466
01.04.1990 | 415,854      | 15,754       | 85,037    | 4,243  | 18,298          | 6,555    | 545,741
01.04.1995 | 594,639      | 34,335       | 120,103   | 6,454  | 24,625          | 14,220   | 794,376
01.04.2002 | 1,183,752    | 64,520       | 259,001   | 10,077 | 49,037          | 30,171   | 1,596,558

01.04.1985 | 122,123      | 6,115        | 55,529    | 2,945  | 12,337          | 798      | 199,847
01.04.1990 | 338,486      | 5,580        | 113,783   | 3,657  | 28,917          | 1,871    | 492,294
01.04.1995 | 565,451      | 20,849       | 148,896   | 4,317  | 23,475          | 1,460    | 764,448
01.04.2002 | 988,630      | 44,771       | 250,080   | 4,541  | 31,459          | 6,202    | 1,325,683

01.04.1985 | 579,064      | 30,017       | 166,263   | 13,522 | 52,370          | 0        | 841,236
01.04.1990 | 1,113,236    | 58,934       | 354,810   | 17,844 | 92,778          | 0        | 1,637,602
01.04.1995 | 1,617,732    | 74,981       | 588,309   | 26,202 | 125,071         | 0        | 2,432,295
01.04.2002 | 2,265,955    | 86,985       | 989,522   | 47,578 | 161,650         | 0        | 3,551,690

01.04.1985 | 167,338      | 32,351       | 236,186   | 22,506 | 46,840          | 3,427    | 508,648
01.04.1990 | 305,099      | 41,814       | 302,122   | 10,878 | 75,405          | 3,623    | 738,941
01.04.1995 | 434,802      | 101,597      | 313,537   | 16,291 | 83,517          | 6,070    | 955,814
01.04.2002 | 787,527      | 212,862      | 547,224   | 20,718 | 124,718         | 8,215    | 1,701,264

01.04.1985 | 160,556      | 4,968        | 177,736   | 15,736 | 62,514          | 10,375   | 431,885
01.04.1990 | 217,304      | 7,100        | 219,079   | 18,330 | 75,083          | 11,480   | 548,376
01.04.1995 | 294,110      | 10,146       | 270,039   | 21,352 | 90,179          | 12,703   | 698,529
01.04.2002 | 467,756      | 27,003       | 380,079   | 28,923 | 105,687         | 28,003   | 1,037,451
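A small sketch (ours) of the kind of reading the next paragraph draws from this table. The third city block is taken to be Delhi on the strength of the figures quoted below, and the category labels follow our reconstruction of the table's columns:

```python
# Delhi registrations as of 01.04.2002, from the third block of the table.
delhi_2002 = {"two-wheelers": 2_265_955, "autos/tempos": 86_985,
              "cars/cabs": 989_522, "buses": 47_578,
              "goods carriages": 161_650, "tractors": 0}
total_2002 = sum(delhi_2002.values())                 # 3,551,690 vehicles

print(f"Two-wheeler share: {100 * delhi_2002['two-wheelers'] / total_2002:.0f}%")  # ~64%
print(f"Fleet growth since 1985: {total_2002 / 841_236:.1f}x")                     # ~4.2x
```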
The two-wheeler phenomenon is rather unique to Indian cities. The Lambretta-type scooter with the two-stroke engine, discarded in European cities by the 1970s, gained massive acceptance in Asian cities and in India in particular. From a small number of 27,000 for the whole country in 1951, the production of these vehicles jumped to one million by 1976 and 2.3 million in 1996. In 2000 the number of scooters and motorcycles produced in the country reached 3.4 million. Currently India produces more than 400,000 two-wheelers every month. About 16,000 are exported, and the rest find their way to the metropolitan cities. Here again, the largest numbers are to be found in the megacities. Delhi is the scooter capital of the country: out of a total of 3.55 million vehicles registered between April 1985 and April 2002, nearly 2.27 million are two-wheelers. Until recently these vehicles used 2-stroke engines; only in the past three years has industry shifted to 4-stroke engine technology. As indicated in the earlier table, the predominance of the two-wheeler is the same in the other cities of Bangalore, Madras, Bombay and Calcutta. It is important to understand that scooters and motorcycles represent the quickest changeover from public to private transport because of personal convenience. The capital cost, on average, is about US$1,000, and the market ensures easy and quick financing. The decline in the quality of public transport has also been an important contributing factor.

CONGESTION

One obvious outcome of vehicular growth and, in particular, the two-wheeler phenomenon is congestion. Most Indian cities have very limited road space, ranging from 6 to 10% of the total area. Delhi is an exception, with about 16% of the total space utilized by roads, but this proportion varies from one part of the city to another. Congestion is therefore widespread, and vehicle speeds average between 8 and 12 km per hour in the megacities. The congestion factor is also reflected in accidents and fatalities. For the country as a whole, about 400,000 road accidents were reported during the year 2000, resulting in about 79,000 fatalities and 400,000 persons injured. The proportion of these in metropolitan cities is high. Figures available for 1998 indicate that in the five megacities considered in this paper, accidents numbered about 60,000, with 4,355 fatalities.

PASSENGER MOBILITY

Given the vehicular growth and congestion, we can consider the situation regarding passenger mobility. It is estimated that per capita trip generation in megacities is about 0.8 to 1.1. The broad break-up of the modal split of the transit volume for the 5 megacities is indicated in the table below:
Modal Split (Percentages)

[Table partially legible in the source. It gave, for the five megacities (the column heads Madras, Bangalore, Delhi and Calcutta are legible), the transit volume in million trips (legible values: 3.95, 3.33, 11.95 and 9.3) and the percentage shares of the principal modes; among the legible modal shares are two-wheelers (7, 50 and 30) and three-wheelers (3.8, 20, 1.7 and 11).]

Public transport modes in these megacities have had different histories. In the case of Bombay and Calcutta, where growth has been mainly linear, the suburban railways carry a significant volume of transit. The suburban railways are not the same as intercity commuter trains, though they are operated by the same railway companies sharing the track infrastructure. In both Calcutta and Bombay, the suburban trains are electric multiple units, transporting passengers from the suburbs through the metropolitan areas and terminating in the central city. Stations are located all along the way, and passengers therefore make use of these trains within the metropolitan area as well. In the case of Madras, suburban train services were developed on the metre gauge in three radial directions. In all three of these cities the trains account for nearly one third or more of the transit volume. In the case of Delhi, buses are the principal means of transit; in Madras and Calcutta they account for important shares of transit as well. Because of the limited road space, the need to serve different localities and flexibility in operating schedules, mini-buses have also been introduced in these megacities. However, the volume they carry is still limited.

Public policy in regard to mobility in megacities can be considered broadly under 4 categories: firstly, the adoption of mass transit modes; secondly, air pollution caused by vehicles; thirdly, energy needs; and fourthly, freight movement. The evolution and status of public policy in regard to all these categories are important factors determining sustainability.

MASS TRANSIT

The need for mass transit and the scope for its adoption in the country's major cities has been discussed for more than 3 decades. Feasibility studies and project reports are far
too numerous to list. However, concrete action has been in inverse proportion to the amount of debate and reports. The first decision, to build a dedicated, rail-based mass transit system, was taken in Calcutta in the 1970s. It was to be partly underground and partly on the surface. The initial project envisaged a 30 km system expected to carry about 20% of the transit volume, but what is now operating is a 13 km line used by about 2% of the total volume. Underfunded from the beginning, and delayed at every stage, the metro rail in Calcutta reached an operational stage in 1978. In the case of Madras, one of the suburban train lines is being extended to the city in stages as a form of transit facility. In the case of Bombay, a World Bank-financed transit improvement project is focusing on improving the frequency of the suburban train services, road improvements and improvements in the bus system. Bangalore is still debating its transit options. The first phase of the Delhi metro system covers a length of 62.5 km, of which 12.5 km is underground and the rest on the surface and partly elevated. At an estimated cost of $2.2 billion, Delhi's metro is undoubtedly expensive, but thanks to significant Japanese aid, the political clout that Delhi can exercise, and a competent construction agency, the metro is proceeding on schedule. By 2005 the first phase will be able to cater for nearly 2.2 million passenger trips. Though the start was delayed, the project is opening up public and political perceptions about the positive aspects of mass transit. The current debate is about extending the metro system and also incorporating dedicated busways and complementary facilities.

AIR POLLUTION

Given the rapid rise in vehicular growth of many types and perennial congestion, air quality in Indian cities is seriously compromised. The table below indicates the daily vehicle emission loads for the 5 megacities for the year 1994:
Daily vehicle emission loads in the megacities, 1994 (tonnes daily)

City                 | SPM   | SO2  | NOx    | HC     | CO     | Total
Delhi                | 10.30 | 8.96 | 126.46 | 249.57 | 651.01 | 1046.30
[city name illegible] | 11.59 | 4.03 | 70.82  | 103.21 | 469.92 | 659.57
Calcutta             | 3.25  | 3.65 | 54.69  | 43.88  | 188.24 | 293.71
[city name illegible] | 2.34  | 2.02 | 28.21  | 50.46  | 143.22 | 226.25
Bangalore            | 2.62  | 1.76 | 26.22  | 78.51  | 195.36 | 304.47

[Two of the five city names and one column of values are not directly legible in the source; the SPM figures above are recovered from the row totals.]
More recent estimates place SPM levels at more than 200 micrograms per cubic metre. Delhi's fight against air pollution got a major boost through judicial intervention. At the end of a 3-year-long Public Interest Litigation, the Supreme Court ruled in April 2002 that all the buses operating in Delhi should use CNG (Compressed Natural Gas) as fuel. The implementation of this order did raise several other issues, such as the industry's capacity to produce vehicles of this type and the supply and distribution of CNG. However, the Court orders left little room for manoeuvre, and the switch-over to CNG buses has made steady progress. Earlier, the Court had also banned the use of leaded fuel. Today, unleaded fuel is the preferred option in other major cities as well. The automobile industry, prompted by the need to comply with international standards, has also adopted unleaded fuel. Some questions have been raised about the wisdom of judicial intervention, particularly in technical matters such as fuel choice, but environmental groups have successfully mobilized public opinion in favour of such intervention. Though the nation's capital has had a long history of subsidies, market distortions and regulatory inefficiency, in dealing with air pollution Delhi has emerged as an example. Civil society organizations in Bombay, Calcutta and Bangalore are active in documenting and disseminating the incidence of respiratory diseases due to air pollution. For instance, one such report has indicated that asthma among the children of Bangalore has tripled in 20 years and now affects 30% of the children. Public Interest Litigation continues to be a favoured approach among civil society organizations. In many of these cases, Court verdicts have helped pollution control agencies to overcome jurisdictional conflicts and inter-agency problems, since institutional responsibilities for urban transport are highly fragmented.

ENERGY

Transport is a major user of energy. Though fuel prices have increased steadily, the expanding automobile industry continues to drive the demand for energy. The search for fuel efficiency is pursued by the Indian automobile industry in keeping with international trends. The anticipated gains in fuel efficiency for some categories of Indian vehicles are indicated in the table below.
Mode: Technology                 | Unit     | 1990  | 2000  | 2010  | 2020  | 2030  | Efficiency gain (%)
2-wheeler: 4-stroke petrol       | km/litre | 65.00 | 66.95 | 68.25 | 69.55 | 71.50 | 10
Car: petrol                      | km/litre | 9.43  | 10.37 | 11.32 | 13.20 | 15.47 | 64
Car: diesel                      | km/litre | 8.86  | 9.92  | 10.63 | 12.40 | 13.29 | 50
Car: CNG                         | km/m3    | 14.40 | 15.84 | 16.56 | --    | 17.28 | 20
3-wheeler: 4-stroke petrol       | km/litre | 23.00 | 25.30 | 25.99 | --    | 27.14 | 18
Taxi: gasoline                   | km/litre | 9.43  | 10.37 | 11.32 | 13.20 | 15.47 | 64
Taxi: diesel                     | km/litre | 8.86  | 9.92  | 10.63 | 12.40 | 13.29 | 50
Taxi: CNG                        | km/m3    | 14.40 | 15.84 | 16.56 | --    | 17.28 | 20
Bus: diesel                      | km/litre | 3.40  | 3.30  | 3.57  | 3.64  | 3.74  | 10
Light commercial vehicle: diesel | km/litre | 8.00  | 8.40  | 8.80  | 9.20  | 9.60  | 20
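Read against the 1990-2030 horizon, the "efficiency gain" column implies quite modest year-on-year improvements. A sketch of the arithmetic (ours; it assumes a constant compound rate over the 40-year span, which the source does not state):

```python
def annual_improvement(total_gain_pct: float, years: int = 40) -> float:
    """Average compound annual efficiency improvement implied by a total gain."""
    return ((1.0 + total_gain_pct / 100.0) ** (1.0 / years) - 1.0) * 100.0

for mode, gain in [("2-wheeler petrol", 10), ("car petrol", 64),
                   ("car diesel", 50), ("bus diesel", 10)]:
    print(f"{mode}: {annual_improvement(gain):.2f}% per year")
```

Even the largest gain in the table (64% for petrol cars and taxis) works out to only about 1.2% per year.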
MOBILITY AND GOODS MOVEMENT

Before we consider the public policy aspects of mobility in megacities, it is necessary to note that freight, or the movement of goods, constitutes a very important aspect of mobility. However, in comparison to the data on passenger vehicles, information on goods transport at the city level is hardly available. The cumulative number of motorized goods vehicles registered in India as of 2000 is just 2.6 million, compared to the cumulative number of two-wheelers, which is about 34 million, and of cars, which is about 6 million. Their movements within the cities are significant, but often regarded as confrontational or secondary. One of the usual ways of dealing with congestion in city streets is to ban the entry of goods vehicles during prescribed hours. Freight terminals, break-bulk facilities and parking arrangements for goods vehicles are treated as a part of land use and town planning and are usually assigned to the outskirts of cities. However, the material transported by goods vehicles, of which food and consumables form a significant part, is required in different parts of the city. Integrated city transport policies rarely include the aspect of goods vehicles. This in itself leads to serious problems of mobility and affects, in turn, the proper functioning of the city.

PUBLIC POLICY AND MOBILITY

In considering public policy responses to mobility, the most prominent factor is institutional fragmentation. Roads and their maintenance are usually the responsibility of the public works departments at the state (provincial) level, while some roads are a city government responsibility. The registration of motor vehicles, licensing and vehicle taxation are handled by regulatory authorities or departments of the province. Regulation of traffic and penalties for road violations are invariably handled by the police, which, in Indian cities, is not under city governments. Traffic engineering, including signals, is shared between city governments and the police. The operation of transport vehicles such as buses, trains or tramcars is the responsibility of individual utility companies, whether private or state-owned. The production and sale of automobiles is in the hands of private industry, and there are no limits on the number of vehicles that can be sold. Demand management hardly figures in the terminology of urban transport policies, despite the fact that Singapore, an Asian country, has been a highly successful example of urban transport management. In the matter of development financing, the bias is invariably in favour of private rather than public transport. Private sector money also goes mainly into the production and distribution of private vehicles. For instance, the cumulative number of buses in the country as a whole, as of 2000, is less than 600,000, as compared to a total of 48 million vehicles. Between 1999 and 2000, the cumulative number of buses increased by only 19,000, as compared to half a million cars and 2 million two-wheelers. To aggravate matters, vehicle taxation has been unimaginative and favours private transport vehicles rather than buses. In many states of India, private vehicles pay only a one-time tax, ranging from $25 to $100 at the time of registration, which bears little relation to vehicle cost. On the other hand, buses pay a tax calculated on the basis of passengers, including standing passengers.

International experience in improving urban mobility has brought forth several measures, such as augmenting public transport capacity, reducing travel time, improving inter-modal integration, and priority for public transport. On the regulatory side, several initiatives are also recognised as useful, such as area licensing, limits on parking space together with high parking fees, and park-and-ride facilities. In London the congestion charge, which was introduced despite many doubts, has proved successful. This example clearly shows the need for public awareness and support when undertaking such measures. The present scenario in Indian cities, however, indicates that urban transport as a subject falls through the gaps between several institutions and is indeed 'nobody's baby'. In the meantime, market forces will continue to increase the proliferation of private transport and the preemption of public choices. The situation may remain like this until congestion and pollution bring movement on the megacity roads to a virtual halt.
WORLD FEDERATION OF SCIENTISTS PERMANENT MONITORING PANEL ON INFORMATION SECURITY
HENNING WEGENER
Ambassador of Germany (ret.), Madrid, Spain

I am happy to report on the status of the work of the Permanent Monitoring Panel, which held a full-day meeting on August 19th, and to inform you that the group has now finalized an important piece of its work. It has adopted a document entitled Report and Recommendations. Toward a Universal Order of Cyberspace: Managing Threats from Cybercrime to Cyberwar.

Last year, the PMP developed a first set of recommendations addressed to the international community, which were subsequently distributed to international leaders. In its further work, the Group found that these recommendations retained their full validity and, at this time, did not need to be changed or supplemented, but rather to be argumentatively underpinned. For added clarity and impact, the PMP therefore put together a set of Explanatory Comments to its recommendations. These now form the body of the new paper, together with a comprehensive introduction and a preface by Prof. Zichichi as President of the World Federation of Scientists.

The structure of the document is interesting and may give a cue to other Permanent Monitoring Panels. In order to keep the main paper crisp and concise, the bulk of the papers contributed by the members of the PMP on various topics of information security, truly a compendium of the principal issues, will be available on a special website rather than as an integral part of the printed report. They are thus easily accessible, especially as they are frequently referred to in the Explanatory Comments, without burdening the main document unduly.

As the title indicates, the thrust of the recommendations responds to the need for universal solutions to the problems of information insecurity. The Report not only offers a convincing analysis of the damaging potential of cyber attacks on almost all aspects of human endeavour; its Recommendations make the case for urgent international action in the direction of a universal order of cyberspace for which, at this juncture, only precious little provision has been made. They offer an urgent challenge to international decision-makers. Accordingly, wide distribution to the leaders and representatives of the international system, as well as to governments, is recommended, and will shortly proceed under the authority of the World Federation of Scientists. Benefiting from the presence of a distinguished member of the Italian Council of Ministers, may I also say that the Report foresees transmittal of the document to the Prime Minister of the Republic of Italy, both as the Head of Government of the host country of the International Seminars and as the current President of the European Union. The World Federation of Scientists would greatly appreciate the support of the Government of Italy in favour of its recommendations.

The recommendations are operational, in that they would enable rapid and pragmatic implementation. They are also highly topical. Recent large-scale attacks on the world's information networks (I refer to the Blaster and SoBig virus attacks, intrusions of unprecedented size) have heightened the urgency of action. Managing threats in cyberspace is also an indispensable component of anti-terrorist strategies, the central theme of the work of the WFS this year. All of this warrants the maximum use of public diplomacy to bring the issue of information security to the level of operational politics, and to motivate decision-makers to act swiftly and with determination.

The PMP, set up as a permanent panel, is convinced that its work does not end with this year's report. It has identified a number of issues that have not yet been adequately probed, be it within the WFS or outside. Among these are:
- The delineation between the requirements of transparency vs. privacy, as well as the need to balance civil liberties and privacy protection against security and law enforcement requirements.
- Bridging the Digital Divide.
- The development of adequate methodologies, on the basis of comparative analysis, for risk assessment in the ICT area.
- An analysis of the opportunities and challenges in the development of wireless systems, and the improvement of the security of wireless services.
- A review of corporate governance with a view to improved digital risk management.
- Risk analysis and audit principles.
- Identification of new research areas for further examination, e.g. application-level security, fault-tolerant networks, self-healing networks, autonomous response, etc.
- Strategies for tactical warning: are these feasible? If so, what do they mean in terms of timeliness and response? Are there tools that could be developed to enable warning?
- The role of the scientific community in educating politicians, the public, and the corporate world about cyber threats and vulnerabilities and their potential impacts on "life as we know it".

The PMP indeed intends to go deeper into these issues in its next work phase, while also monitoring, on a systematic basis, the progress of implementation of its current recommendations. I am happy to say that we will be helped in these tasks by two particularly qualified new members, Prof. Britkov and Dr. Casciano. We will lose no time in setting about our work. Suggestions from other members of the International Seminars are more than welcome.
Toward a Universal Order of Cyberspace: Managing Threats from Cybercrime to Cyberwar

Report & Recommendations

World Federation of Scientists Permanent Monitoring Panel on Information Security

Henning Wegener, Chairman
William A. Barletta
Olivia Bosch
Dmitry Chereshkin
Ahmad Kamal
Andrey Krutskikh
Axel H.R. Lehmann
Timothy L. Thomas
Vitali Tsygichko
Jody R. Westby

August 2003

The members of the Permanent Monitoring Panel participated in their private capacity and the Recommendations and Explanatory Comments herein do not necessarily reflect the views of their organizations or governments.
Table of Contents

Abbreviations 387
Preface 389
Introduction 390
  Background 390
  Overview 390
Recommendations 396
Explanatory Comments to Recommendations 398
  Recommendation 1 398
  Recommendation 2 403
  Recommendation 3 405
  Recommendation 4 409
  Recommendation 5 410
  Recommendation 6 411
  Recommendation 7 412
  Recommendation 8 415
  Recommendation 9 415
  Recommendation 10 417
  Recommendation 11 417
  Recommendation 12 419
  Recommendation 13 425
List of PMP Members 428
Abbreviations

AIPAC     American-Israel Public Affairs Committee
APEC      Asia-Pacific Economic Cooperation forum
CERT/CC   Computer Emergency Response Team Coordinating Center at Carnegie Mellon University
CIDA      Canadian International Development Agency
CoE       Council of Europe
COTS      Commercial off-the-shelf
DARPA     Defense Advanced Research Projects Agency (U.S. DoD)
DCS       Distributed Control System
DDoS      Distributed Denial of Service Attack
DoD       Department of Defense (U.S.)
EBRD      European Bank for Reconstruction and Development
ELO       European Liaison Officer (Europol)
ENU       Europol National Unit
EU        European Union
FBI       Federal Bureau of Investigation (U.S.)
FDI       Foreign Direct Investment
FOIA      Freedom of Information Act (U.S.)
G8        Group of Eight
GAO       General Accounting Office (U.S.)
GBDe      Global Business Dialogue on Electronic Commerce
GIIC      Global Information Infrastructure Commission
HDL       Hardware Description Language
IADB      Inter-American Development Bank
IAEA      International Atomic Energy Agency
IATA      International Air Transport Association
ICAO      International Civil Aviation Organization
ICT       Information and Communication Technology
IEEE      Institute of Electrical and Electronics Engineers
IETF      Internet Engineering Task Force
Interpol  International Criminal Police Organization
IPv4      Internet Protocol Version 4
IPv6      Internet Protocol Version 6
ISAC      Information Sharing and Analysis Center
ISO       International Organization for Standardization
ISOC      Internet Society
ISP       Internet Service Provider
IT        Information Technology
ITAA      Information Technology Association of America
ITU       International Telecommunication Union
NATO      North Atlantic Treaty Organization
NCB       National Central Bureau (Interpol)
NGO       Non-Governmental Organization
NIST      National Institute of Standards and Technology (U.S.)
OAS       Organization of American States
OECD      Organization for Economic Cooperation and Development
OSI       Open Source Initiative
PMP       Permanent Monitoring Panel
RCMP      Royal Canadian Mounted Police
ROM       Read-only Memory
SCADA     Supervisory Control and Data Acquisition
SMEs      Small and Medium-Sized Enterprises
TCP/IP    Transmission Control Protocol/Internet Protocol
TECS      Europol Computer System
TIA       Total Information Awareness
U.K.      United Kingdom
UN        United Nations
UNCITRAL  United Nations Commission on International Trade Law
UNCTAD    United Nations Conference on Trade and Development
UNGA      United Nations General Assembly
UNITAR    United Nations Institute for Training and Research
U.S.      United States
USAID     United States Agency for International Development
WANO      World Association of Nuclear Operators
WIPO      World Intellectual Property Organization
WITSA     World Information Technology Services Alliance
WFS       World Federation of Scientists
Y2K       Year 2000
Preface

It is my pleasure to offer to the public, under the title Toward a Universal Order of Cyberspace: Managing Threats from Cybercrime to Cyberwar, the Report and Recommendations of the Permanent Monitoring Panel on Information Security. This work, part of an ongoing effort, has been undertaken in the framework of the International Seminars on Planetary Emergencies, a series of conferences organized since 1981, with broad international participation, by the World Federation of Scientists at the Ettore Majorana International Centre of Scientific Culture. The 2003 Plenary Session of the International Seminar on Planetary Emergencies has given its endorsement and full support to the document.

The World Federation, founded in Erice (Sicily) in 1973, is a free association which has grown to include more than 10,000 scientists drawn from 110 countries. The Federation promotes international collaboration in science and technology between scientists and researchers. One of its principal aims is to mitigate planetary emergencies. A milestone was the holding of a series of International Seminars on Nuclear War, beginning in 1981, which have had a tremendous impact on reducing the danger of a planet-wide nuclear disaster, ultimately contributing to the end of the Cold War.

In the course of its International Seminars on Planetary Emergencies, the World Federation of Scientists has identified the threats emanating from cyberspace as a major indicator of the fragility of modern, integrated societies and of undoubted relevance to the functioning and security of the world system. This Report offers a convincing analysis of the damaging potential of cyber attacks on almost all aspects of human endeavor. Its Recommendations make the case for urgent international action in the direction of a universal order of cyberspace for which, at this juncture, only rudimentary provision has been made. They offer an urgent challenge to international decision-makers, with a special emphasis on the responsibilities of the international scientific community.

The World Federation of Scientists feels that it is now of primary importance to give this Report and Recommendations wide distribution, and to put it without delay before those representatives of the international community who are in particular called upon to make their contribution to the emergence of a universal order of cyberspace. In this spirit, I will transmit the document, on behalf of the World Federation of Scientists, to the United Nations, in particular to the Secretary General; the President of the General Assembly; the President of the Security Council; the President of the Economic and Social Council; the Presidents of the First, Second, and Sixth Main Committees of the General Assembly; the President of the ICT Task Force; the President of the Working Group on Informatics; as well as the President of the forthcoming World Summit on the Information Society to be held in Geneva in December 2003, and the Prime Minister of the Republic of Italy as the Head of the Government of the host country of the International Seminars and current President of the European Union. In so doing, I will strongly underline the need for all concerned to act swiftly and with determination.

Professor Antonino Zichichi
President, World Federation of Scientists
Erice, August 2003
Introduction

BACKGROUND

In the framework of the Seminars on Planetary Emergencies, the Information Security Permanent Monitoring Panel (PMP) was established in 2001 in order to examine the emerging threat to the functioning of information and communication technology (ICT) systems and to make appropriate recommendations.¹ A set of thirteen Recommendations set out in this paper was adopted by the Panel in August 2002 and endorsed by the World Federation of Scientists. In September 2002, prior to the inauguration of the 57th session of the UN General Assembly (UNGA), these Recommendations were submitted to the Secretary General of the UN, the President of the General Assembly, and the Presidents of the relevant Main Committees. In the opinion of the PMP, these Recommendations retain their validity, and the present Explanatory Comments are designed to provide them with new thrust and clarity.

The Recommendations take on special significance in the light of the forthcoming World Summit on the Information Society, to take place in Geneva (Switzerland) from 10 to 12 December 2003, pursuant to UNGA Resolution A/RES/56/183. This world gathering, which is to develop a common vision and understanding of the information society and to adopt an action program for its promotion, is currently being planned by a great number of open-ended inter-governmental preparatory committees that will define its agenda. Even before the conclusion of this preparatory process, it has become clear that confidence and security in ICTs will be among the major topics to be discussed and acted upon. Consequently, the dangers of cyberwar, cyberterrorism, and cybercrime (and thus the concerns reflected in this Report and its Recommendations) are likely to be at the core of the discussions. In this perspective, it is hoped that the Recommendations, and their Explanatory Comments, will be duly considered and found to be useful by the world meeting.
OVERVIEW

The stability of modern society has been heightened by the ubiquitous nature of ICTs, which pervade all aspects of human activity. Indeed, the utilization of ICTs is a recognized prerequisite to improved corporate competitiveness, government efficiency, human development, and the development of knowledge societies and economies. The Internet and the capabilities of broadband networks have integrated business, government, and defense interests and empowered small and medium-sized enterprises (SMEs), enabling them to compete on a global basis. The benefits of ICTs, however, can be undercut by negative uses of these technologies in the form of cyber attacks, viruses and other malware, economic espionage, sabotage of data and systems, exploitation of networks, etc. Individuals and small groups can use ICTs against the interests of nation states. These cyber criminal acts can affect not only individual systems, but can also impact world peace and security and undermine development efforts. The resulting damage can ignite panic, cause a loss of confidence, create uncertainty, and destroy trust in modern society.²
The challenges presented by cybercrime are directly proportional to the size of the problem. Since cybercrime was first identified and its dangerous potential recognized, the problem has shown such rapid growth that it challenges all ICT users (whether individuals, small businesses, multinational corporations, public sector entities, or nation states) and imposes responsibilities for cyber security upon them. The availability of tools to exploit ICT systems has markedly increased, thereby lowering the skill level needed to launch such attacks. Consequently, the number of incidents has risen dramatically.³ The number of computer incidents reported to the Computer Emergency Response Team Coordinating Center (CERT/CC) of the Carnegie Mellon Software Engineering Institute rose from six in 1988, when CERT/CC was formed, to around 82,094 for 2002.⁴

Apart from the consequences for human development, there are three categories of harm flowing from cybercrime and attacks: economic consequences, disruption to critical infrastructures, and threats to national security and the capabilities of military and defense systems and first responders. The economic damage and disruption associated with these incidents, compared to traditional crimes, is alarming. For example, the U.S. Association of Certified Fraud Examiners reported that in 2000 the average sum of money taken in a bank holdup was US$14,000, but the average computer theft was US$2 million.⁵ According to the 2002 Computer Security Institute/Federal Bureau of Investigation annual survey, the financial losses associated with U.S. computer crime rose from US$20,048,000 in 1997 to US$170,827,000 in 2002. Total losses incurred for the 1997-2001 time period were US$1,459,755,245.⁶

Cyber attacks against critical infrastructures also pose a grave problem and threaten the global nature of cyberspace. Critical infrastructures are those systems that are vital to government operations, public safety, and national and economic security. The U.S. government considers thirteen infrastructures to be critical: agriculture, food, water, public health, emergency services, government, the defense industrial base, information and telecommunications, energy, transportation, banking and finance, the chemical industry, and postal and shipping.⁷ The potential for cyber attacks against these infrastructures by other nation states and terrorists has alarmed governments around the globe. Because of the increasing dependency on ICTs, the vulnerability to cyber attacks against these infrastructures is steadily increasing. Since most of these infrastructures are owned and operated by the private sector, business's responsibility for cyber security with respect to these networks is heightened. Combating cybercrime requires significant international cooperation and preventative measures, and this is especially important in deterring acts against critical infrastructure.

Terrorists' use of ICTs to communicate and conspire, and the feasibility of their launching attacks through the information infrastructure, is real. In fall 2001, the Mountain View, California, police department requested FBI assistance in investigating suspicious surveillance of computer systems controlling utilities and government offices in the San Francisco Bay Area. The digital snooping was being done by Middle Eastern and South Asian browsers.
The FBI found “multiple casings of sites” through telecommunications switches in Saudi Arabia, Indonesia, and Pakistan that focused on emergency telephone systems, electrical generation and transmission equipment, water storage and distribution systems, nuclear power plants, and gas facilities across the U.S. Some of the electronic
surveillance focused on the remote control of fire dispatch services and pipeline equipment. Subsequently, information about those devices, including details on how to program them, was found on Al Qaeda computers seized this year. The U.S. government has expressed concern that terrorists are targeting the junctures between physical and virtual infrastructures, such as electrical substations handling hundreds of thousands of volts of power or panels controlling dam floodgates. According to a recent Washington Post report, one Al Qaeda laptop found in Afghanistan had frequented a French website that contained a two-volume online "Sabotage Handbook" on tools of the trade, planning a hit, switch gear and instrumentation, anti-surveillance methods, and advanced attack techniques. An Al Qaeda computer seized in January 2002 in Afghanistan contained models of a dam, complete with structural architecture and engineering software that enabled the simulation of a catastrophic failure of dam controls. Other computers linked to Al Qaeda visited Islamic chat rooms and had access to "cracking" tools used to search networked computers and find and exploit security holes to gain entry or full command. Additionally, evidence obtained from browser logs indicates that Al Qaeda operatives spent time on sites that offer software and programming instructions for the digital switches that run power, water, transport, and communications grids. Al Qaeda prisoners have reportedly admitted to planning to use such tools.

These systems are especially vulnerable because many of the distributed control systems (DCS) and supervisory control and data acquisition (SCADA) systems that control critical infrastructure are connected to the Internet but lack even rudimentary security. In addition, the technical details regarding how to penetrate these systems are widely discussed in technical fora, and experts consider the security flaws to be widely known.8 Since September 11, the U.S. Government has identified 192 groups, organizations, or individuals linked to terrorism.9

Also, it is well known that civilians often take political actions against websites or business systems. In October 2000, the FBI issued an advisory warning that, due to high activity between Palestinian and Israeli sites, U.S. Government and private sector sites could become potential targets. Less than a month later, a group of hackers named Gforce Pakistan defaced more than 20 web sites and threatened to launch an Internet attack against AT&T.10 Other direct acts of cyberterrorism include attacks by pro-Israeli and pro-Palestinian hackers on their opposing side's web sites. Pro-Palestinian hackers attacked several Israeli government sites, including those of the Knesset (Parliament), Bank of Israel, the Prime Minister's Office, and the Israeli Army.11 The hackers also broke into several American-Israel Public Affairs Committee (AIPAC) databases, including one containing credit card numbers of members, then sent e-mails to 3,500 AIPAC members boasting of their intrusion.12

ICTs in the wrong hands present a new threat to world peace and national security through the offensive use of these technologies in the form of cyber warfare and cyber attacks. Nation states have developed more sophisticated capabilities to launch attacks against critical infrastructures and impair the national security of another state and its ability to defend itself. In a recent classified report, the U.S.
Central Intelligence Agency reportedly expressed concern that the Chinese military may be examining methods to attack defense and civilian computer systems in the U.S. and Taiwan.13 One way of conceptualizing the problem is by viewing these e-attacks as information warfare. According to Russian experts:
At present, there is neither an established classification of cyber weapons, nor a clear definition of this term. The key concept for defining the subject area of information security is one of "informational weapons."14

The U.S. Department of Defense (DoD) defines information warfare as "Information operations conducted during the time of crisis or conflict to achieve or promote specific objectives over a specific adversary or adversaries" and defines information operations as "Actions taken to affect adversary information and information systems while defending one's own information and information systems."15 Cyberterrorism has been defined by a leading U.S. expert in testimony before the U.S. Senate to be:

[T]he convergence of terrorism and cyberspace ... generally understood to mean unlawful attacks against computers, networks, and the information stored therein when done to intimidate or coerce a government or its people in furtherance of political or social objectives. Further, to qualify as cyberterrorism, an attack should result in violence against persons or property, or at least cause enough harm to generate fear. Attacks that lead to death or bodily injury, explosions, plane crashes, water contamination, or severe economic loss would be examples.16

Cyberwar is a very real technique of war, and likely to be used more and more as time passes. The U.S., for example, has developed an "e-bomb" that utilizes high-velocity electromagnetic pulses that can permanently disable electrical and communication systems.17 The cyberwar and cybercrime problem will continue to pose a serious threat that will require a coordinated response from industry, intelligence, military and defense, national security officials, and law enforcement. Even more disconcerting is the fact that there is not only the potential, but the likelihood, of a combination of attacks that will impair economic interests, critical infrastructures, and military and defense capabilities. According to a recently published UN report, "Cyber-crime and cyber-terrorism, and possibly cyber-war, will be an inevitable part of our future landscape."18 Jurgen Storbeck, Director of Europol, has described the Internet as "a new sphere of life and a new scene of crime."19

There is an age-old and perpetual race between attack and defense, and the information infrastructure will provide no exception. The legitimate interests of a state in countering cyber attacks and cybercrime, however, must be balanced against other international rights, such as those guaranteeing freedom of expression and human rights. Additionally, there is the concern that government regulation of and interference with Internet usage will impair the well-recognized ability of the Internet to foster democratization across the globe.

The problems posed by cybercrime, cyber warfare, and cyberterrorism are of a universal and transnational character that touch upon all facets of the existence of states, society, business, and individuals. Information security underlies each of these challenges. The Recommendations and the Explanatory Comments that follow present the PMP's conclusions and attempt to clarify the universality of these
issues and the need for all nation states to work together to arrive at common solutions and approaches to the wide array of issues that must be addressed. The Recommendations and Explanatory Comments are supported by a series of papers written under the individual responsibility of the members of the PMP. The collection of these papers is available at http://www.itis-ev.de/infosecur and contains the following contributions:

+ "Consequence Management of Acts of Disruption," by Jody R. Westby and William A. Barletta
+ "Cyber Weapons as a New Means of Combat," by Vitali Tsygichko
+ "Guidelines for National Criminal Codes on Cybercrime," by Henning Wegener
+ "Heightening Public Awareness and Education on Information Security," by Axel H.R. Lehmann
+ "International Information Security Negotiations," by Andrey Krutskikh
+ "International Monitoring Mechanisms for Critical Information Infrastructure Protection," by Olivia Bosch
+ "New Forms of Confrontation - Cyber-Terrorism and Cyber-Crime," by Ahmad Kamal
+ "New Security Challenges in the Information Age," by Dmitry Chereshkin
+ "Public and Private Sector Responsibilities Regarding Information Security," by Jody R. Westby and William A. Barletta
+ "The Computer: Cyber Cop or Cyber Criminal?," by Timothy L. Thomas (with Karen Matthews).

The PMP is conscious of the fact that its work is of a continuous nature, and that a number of issues have not yet been adequately probed, be it within the WFS or outside. Among these are:

+ The delineation between the requirements of transparency versus privacy, as well as the need to balance civil liberties and privacy protection against security and law enforcement requirements.
+ The development of adequate methodologies, on the basis of comparative analysis, for risk assessment in the ICT area.
+ An analysis of the opportunities and challenges in the development of wireless systems and the improvement of the security of wireless technologies.
+ A review of corporate governance with a view to improved digital risk management.
+ Risk analysis and audit principles.
+ Identification of new research areas for further examination; e.g., application level security, fault tolerant networks, self-healing networks, autonomous response, etc.
+ Strategic and tactical warning; e.g., are these feasible? If so, what do they mean in terms of timeliness and response? Are there tools that could be developed to enable warning?
+ The role of the scientific community in educating politicians, the public, and the corporate world about cyber threats/vulnerabilities and their potential impacts on "life as we know it".
+ Bridging the Digital Divide (with its various sub-problems of improving access to hardware, to software, to training, and to material on relevant issues, especially those of interest to developing countries, and the identification of low-cost solutions to this end).

In its next work phase, the PMP intends to delve deeper into these issues, while also monitoring, on a systemic basis, the progress of implementation of its current Recommendations, as well as the remaining lacunae.
Recommendations

Today, information security is an important priority for societies. Because of the global nature of cyberspace and the ever more active use of information and communication technologies (ICTs), this problem is of a universal and transnational character that touches upon all facets of the existence of states, society, and individuals. The vulnerability of global and national information infrastructures gives birth to new challenges to national and international security, business activity, and human rights. The problem of information security will not be resolved by the efforts of just one state or a group of states or on a regional basis. The solution of this problem demands a unified effort of the entire international community. In light of the foregoing, the Panel accepted the following Recommendations:

1. Because of its universal character, the United Nations system should have the leading role in inter-governmental activities for the functioning and protection of cyberspace so that it is not abused or exploited by criminals, terrorists, and states for aggressive purposes. In particular it should: (a) respond to an essential and urgent need for a comprehensive consensus Law of Cyberspace; (b) advance the harmonization of national cybercrime laws through model prescription; and (c) establish procedures for international cooperation and mutual assistance.

2. Working to this end, the UN should give recognition to the work already accomplished by the negotiating parties to the Council of Europe Convention on Cybercrime (CoE Convention). The CoE Convention would draw greater strength if all parties who participated in its negotiation process were to sign the Convention if they have not already done so, and those who have were to accelerate the ratification and transformation processes. Immediately subsequent to the entry into force of the Convention, signatories should take steps to nominate and notify their Authority for the handling of mutual assistance, to participate in the 24/7 network, and to take other steps to promote international cooperation in the defeat of cybercrime as the CoE Convention foresees.

3. Cybercrime, cyberterrorism, and cyber warfare activities that may constitute a breach of international peace and security should be dealt with by the competent organs of the UN system under international law. We recommend that the UN and the international scientific community examine scenarios and criteria and international legal sanctions that may apply.

4. Within the UN framework, we recommend that a special forum undertake the synthesizing of work on cyberspace undertaken within the UN system.

5. In this context, we recommend the UN and other international entities examine the feasibility of establishing an international Information Technology Agency with the indicative mandate to, inter alia:
> Facilitate technology exchanges;
> Review and endorse emerging protocols and codes of conduct;
> Maintain standards and protocols for ultra-high bandwidth technologies;
> Specify the conditions on which access to such ultra-high bandwidth technologies would be granted;
> Promote the establishment of effective inter-governmental structures and public-private interaction;
> Attempt to coordinate international standards setting bodies with a view to promoting interoperability of information security management processes and technologies;
> Facilitate the establishment and coordination of international computer emergency response facilities, taking into account the activities of existing organizations;
> Share cyber-tracking information derived from open sources and share technologies to enhance the security of databases and data sharing.

6. Nationally and transnationally, an educational framework for promoting the awareness of the risks looming in cyberspace should be developed for the public. Specifically, schools and educational institutions should incorporate codes of conduct for ICT activities into their curricula. Civil society, including the private sector, should be involved in this educational process.

7. Due diligence and accountability should be required of chief executive officers and public and private owners to institutionalize security management processes, assess their risks, and protect their information infrastructure assets, data, and personnel. The potential of market forces should be fully utilized to encourage private sector companies to protect their information networks, systems, and data. This process could include information security statements in filings for publicly traded companies, minimum insurance requirements for coverage of cyber incidents, and return on investment analyses.

8. In parallel to the elaboration and harmonization of national criminal codes, there should also be an effort to work toward equivalent civil responsibility laws worldwide. Civil responsibility should also be established for neglect, violation of fiduciary duties, inadequate risk assessment, and harm caused by cyber criminal and cyber terrorist activities.

9. Among the specific and concrete actions that should be considered is the possibility that commercial off-the-shelf (COTS) hardware, firmware, and software should be open source or at least be certified.

10. Information security issues should also be addressed in forthcoming multilateral meetings. Regional organizations should also add to national and international efforts to combat attacks in cyberspace in their respective regional contexts.

11. International law enforcement organizations should assume a stronger role in the international promotion of cybercrime issues. The competences and functions of Interpol and, in the European context, Europol, should be substantially strengthened, including by examining their investigative options.

12. The international science community should more vigorously address the scientific and technological issues that intersect with the legal and policy aspects of information security, including the use of ICTs and their impact on privacy and individual rights.
13. The international scientific community, and in particular the World Federation of Scientists, should assist developing countries and donor organizations to understand better how ICTs can further development in an environment that promotes information security and bridges the Digital Divide.
Explanatory Comments to Recommendations
1. Because of its universal character, the United Nations system should have the leading role in inter-governmental activities for the functioning and protection of cyberspace so that it is not abused or exploited by criminals, terrorists, and states for aggressive purposes. In particular it should: (a) respond to an essential and urgent need for a comprehensive consensus Law of Cyberspace; (b) advance the harmonization of national cybercrime laws through model prescription; and (c) establish procedures for international cooperation and mutual assistance.
A. Why Should the UN Have the Leading Role in Intergovernmental Activities on Cyberspace?
The interconnected global network of 600 million online users20 served by 15 million hosts21 connecting nearly 200 countries presents increasingly daunting security challenges to governments, companies, and citizens. Although the Internet has brought enormous economic and social benefits, it has also ushered in a host of new problems. Negative repercussions22 of the Internet boom, while not outweighing the benefits, include:

+ Computer related fraud, forgery, and theft
+ Violations of intellectual property rights
+ Cyber-mediated physical attacks
+ Sabotage of data
+ Network attacks such as distributed denial of service attacks (DDoS)
+ Malicious code (viruses, worms, and Trojan horses)
+ Web defacements, including politically motivated hacking (hacktivism)23
+ Unauthorized interceptions of communications, intrusion, and espionage
+ Identity theft
+ Spoofing of IP addresses, password cracking, and theft
+ Online sexual exploitation of children and child pornography
+ Computer harassment and cyber-stalking.

The motivation to commit cybercrime is also increasing exponentially. Ever increasing connectivity among Internet users around the globe compounds the risks because there will be a more sophisticated communications infrastructure and an increased pool of bad actors and terrorists who can use technology to conspire, to commit widespread vandalism, fraud, and economic espionage, and to launch attacks on networks and information systems.24
The already pervasive and expanding nature of the Internet and ICTs requires a universal approach to the security of data, systems, and networks. According to a recent UN report on information security:

The wide and pervasive integration of computers and embedded chips into modern society is what makes it vulnerable to cyber-attacks. Computers are now deeply integrated into the management and processing of our daily actions, and embedded chips are so omnipresent today that it is virtually impossible to determine even their actual numbers and locations. This became abundantly clear during the Y2K exercise, when businesses and governments spent billions to make sure computer systems would work when the year 2000 began.25

The profound integration of computers and information technology is obviously the strength of modern life, but it is also its vulnerability. The greater the interconnectedness, reliance, and complexity, the greater the vulnerability and the ease of exploitation. Information and communications systems are not only a potential target of criminals, terrorists, and military planners; they are also portals of physical vulnerability for the vast number of physical assets with controls linked to the Internet or managed by information technology systems. These direct and indirect vulnerabilities are amplified by the relatively small number of nodal exchange points (roughly 100 or so) on the Internet, the existence and location of which are public knowledge.26

Because the ubiquitous nature of the Internet and the built-in vulnerabilities of the global network require a global perspective, the UN is ideally suited to accept a role within its capabilities to lead inter-governmental activities regarding the security of cyberspace. Similarly, only a global consensus can address the updating of the laws of war to include the parameters of wars in cyberspace.27 No multinational organization other than the UN has the membership and capability to address these issues in a meaningful way that will have global impact. Beyond security concerns, the utilization of ICTs in investigatory, tracking, and recording practices and control over communications and Internet usage poses a serious threat to international rights guaranteed under the international law of the UN, such as human rights, freedom of expression, and other civil liberties. According to a senior UN official:

As the only truly universal international organisation that we have today, the United Nations can provide the broadest and most neutral and legitimate platform for bringing together governments and other key stakeholders to undertake this effort. Only this institution can provide the forum for discussion and debate on the complexities of the subject, and coalesce the expertise that exists around the world for a proper drafting of relevant legislation that can fill the existing and growing void in cyber-law.28
B. Why a Law of Cyberspace?
At the outset, one must acknowledge that the call for a body of law regulating cyberspace is not uniformly accepted in the legal community. The usual arguments are that (1) there is no consensus concerning the many possible designs or architectures that
may affect the functionality we now associate with cyberspace; (2) very few bodies of law are defined by their characteristic technologies; and (3) the best legal doctrine re-examines, expands, or applies existing doctrines to a new arena. Whatever the validity of such comments concerning activities within single nation states, the capability of the Internet to cut across many national jurisdictions at lightning speed argues that we look anew.29 This Recommendation accordingly urges nations to undertake a comprehensive re-examination of the many relevant, sometimes conflicting legal doctrines, practices, and procedures in order to produce a comprehensive, universal, and uniform legal framework for handling the issues colloquially called cyber law.

The Privacy & Computer Crime Committee, Section of Science & Technology Law of the American Bar Association, has recognized the need for international action to create a uniform body of law:

A major component of information and infrastructure security is a nation's ability to deter, detect, investigate, and prosecute cyber criminal activities. Industrialized nations and multinational organizations have taken significant steps toward combating cybercrime. The glaring gaps in work to date are (1) inadequate international coordination and (2) woefully deficient legal frameworks and organizational capacity in developing countries necessary to combat cybercrime.30

An initial framework that could serve as an excellent starting point for the development of a Model Law on Cybercrime has been developed in the Council of Europe. The CoE Convention on Cybercrime of 2001 (CoE Convention) has been signed by 36 countries.31 Although civil libertarians and privacy advocates continue to express concern that the CoE Convention undermines individual privacy and is inconsistent with provisions in U.S. law, it has been endorsed by the Group of Eight (G8) as a model to be followed by other countries.32 Other important work in this area has been done by the G8, the Organization for Economic Cooperation and Development (OECD), the Asia-Pacific Economic Cooperation (APEC), the European Union (EU), and the Organization of American States (OAS).

Furthermore, with the public revelation of President Bush's National Security Presidential Directive 16, ordering the U.S. government to develop cyber warfare guidelines and rules under which the U.S. could penetrate and/or disrupt foreign computer systems,33 cyber warfare has come out of the closet. As with other forms of warfare, there should be internationally accepted limitations on the form of conflict. Certainly, a meaningful codification of such activities should take place under the aegis of the international body with the widest membership, the United Nations.

The PMP concludes that, on a global basis, current national and international legal frameworks are insufficient, and inconsistent across national jurisdictions, to address the scope and complexity of cybercrime, cyberterrorism, and cyber warfare. While efforts to combat cybercrime and cyberterrorism have been valiant and even successful in many areas, more is possible. We recommend that a determined effort be made to draw upon the work performed to date in order to draft and adopt a comprehensive Model Law on Cybercrime and an agreement on related procedural, administrative, and cooperative considerations.
The UN has already performed excellent work in the development of model laws for electronic transactions and electronic signatures34 and its institutional roots are based on established international rules for conflict. Such a Model Law would have to address numerous
issues, ranging from technical and definitional (e.g., what is cyberspace) to substantive (e.g., legal provisions, jurisdictional issues, and standards of evidence) to procedural and administrative (e.g., international cooperation mechanisms). It would also have to balance the competing interests of sovereignty, national security, civil liberties, human rights, and freedom of expression. The UN should give separate consideration to determining the rules under which nation states may engage in cyber warfare and respond to cyberterrorism. The World Summit on the Information Society may also be a forum for discussion of this subject.
C. How Comprehensive a Consensus is Needed?
Some argue that the CoE Convention on Cybercrime provides an adequate consensus on which an international legal framework can be developed. A legitimate counterpoint, however, is that many more countries would have to sign and ratify the Convention and abide by its terms in order for it to effectively deter cybercrime, significantly advance international cooperation on these issues, or lead to a harmonized global framework. Out of about 200 countries, only 36 have signed the CoE Convention. Many of the countries that have not signed the CoE Convention either have no cybercrime laws at all, or have laws so inadequate that criminals can essentially act with impunity. Since communications utilizing packet-switched technologies often travel through many countries before reaching their destination (even on local-to-local communications), the CoE Convention does not provide a comprehensive enough consensus in this area. However, despite some shortcomings, controversial points, and lacunae, the CoE Convention "no doubt constitutes a major drafting achievement by a representative cross section of the international community, and there is no private or public initiative in sight that could match it in legal status, completeness, quality and endorsement received."35 This Convention deserves to be considered as a starting point for working toward a broader, universal agreement and Model Law.
D. What are Some Areas of Conflict/Inconsistency?
Multiple cases have arisen where Internet activities considered to be legitimate in one country violate the laws of another.36 Additionally, one country may not have the procedural laws enabling it to perform the requested assistance, or its law enforcement may lack the expertise to assist in the search and seizure of electronic evidence.37 Examples of areas of conflict include jurisdictional issues, extradition disputes, extraterritorial seizures, violations of content laws, and inconsistent hacking laws. These inconsistencies alone underscore the important role the UN could play in acting as coordinator on these issues. Gelbstein and Kamal note that:

Civil liberties groups have also expressed concern that the [CoE] convention undermines individual rights to privacy and extends the surveillance powers of the signatory governments. Critics in the United States indicate that the provisions of the convention are incompatible with current U.S. law.38
For example, by defining the sending of unsolicited e-mails as a criminal activity, the Convention is claimed "to criminalize behavior which until now has been seen as lawful civil disobedience."39
E. How Might Harmonization of Cybercrime Laws Proceed Through Model Prescription?
The UN Model Laws on Electronic Commerce and Electronic Signatures are considered to be the global "standard" for legislation in these areas. They have been looked to and followed by industrialized and developing countries around the globe. UN action that would provide a global model law and an accompanying explanatory memorandum that nation states could use as a guide, along with an international agreement on procedural, administrative, and cooperative aspects, would make the global harmonization of cybercrime laws an achievable goal.
F. What are Examples of Procedures for International Cooperation and Mutual Assistance?
Certainly, one of the oldest and best known institutions for international cooperation and mutual assistance is Interpol. Founded in 1923, it has 178 member countries and maintains close working relationships with numerous intergovernmental bodies. The G8, Europol, OECD, UN, APEC, and OAS have all established mechanisms or launched initiatives to promote international cooperation and mutual assistance in the cyberspace arena.40

One of the best known practical examples of global-scale coordinated international cooperation and mutual assistance was seen in efforts to deal with the Y2K problem:

The Year 2000 (Y2K) experience gave rise to new ways in which governments and critical infrastructure sectors world-wide shared information to monitor incidents as they arose.... The international governmental and industry organisations notable for establishing mechanisms for global monitoring of Y2K incidents affecting critical infrastructure sectors included the International Civil Aviation Organization (ICAO) and the International Air Transport Association (IATA), and the International Atomic Energy Agency (IAEA) and the World Association of Nuclear Operators (WANO).41

At the technical level, there are numerous opportunities for information sharing42 in both the public and private sectors. Information sharing can be facilitated by public sector initiatives that (a) establish centers for sharing information on an anonymous basis or serve as an intermediary where the direct sharing of information among industry is difficult, (b) create a central alert point for technical information and assistance regarding security risks and fixes, and (c) organize a public/private group comprised of all stakeholders (industry, government, academia, NGOs) to begin a dialogue on ICT security risks and develop ways to work together.43
In 1997, information sharing and analysis centers (ISACs) were established in the U.S. to facilitate information exchange among critical infrastructure sectors. ISAC members usually "share information in a way that preserves their anonymity while providing an overview of cyber incidents within their sector not otherwise obtained individually".44 Indeed, the Commission of the European Communities notes that "urgent measures are needed to produce a statistical tool for use by all Member States so that computer related crime within the European Union can be measured both quantitatively and qualitatively".45 This is important; however, there also needs to be a common methodological approach to examining cybercrime, lest the quantitative and qualitative results be skewed.

Information sharing efforts, however, are hindered by national laws that deter the private sector from sharing security incident information with public sector entities. Laws such as the U.S. Freedom of Information Act and other similar national "access to information" laws cause concern within the private sector that shared confidential or proprietary information may be disclosed. Antitrust laws also deter collaborative information sharing activities. Additional concerns are raised by the sharing of security incident information with foreign governments. U.S. Sentencing Guidelines create an additional risk: corporations worry that by sharing security breach information and seeking the assistance of law enforcement, an investigation could reveal wrongdoing by corporate insiders which could "snap back" on the company and expose it to harsh penalties under the Guidelines. Thus, there is a need to develop a consistent international framework that encourages public-private information sharing by mitigating the risks that flow from these existing laws.
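To make the Commission's point about a common statistical tool concrete, the sketch below shows, in Python, what a shared cross-border incident-reporting schema might look like, so that figures collected in different states are counted on the same basis. Every field name and category in it is a hypothetical illustration invented for this example, not an existing standard or the Commission's proposal.

    # A minimal sketch of a hypothetical common incident-reporting schema.
    # All field names and categories are illustrative assumptions.
    from dataclasses import dataclass
    from datetime import date

    INCIDENT_CATEGORIES = {"fraud", "intrusion", "dos", "malware", "defacement"}

    @dataclass
    class IncidentReport:
        reporting_country: str      # ISO 3166 code of the reporting state
        category: str               # one of INCIDENT_CATEGORIES
        date_observed: date
        sector: str                 # e.g. "banking", "energy", "government"
        estimated_loss_eur: float   # losses converted to a common currency
        reporter_anonymous: bool    # preserves ISAC-style anonymity

        def __post_init__(self):
            # Rejecting free-form categories is what keeps statistics comparable.
            if self.category not in INCIDENT_CATEGORIES:
                raise ValueError(f"unknown category: {self.category}")

    # Two reports from different countries are now directly comparable.
    r1 = IncidentReport("DE", "intrusion", date(2002, 11, 5), "banking", 250_000.0, True)
    r2 = IncidentReport("IT", "intrusion", date(2002, 12, 1), "banking", 90_000.0, True)
    print(r1.category == r2.category)   # True: measured on the same basis

The design point is simply that a fixed vocabulary and common units (here, a single currency) must be agreed before aggregation, which is precisely the methodological harmonization the text calls for.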
2. Working to this end, the UN should give recognition to the work already accomplished by the negotiating parties to the Council of Europe Convention on Cybercrime (CoE Convention). The CoE Convention would draw greater strength if all parties who participated in its negotiation process were to sign the Convention if they have not already done so, and those who have were to accelerate the ratification and transformation processes. Immediately subsequent to the entry into force of the Convention, signatories should take steps to nominate and notify their Authority for the handling of mutual assistance, to participate in the 24/7 network, and to take other steps to promote international cooperation in the defeat of cybercrime as the CoE Convention foresees.
Cybercrime defies national boundaries. Any effective strategy to prevent and combat the new types of cyber offenses, and the new modalities of committing traditional offenses through the technologies of cyberspace, must therefore lead to transnational responses in criminal law and law enforcement. There must be no national loopholes; the present situation, in which there are considerable differences in legal coverage, standards, and levels of protection, is highly unsatisfactory. The case for a binding, universal international code of broad scope is compelling.46 At the same time, shared prescriptions of this nature will be unsuitable for containing and penalizing all cyber attacks. Attacks by nation states and international terrorist groups on critical societal and economic infrastructures and the defense
establishment of other countries, giving rise to highly relevant threat scenarios, require different international responses, as discussed under Recommendation 3.

A number of private fora and international organizations have attempted to address the substantive, procedural, and jurisdictional challenges posed by the transnational nature of cybercrime. The most extensive effort is the Council of Europe's Convention on Cybercrime (CoE Convention), which was opened for signature on Nov. 23, 2001 and has, up to now, been signed by 36 countries, of which four signatories (U.S., Canada, Japan, and South Africa) are "partner" countries but are not CoE members. The Convention covers substantive penal law as well as criminal procedural law and international cooperation in law enforcement, underlining the essential linkage between the three; indeed, the time-critical nature of tracking cybercrime, securing electronic evidence, and facilitating pursuit requires such linkage.

All attempts at creating a consistent and universal penal framework for dealing with the cyber challenge have to face a number of inherent problems: (1) striking a balance between the privacy of communications in cyberspace and the freedom of expression and access to information on the one hand, and the requirements of national security and speedy law enforcement on the other; (2) the retarding influence that will be exercised by the need to ratify a treaty containing civil and criminal provisions and administrative and procedural requirements; (3) the need to transform treaty obligations into applicable law; (4) the need to ensure essential equivalence of these laws in the face of very general directive language in the international texts; (5) the time requirements for setting up functioning transnational cooperation mechanisms; and (6) the complex problem of including content-related cyber offenses. These are discussed in the accompanying papers.47

These difficulties notwithstanding, the CoE Convention offers great promise for moving towards a universal penal system in this field. Given the present composition of affiliated member states, it avoids the pitfall of offering a purely European focus and lends itself to a broader international audience. The ultimate objective would be to incorporate, textually, its provisions into a future Model Law on Cyberspace, which is the central issue around which these Recommendations revolve.

In order to enhance the credibility and effectiveness of the CoE Convention, Recommendation 2 appeals, as a first and important step, to the parties that participated in the negotiation process to ratify and implement the Convention and to establish the necessary cooperation mechanisms for the broad geographical area which they represent. Further steps to extend the number of signatory nation states to the CoE Convention would be welcome. Indeed, it would be highly desirable that a campaign to promote universal adherence get underway, at short notice, at the level of the United Nations, in the preparatory phase for the creation of a universal regulation of cyberspace. It would be important that response times for such an international appeal be kept as short as feasible, and that each signatory, in launching the process for transforming treaty obligations into national law, be mindful of the time-critical nature of defeating cybercrime and keeping pace with technology.
If the CoE Convention can manage to create a critical momentum for the establishment of a universal legal framework and administrative organization regarding cyberspace, this momentum must not be lost.
In assessing the importance of the CoE Convention, governments should also be aware of an important complementary effort by the European Union. The EU Ministers of Justice adopted the Proposal for a Council Framework Decision on attacks against information systems on March 4, 2003, and EU members will now begin harmonizing their national laws with this Decision.48 The Council Framework Decision contains definitions, model articles for the criminalization of major cyber attacks, and rules for cooperation among EU countries, some of which flesh out provisions of the CoE text in more detail while others are more concise, but which are, by the Framework's own professed intention, compatible with the CoE Convention. The particular level of legal and administrative cooperation that already exists among the Member States of the EU as a common legal and judicial space, but is lacking elsewhere, means that the Framework is not suitable as a model code to the same extent as the CoE Convention. The latter preserves its quality as the overriding and most complete legal instrument, particularly suited for endorsement by the present Recommendation.
3. Cybercrime, cyberterrorism, and cyber warfare activities that may constitute a breach of international peace and security should be dealt with by the competent organs of the UN system under international law. We recommend that the UN and the international scientific community examine scenarios and criteria and international legal sanctions that may apply.
Cyber activities that constitute deliberate hostile actions by nation states or non-state actors operating transnationally may threaten international peace and security, yet elude penal sanctions under current legal frameworks or a future Model Law on Cyberspace. One consideration is that, under certain circumstances, the international doctrine of sovereign immunity protects nation states against legal actions. This protection could conceivably extend to offensive cyber actions taken by nation states. Other concerns relate to (1) the lack of international cooperation on a global scale, and (2) technical considerations regarding the inability to effectively track and trace Internet communications.

The response to any scenario, whether a cyber criminal activity, an act of cyberterrorism, or an intended act of cyber warfare by a nation state, requires the ability to effectively track and trace cyber attacks. A recent report from CERT/CC at Carnegie Mellon University notes:

The capability of a nation (or a cooperating group of nations) to track and trace the source of any attacks on its infrastructures or its citizens is central to the deterrence of such attacks and hence to a nation's long-term survival and prosperity. An acknowledged ability to track and trace both domestic and international attackers can preempt future attacks through fear of reprisals such as criminal prosecution, military action, economic sanctions, and civil lawsuits.... The anonymity enjoyed by today's cyber-attackers poses a grave threat to the global information society, the progress of an information-based international economy, and the advancement of global collaboration and cooperation in all areas of human endeavor.49
Technical difficulties must be addressed by international standards setting bodies. The TCP/IP protocol,50 which is the current standard protocol for network communications, seriously limits the ability to track and trace cyber attacks.51 At present, "the Internet has no standard provisions for tracking or tracing the behavior of its users."52 Because the Internet protocols were designed for a trustworthy community of researchers, it is quite easy for users to hide their tracks, making it difficult to trace the communications path. For example, because there typically is no capability for cryptographic authentication of the information in IP packets, the information in a packet can be modified and the source address can be forged. "Packet laundering" involves compromising intermediate hosts along a communication path and hopping from host to host such that traceback attempts can be effectively thwarted.53 These vulnerabilities could facilitate, or disguise, state-sponsored cyber activities or intentionally redirect a cyber criminal act to make it appear that it came from a nation state. As noted by CERT/CC's Howard Lipson:

It is clear that tracking and tracing attackers across a borderless cyber-world, and holding them accountable, requires multilateral actions that transcend jurisdictions and national boundaries. Tracking and tracing requires cooperation encompassing the legal, political, technical, and economic realms.... One of the most significant policy implications of the technical approaches to tracking and tracing ... is the need for intense international cooperation at a deeply technical level. This cooperation must go well beyond simple agreements in principle to share tracking data.54

Present legal regimes are ineffective in deterring highly relevant threat scenarios that may violate international peace and security. Actions that are prohibited by nation states or considered terrorist or rogue acts against other countries require further deliberation by the United Nations. Internationally agreed standards of conduct are necessary if the Internet is to remain a backbone of economies and a primary means of global communication. In a thorough analysis of the uncharted waters in the area of cyberspace attacks, three renowned scholars in the field argue that:

In particular, the status of information operations as "force" or "armed attack" is undetermined, an uncertainty which complicates diplomatic and military decision-making. In terms of the UN Charter, it is clear that a range of information attacks would constitute uses of force, and a comparable range of countermeasures would constitute legitimate self-defence.... Beyond these preliminary conclusions, there is far more work to be done on both international technical and legal fronts. Nations that choose to employ information operations, or that expect to be targeted by them, should facilitate tracking, attribution and transnational enforcement through multilateral treaties and, more broadly, by clarifying international customary law regarding the use of force and self-defence in the context of the UN Charter and the laws of armed conflict.55
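The protocol-level weakness behind source-address forgery can be made concrete with a few lines of code. The sketch below (Python, standard library only, using purely illustrative addresses from the documentation ranges) builds a minimal IPv4 header and shows that its only integrity field, a simple ones'-complement checksum as defined in RFC 791, can be recomputed by anyone after substituting an arbitrary source address; nothing in the header cryptographically binds a packet to its true origin. It is an explanatory sketch of the header format, not a description of any particular attack tool.

    # Minimal IPv4 header sketch: the checksum protects against transmission
    # errors, not against lying about the source. All addresses illustrative.
    import struct, socket

    def ipv4_checksum(header: bytes) -> int:
        # Ones'-complement sum of 16-bit words (RFC 791).
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) + header[i + 1]
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    def build_header(src: str, dst: str) -> bytes:
        # Version/IHL, TOS, total length, ID, flags/fragment offset,
        # TTL, protocol (17 = UDP), zero checksum placeholder, src, dst.
        fields = struct.pack("!BBHHHBBH4s4s",
                             0x45, 0, 20, 0x1234, 0, 64, 17, 0,
                             socket.inet_aton(src), socket.inet_aton(dst))
        checksum = ipv4_checksum(fields)
        return fields[:10] + struct.pack("!H", checksum) + fields[12:]

    # The "forged" header claims a different origin yet verifies exactly
    # like the honest one: a checksum over either valid header folds to zero.
    honest = build_header("192.0.2.10", "198.51.100.1")
    forged = build_header("203.0.113.7", "198.51.100.1")
    assert ipv4_checksum(honest) == 0 and ipv4_checksum(forged) == 0
    print("both headers pass the only integrity check IPv4 provides")

The same property underlies the traceback difficulties discussed above: a receiver that sees only a well-formed header has no protocol-level evidence of where the packet actually originated.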
Several scenarios support these scholars' conclusion, ranging from "cyber activists" to information and cyber warfare. On the less serious end of the spectrum, there is the April 1998 distributed denial of service attack launched against the U.S. Department of Defense by "cyber activists" who caused some Department computers to crash.56 At the other end of the spectrum are direct attacks against the critical infrastructures of one nation state by another. One of the first examples of this was seen in 1991 in Operation Desert Storm, when the U.S. disabled Iraq's communications network. Other examples of cyber warfare could include:

+ "Means for highly accurate spotting of electromagnetic equipment and its destruction by way of rapid identification of separate components of control, recognition, guidance and fire information systems.
+ Means for hitting components of electronic equipment and power supply thereof with a view to putting individual components of electronic systems out of action for the short term or irreversibly.
+ Means for affecting data transmission processes with a view to terminating or disorganizing operations of data exchange subsystems, by affecting signal propagation environments and functioning algorithms.
+ Propaganda and disinformation facilities for modifying control system data, creating a virtual picture of the situation different from the real one, changing human value systems, damaging morale of the adversary's population."57
+ Packet inspection and modification or rerouting through platform technologies at country gateways.58

In between lie the acts of terrorists or rogue actors that can be equally destructive, as noted in the Introduction to this Report.59

Increasingly, nation states, either individually or collectively, are acting to protect their own networks. The range of actions that are possible is considerable, and some can have broad impact on the global network and communications capabilities. It is becoming increasingly clear that companies and countries alike must shift from a reactive to an active mode in dealing with cyber attacks. As noted by two World Federation of Scientists experts, "governments (and companies) need the ability to block distributed denial of service attacks, viruses and malicious worms, and protect supercritical and critical infrastructure at the core network level before they inflict their damage along backbone and customer links."60 An international discussion and understanding regarding what types of proactive actions are acceptable or allowable is necessary to ensure one nation's protective actions do not unduly hinder the communications capabilities of other nations.

The international legal framework is especially murky in the area of cyber attacks and information warfare. The UN Charter was not drafted with the information age in mind, and its definitions lack clear meaning in the cyber context. The Charter, for example, forbids "acts of aggression" and limits the "threat or use of force" in peacetime. Article 41 grants the Security Council the power to enforce these Charter restrictions through the "complete or partial interruption of economic relations and of rail, sea, air, postal, telegraphic, radio, and other means of communication, and the severance of diplomatic relations." Article 42 allows for action by "air, sea or land forces" as necessary to maintain or restore peace. According to one analysis, "Factors that may influence
whether something is an act of force include expected lethality, destructiveness and invasiveness".61 Thus, Article 41 may be interpreted as allowing some interruption of communications if it is done in a manner that is not lethal, destructive, or invasive, but what does that mean in the cyber sense? Certainly, some acts against communication systems could be considered quite destructive and/or invasive, such as the manipulation of dam controls or power grids.62 One of the preeminent authors in this area, Walter Sharp, argues that manipulations or attacks that cause an economic crisis could be deemed a "use of force".63 And while one action, such as packet sniffing, rerouting, or content modification, may not be lethal or destructive, a reasonable argument can be made that it would be invasive.

Responses to attacks on information systems could conceivably be allowed under Article 51 of the UN Charter, which allows states to take actions in self-defense but requires them to report such actions.64 Individual responses by states could be either overt or covert, making the reporting requirement problematic in instances of covert actions. Indeed, what types of responses might be acceptable under Article 51 is vague. Moreover, nations could engage in individual or collective cyber self-defense through NATO or other multinational alliances.65

The laws of armed conflict must also be factored into any discussion regarding the cyber activities of nation states. In times of war, civilian assets that support the military (such as communication systems) may be attacked in order to obtain the submission of the enemy, provided that the attack is limited to military objectives, civilian losses are proportional to the military advantage to be gained, and unnecessary suffering is avoided. Possible pre-emptive actions must also be considered, and under what circumstances these might be allowed.66

Elaborating upon this nutshell identification of problems, Andrey Krutskikh, reflecting a general line of thinking among Russian experts, has made a number of suggestions for further international law work that would aim at including cyber attacks more broadly in extant international law. They can be summarized as follows:

In line with the concept67 of defining techniques of interfering with information security as "information weapons", and despite the present uncertainty as to their scope, it is suggested that new, extended criteria for the definition of weapons and armed aggression should be sought, giving emphasis to the objectives of the "aggressor", such as seeking military superiority.68 Cyber attacks on other states could then be considered acts of armed aggression under the UN Charter, and, applying the principles of proportionality and necessity, thresholds for responsive actions in self-defense could be defined, taking into account the direct as well as the indirect damage cyber attacks can cause.

Further along these lines, the author proposes to establish a list of key information systems of critical relevance for national security which, as a "zone protected by international law," would benefit from protective mechanisms, such as legitimate international emergency responses, beyond the normal rules and practices on reprisals and responses. In the list, a distinction should be made between civilian and transnational facilities, and military systems which may be subject to legitimate attacks.
On the argument that "cyber weapons" are not currently subject to international treaties pertaining to arms control, Dr. Krutskikh advances several suggestions on a negotiated adaptation of extant treaty law designed to curb the proliferation of such weapons and to provide a clear legal framework relating to the aggressive use of cyber operations.69 In an even broader sweep, Dr. Krutskikh, following from earlier official projects within the UN and bilateral diplomacy, develops the idea of a comprehensive international legal regime banning the development, production, and use of the "most hazardous types of cyber weapons",70 for which the key ideas are spelled out in catalogue form.71 Part of this broad approach is the establishment of an "early warning system". The author also advocates a sanctuary concept under which "global information systems" would be defined and protected as demilitarized zones.72

Clearly, the types of cyber activities nation states may engage in, either defensively or offensively, deserve deeper discussion in a multinational forum. The PMP supports the following conclusion:

As electronic information networks expand, and military and industrial infrastructures become more dependent on them, cyberattacks are bound to increase in frequency and magnitude. Interpretations of the UN Charter and of the laws of armed conflict will have to evolve accordingly in order to accommodate the novel definitions of the use of force that such attacks imply.... In terms of the laws of armed conflict, the potentially dangerous consequences of an unnecessary response, a disproportional response or a mistakenly targeted response argue for keeping a human being in the decision loop. Beyond these preliminary conclusions, there is far more work to be done on both the international, technical and legal fronts. Nations that choose to employ information operations, or that expect to be targeted by them, should facilitate tracking, attribution, and transnational enforcement through multilateral treaties and, more broadly, by clarifying international customary law regarding the use of force and self-defence in the context of the UN Charter and the laws of armed conflict.73

Operationally, scientific studies and scenario generation exercises should be undertaken in the international legal and technical communities, involving the General Assembly and its First and Sixth Committees. The International Law Commission could be tasked with developing an appropriate legal framework defining legitimate cyber actions by nation states.
4. Within the UN framework, we recommend that a special forum undertake the synthesizing of work on cyberspace undertaken within the UN system.
Ordering cyberspace under the perspective of universality requires comprehensive involvement by the United Nations. In many ways, this challenge has already been recognized and is increasingly met by various UN offices and bodies as well as by
members of the wider UN family. There are also global initiatives undertaken by the private sector that purport to work towards similar ends and could usefully be included in an over-all effort. These manifold, widely dispersed efforts are, however, difficult to follow and to assess in their overall impact. A central focal point within the UN itself could perform a coordinating, evaluating, and synthesizing function. Without prejudice to the mandate or autonomous policy decisions of other UN branches or outside organizations, such a forum could catalogue and assess the work done elsewhere, point to inconsistencies and duplication, identify gaps and new research requirements, and stimulate coordinated approaches. The problem is far wider than just a question of the Digital Divide.

The list of UN or UN-related actors in the field is already long. Apart from a number of resolutions adopted by the General Assembly, the UN ICT Task Force, the UN Institute for Training and Research (UNITAR), the UN Center for Social Development and Humanitarian Affairs, the UN Commission on International Trade Law (UNCITRAL), the UN Conference on Trade and Development (UNCTAD), and the UN Office for Drug Control and Crime Prevention have provided inputs in their particular fields of action. Other UN entities such as the World Intellectual Property Organization (WIPO), the International Telecommunication Union (ITU), and the International Atomic Energy Agency (IAEA) have made contributions, as have the International Organization for Standardization (ISO), the International Civil Aviation Organization (ICAO), the International Air Transport Association (IATA), and others. From the private sector, activities with a global perspective are undertaken, among others, by the International Chamber of Commerce (ICC), the Global Business Dialogue on Electronic Commerce (GBDe), the World Information Technology and Services Alliance (WITSA), the Global Internet Project, the Global Information Infrastructure Commission (GIIC), and the Information Technology Association of America (ITAA). The special UN forum recommended here should, of course, also take cognizance of the ongoing work undertaken by the OECD (especially its recently updated Guidelines for the Security of Information Systems and Networks), the G8, the European Community, and the Council of Europe.

Given the broad scope of cyberspace related problems, the forum would be best established as a special entity within the UN Secretariat or as a body reporting to the UN General Assembly. Mechanisms should be developed to incorporate all stakeholders in the work of such a body.
5.
In this context, we recommend the UN and other international entities examine the feasibility of establishing an international Information Technology Agency with the indicative mandate to, inter alia:
• Facilitate technology exchanges;
• Review and endorse emerging protocols and codes of conduct;
• Maintain standards and protocols for ultra-high bandwidth technologies;
• Specify the conditions on which access to such ultra-high bandwidth technologies would be granted;
• Promote the establishment of effective inter-governmental structures and public-private interaction;
• Attempt to coordinate international standards setting bodies with a view to promoting interoperability of information security management processes and technologies;
• Facilitate the establishment and coordination of international computer emergency response facilities, taking into account the activities of existing organizations;
• Share cyber-tracking information derived from open sources and share technologies to enhance the security of databases and data sharing.

The above list of possible attributions for the intended Agency appears to be self-explanatory and sufficient to set in motion the process of examining its feasibility. The Agency is perhaps best established within the UN system, but an institutional format on the basis of public-private partnership is not to be excluded. The PMP is mindful of current UN budget constraints and the general reluctance of governments to embark on new institutional solutions. However, given the amount of work already performed in various bodies, UN and others, in the IT field, the organization chart of the Agency could be small, and some reshuffling of personnel might be possible. The point is to create a central entity that can serve as a clearinghouse and coordination center for the various initiatives and work already undertaken or developed in this area. The initiative for a feasibility study might usefully be taken by the UN Secretary-General.

6.
Nationally and transnationally, an educational framework for promoting the awareness of the risks looming in cyberspace should be developed for the public. Specifically, schools and educational institutions should incorporate codes of conduct for ICT activities into their curricula. Civil society, including the private sector, should be involved in this educational process.
The rapid innovation of ICTs and the development of a wide variety of ICT products and applications have resulted in a constantly growing and heterogeneous community of ICT users of all ages, skills, and intellectual and cultural backgrounds. ICT products are becoming pervasive and ubiquitous resources of everyday life: virtually all individuals use them as part of their private, professional, and public lives, and we are becoming as accustomed to them as to other natural or technical resources. In this situation, all individuals have to become aware not only of the advantages of ICT applications, but also of their consequences and sometimes hidden risks, especially concerning safety and security. Making people aware of the risks associated with ICTs requires, first, the development of an educational framework, and of easily accessible information systems and sources, which provide individuals with information and knowledge about data and information security risks according to their individual background, skills, and needs:
• All individuals should at least have a basic understanding of the key information security properties of an ICT system, such as confidentiality, data integrity, user authentication, and access control mechanisms.
• All ICT users also have to understand that besides risks to their privacy, other risks may exist for their local environment, for a larger community, or even for the public.
• Adequate information about technical attack techniques (e.g. viruses, trojan horses) and non-technical attack possibilities (e.g. social engineering)74 should be widely available to the public.
• An educational program should include some general procedures for intrusion prevention, intrusion detection, damage analysis, and recovery mechanisms.
• All educational curricula must incorporate codes of ethical conduct for ICT activities; they should begin at the primary school level, extend through secondary and tertiary levels, and be incorporated into training programs in the workplace, community centers, and other venues for individual citizens.

The ISO Code of Practice for information security defines the 10 guiding principles which should be considered and presented to all ICT users according to their individual needs, skills, and background.75 Along the same lines, the UN publication Information Insecurity: A Survival Guide to the Uncharted Territories of Cyber-threats and Cyber-security presents a detailed description of the information security problems we have to face, and it includes all relevant information for prevention and action. Together with the cited sources and examples, it forms an excellent framework and source for assembling educational programs as discussed above. Numerous other organizations have compiled valuable materials in this area.76 To provide all kinds of users with the required input on information security issues, educational curricula, as well as decision support and advisory information, this content should be distributed not only through printed articles and books, but also through new media, ICT products, and the Internet. For example, educational curricula can be delivered through teleteaching and intelligent tutoring systems, enabling students to learn about the subject independent of time and location. Another technical approach could offer information security expertise via information bases, or knowledge bases, through an expert system interface. The expert system interface could be adapted to a user's requirements or skills, thus enabling goal-directed access to information and expertise.77
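To make the curricular material sketched above concrete, the following short Python example (a hypothetical classroom illustration; the key and messages are invented) demonstrates two of the security properties listed earlier, data integrity and message authentication, using the standard hmac and hashlib modules: a keyed digest lets a recipient detect any tampering with a message.

    import hashlib
    import hmac

    # Shared secret key (a hypothetical value, for illustration only).
    SECRET_KEY = b"classroom-demo-key"

    def sign(message: bytes) -> str:
        # Compute a keyed digest (HMAC-SHA256) over the message.
        return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, tag: str) -> bool:
        # compare_digest avoids timing side channels when checking the tag.
        return hmac.compare_digest(sign(message), tag)

    original = b"Transfer 100 EUR to account 12345"
    tag = sign(original)
    print(verify(original, tag))                              # True: message intact
    print(verify(b"Transfer 900 EUR to account 99999", tag))  # False: tampering detected

Even an exercise this small conveys the essential point that the secrecy of the key, not of the algorithm, carries the security burden.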
7.
Due diligence and accountability should be required of chief executive officers and public and private owners to institutionalize security management processes, assess their risks, and protect their information infrastructure assets, data, and personnel. The potential of market forces should be fully utilized to encourage private sector companies to protect their information networks, systems, and data. This process could include information security statements in filings for publicly traded companies, minimum insurance requirements for coverage of cyber incidents, and return on investment analyses.
Corporate directors and officers have a fiduciary duty of care to protect corporate assets. Since an estimated 80 percent of corporate assets today are digital,78 it logically follows that oversight of information security falls within the duty owed by officers and directors in conducting the operations of a corporation. Today, it is increasingly clear
that officers and boards of directors have a corporate governance responsibility with respect to the security of company data, systems, and networks. Hacking, denial of service attacks, economic espionage, and insider misuse of data and systems are commonplace and threaten the profitability of every business, leaving officers and directors vulnerable to lawsuits and civil and criminal penalties. To date, no shareholder suit has been brought against officers or directors for failure to take necessary steps to protect corporate systems and data; however, shareholders may have a valid basis for such derivative suits.79 The majority of U.S. jurisdictions follow the business judgment rule, under which the standard of care is that which a reasonably prudent director of a similar corporation would have used. The recent Delaware case, Caremark International Inc. Derivative Litigation, held that “a director's obligation includes a duty to attempt in good faith to assure that a corporate information and reporting system, which the board concludes is adequate, exists, and that failure to do so under certain circumstances may, in theory at least, render a director liable for losses caused by non-compliance with applicable legal standards”. The Caremark case noted that officer/director liability can arise in two contexts: (1) from losses arising out of ill-advised or negligent board decisions (which are broadly protected by the business judgment rule so long as the process employed was either rational or employed in a good faith effort) and (2) from circumstances where the board failed to act when “due attention” would have prevented the loss. In the latter situation, the Caremark court noted that:

[I]t would, in my opinion, be a mistake to conclude that ... corporate boards may satisfy their obligation to be reasonably informed concerning the corporation, without assuring themselves that information and reporting systems exist in the organization that are reasonably designed to provide to senior management and to the board itself timely, accurate information sufficient to allow management and the board, each within its scope, to reach informed judgments concerning both the corporation's compliance with law and its business performance. Obviously the level of detail that is appropriate for such an information system is a question of business judgment. ... But it is important that the board exercise a good faith judgment that the corporation's information and reporting system is in concept and design adequate to assure the board that appropriate information will come to its attention in a timely manner as a matter of ordinary operations, so that it may satisfy its responsibility.

Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996). The Caremark case could provide a basis for a shareholder suit against officers and directors of U.S. companies for failure to implement an information and reporting system on the security of corporate networks and data such that the board could (1) determine the corporation is adequately meeting statutory, regulatory, or contractual obligations to protect certain data from theft, disclosure or inappropriate use and (2) be assured that the data critical to normal business operations, share price, and market share is protected.80
There are also high risk situations where higher standards apply to directors and officers, such as acquisitions, takeovers, responses to shareholder suits, and distribution of assets to shareholders in preference over creditors. In these circumstances, directors and officers are required to obtain professional assistance or perform adequate analyses to mitigate the risks that ordinarily accompany these activities. Some information assurance experts assert that a “higher degree of care will also be required of Directors and Officers regarding the complex nature of issues involved in information assurance”.81 Securities laws and regulations require public corporations to adequately disclose in public filings and public communications relevant risks to the corporation and its assets. The U.S. Sarbanes-Oxley Act requires management's attestation that information assets are protected. Additional exposure is caused by insurance companies now routinely excluding hacking and IT-related incidents from general liability policies. Also, senior management in certain industry sectors may be subject to civil and criminal penalties for inadequate security and privacy of protected classes of data. And legal actions continue to mount against corporations for security and privacy breaches. The Independent Director put this in the context of information systems by reporting that:

Management of information risk is central to the success of any organization operating today. For Directors, this means that Board performance is increasingly being judged by how well their company measures up to internationally-accepted codes and guidelines on preferred Information Assurance practice.82

Additionally, when an organization is a victim of an attack on its information systems, whether from an insider or an outside bad actor, previous studies have shown that this can result in a lack of confidence in the company and even a drop in the company stock price.83 Consequently, shareholders may also initiate a derivative suit for loss to stock price or market share caused by inadequate attention by officers and directors to information security.84 According to the SANS Institute, the seven top management errors that lead to computer security vulnerabilities are:

“1. Assign untrained people to maintain security and provide neither the training nor the time to make it possible to do the job.
2. Fail to understand the relationship of information security to the business problem - they understand physical security but do not see the consequences of poor information security.
3. Fail to deal with the operational aspects of security: make a few fixes and then not allow the follow through necessary to ensure the problems stay fixed.
4. Rely primarily on a firewall.
5. Fail to realize how much money their information and organizational reputations are worth.
6. Authorize reactive, short-term fixes so problems re-emerge rapidly.
7. Pretend the problem will go away if they ignore it.”85
8.
In parallel to the elaboration and harmonization of national criminal codes, there should also be an effort to work toward equivalent civil responsibility laws worldwide. Civil responsibility should also be established for neglect, violation of fiduciary duties, inadequate risk assessment, and harm caused by cyber criminal and cyber terrorist activities.
Legal action taken in courts and by regulatory agencies, and underwriting requirements by insurance companies, are pushing civil responsibility for information security. Action taken in multinational fora is also expected to impact corporate liability and officer/director responsibility. Article 12 of the Council of Europe Convention on Cybercrime (CoE Convention) requires signatory states to establish laws that hold companies civilly, administratively, or criminally liable for cybercrimes that benefit the company and were made possible due to the lack of supervision or control by someone in a senior management position, such as an officer or director. Article 9 of the European Union's proposal for a Council Framework Decision on attacks against information systems mirrors the CoE language. These provisions have been cited as an example for emulation by a broader international constituency, although they would need to be adapted for insertion into the new Model Law on Cyberspace.
9.
Among the specific and concrete actions that should be considered is the possibility that commercial off-the-shelf (COTS) hardware, firmware, and software should be open source or at least be certified.
The concept of “open source” is now receiving wide attention from a global community of users and developers. Open source does not refer to the price of software; open source software may be distributed free of charge or for a fee. The concept of open source or “free software” lies in the freedom associated with the code. This freedom, however, is contained within set limitations. An open source license86 provides freedom to any programmer to use the code, but defines the social parameters programmers must observe regarding the code. Open source generally means that:
1. The software is developed by a community of programmers, usually from around the globe.
2. The source code is distributed or easily available either without charge or for a minimal fee.87
3. Improvements, changes, and corrections may be made to the software, but these must also be freely distributed without attempt to “privatize” the program. The license may require the source code to be distributed separately from modifications contained in “patch files”, it may completely restrict distribution of modified source code, or it may require derived works to be distributed under a different name or version number from the original.
4. The copyright is held by the original author(s).
5. The rights attached to the program must apply to all to whom the program is distributed, without restriction that it be used for only a certain business, etc., or that any other software distributed with the program need be open source.
6. The license must be technology neutral.88

In a nutshell, open source can generally be described as “an approach to software development with unique licensing arrangements and a community-based method of programming”.89 The reverse of commercial software licenses that restrict distribution, sale, modification, use, etc., open source provides the global community of programmers access to source code and provides “freedom” to work within a community of accepted norms with respect to how that software code is handled, modified, distributed, used, etc.90 Because the term “open source” is a descriptive term, it cannot be protected by a trademark. Therefore, in order to “mark” software that is distributed under a license that conforms to the Open Source Initiative (OSI) definition, the OSI has registered a certification mark, “OSI Certified”, for this single purpose and has created a graphical certification mark for it. OSI maintains a list of registered licenses.91 The Linux operating system is perhaps the best known open source software example; Apache, BIND, Netscape, and GNU/Linux, the open source program for Red Hat, are others.92 The OSI definition and its certification mark are applicable not only to software programs, but also to firmware programs offering an application-oriented usage of microprocessors and of digital control and processing units (e.g., by means of Read-Only Memory (ROMs)). An open source approach is not as easily applied to hardware. There is no standardized definition and understanding available for open source hardware, as there is for software or firmware. One obvious reason lies in the lack of an easy or inexpensive method for copying hardware, such as exists for software or firmware programs. However, in 1997, some ICT hardware manufacturers formed an Open Hardware Certification Program as a self-certification program for hardware manufacturers whose hardware is Linux or FreeBSD ready.93 Hardware with an HDL-specified description (which means that a hardware device is precisely specified by a Hardware-Description-Language program) enables easy copying and distribution of the hardware's specifications, but not of the hardware itself.94 With respect to ICT security considerations, open source or OSI certified programs could function in the marketplace to provide increased confidence in commercial off-the-shelf (COTS) products by providing:
• An approved license;
• A complete and certified description of the software or firmware and its functionalities or operations; and
• An understanding of its compatibilities and implementation.

From the COTS developers' point of view, however, traditional, commercially licensed software can have market advantages over open source. From the customer's point of view, open source enables a product's user to adjust, refine, adapt, or enlarge the product coincident with its specification and according to the customer's specific requirements. The open source movement is gaining momentum, especially in developing countries where governments and businesses chafe against high license fees for Microsoft and other proprietary software products. The movement is still relatively young, and refinements, as well as additional quality measures and specification standards, are certain to follow.
10.
Information security issues should also be addressed in forthcoming multilateral meetings. Regional organizations should also add to national and international efforts to combat attacks in cyberspace in their respective regional contexts.
In addition to action taken in the UN and the Council of Europe, activities regarding information security and cybercrime should proceed in other fora, including regional and multilateral organizations and meetings. Regional efforts consistent with the developing global legal framework are encouraged. Regional activities are often very productive because consensus is easier to reach within regional organizations and linkages are typically stronger than those in international fora. Additionally, certain actions that would promote information security and a harmonized global legal framework would be appropriate for discussion in the World Trade Organization Doha Round.
11.
International law enforcement organizations should assume a stronger role in promoting attention to cybercrime issues internationally. The competences and functions of Interpol and, in the European context, Europol should be substantially strengthened, including by examining their investigative options.
Disparities in the international legal environment greatly handicap law enforcement activities and often make it impossible to proceed in investigating cybercrime cases and bringing the perpetrators to justice. The speed and flexibility of cyber attacks (they can take place in an instant, or can be spread out over extended periods of time in a “low and slow” attack scenario that can be very difficult to detect) pose significant legal challenges to our traditional law enforcement environment. Particularly vexing legal issues include, but are not limited to: intercepting communications, searching and seizing electronic evidence, differing requirements for archiving logs of transactions and traffic generated at computer and communication systems, obtaining information from communication and Internet service providers, and ensuring validity of cybercrime evidence across a variety of legal jurisdictions. International law enforcement initiatives can leverage national efforts and create momentum for change. The EU has addressed the cooperation of international law enforcement with respect to cybercrime through the European Police Office (Europol).95 Headquartered in The Hague, The Netherlands, Europol is the EU's law enforcement organization responsible for improving the effectiveness of and cooperation between competent authorities in EU Member States. It was established on February 7, 1992, under the Treaty on European Union and is accountable to the Council of Ministers for Justice and Home Affairs. Europol became fully operational on July 1, 1999. Its mandate includes preventing and combating terrorism, drug trafficking, and other serious forms of international organized crime, such as immigration networks, vehicle trafficking, trafficking in human beings including child pornography, forgery of money and other means of payment, money laundering, and trafficking in radioactive and nuclear substances.
Europol has approximately 250 members on staff, all of whom have been assigned by various EU member nations. Approximately 45 of these staff members - known as Europol Liaison Officers (ELOs) - represent their nation's various law enforcement agencies such as police, customs, gendarmerie, and immigration services.96 Europol recently completed the phased deployment of The Europol Computer System (TECS). The new computer system is specifically designed to facilitate the sharing and analysis of criminal data between EU member nations and law enforcement organizations in other countries. Each EU member nation has assigned two Data Protection Experts to Europol to closely monitor how personal data is stored and used. In September 2000, the EU's Council of Ministers for Justice and Home Affairs asked EU member nations to start responding to requests from Europol to investigate specific cases, and to keep Europol informed about the status and results of the investigation. Since November 2000, EU member nations have been able to leverage the resources of Europol National Units (ENUs) on joint investigations in accordance with the Europol Convention97 and its implementing rules. The European Police Chiefs Operational Task Force98 coordinates its activities with Europol in combating transnational crime. The International Criminal Police Organization (Interpol) was founded in 1923 and has been located in Lyon, France since 1989. Interpol is an important link among law enforcement organizations globally.99 Interpol has 178 member countries and maintains close working relationships with dozens of intergovernmental bodies such as the Council of Europe and the World Customs Organization. Interpol's primary mission is to promote the widest possible mutual assistance between all criminal police authorities. Interpol has a system of offices around the world referred to as National Central Bureaus (NCBs). Each of its 178 member nations has an NCB station, generally within that nation's capital. One or more local law enforcement agencies are responsible for staffing the NCB and represent national law enforcement to Interpol. For example, in Canada, the Royal Canadian Mounted Police (RCMP) staff and support the NCB in Ottawa. Should a police officer in Montreal or Winnipeg need something from the police in Gaborone, Botswana, the Montreal police would route their request through their police computer systems to the NCB in Ottawa. The RCMP staff would then forward that request via a private encrypted computer network to the Interpol Secretariat General in Lyon, France. The bureau receiving the message at the Secretariat would read the message and forward it to the necessary agency in Botswana. Each of the 178 countries participating in the Interpol system has access to special computer and telephone systems to facilitate the transfer of this information. Interpol has been actively involved in combating Information Technology Crime (ITC) for a number of years. The Interpol General Secretariat has harnessed the expertise of its members in the field of ITC through “working parties” or groups of experts. Each working party consists of the heads or experienced members of national computer crime units. Working parties are designed to reflect regional expertise and are established in Europe, Asia, the Americas, and Africa, although each is in a different stage of development.
In addition, Interpol has created several handbooks and computer crime manuals that it distributes to law enforcement agencies worldwide to use as best practice guides. Interpol currently has a number of ongoing projects related to high technology crime, including information sharing mechanisms for law enforcement and a 24-hour-a-day, 7-day-a-week point-of-contact network to allow investigators in one jurisdiction to locate and communicate with their counterparts abroad.100
12.
The international science community should more vigorously address the scientific and technological issues that intersect with the legal and policy aspects of information security, including the use of ICTs and their impact on privacy and individual rights.
Increasingly, we realize that the globally connected network is a multidisciplinary undertaking that combines scientific and technological achievements with legal and policy considerations. Over the past few years, a legal and policy framework has developed that, in large part, is responsive to both the capabilities of networked communications and the vulnerabilities of Internet protocols, software, and networks. The ability of governments and private sector entities to access, gather, and retain vast amounts of information about Internet users has raised concerns among privacy groups, consumer advocates, and civil libertarians. Likewise, they have been alarmed by government use of the Internet and ICTs in national and global surveillance and by potential government access to Internet account and traffic data. To date, there has been little interaction and coordination between the scientific and technological communities and the legal/policy community. While generally aware of each other's endeavors, there has been minimal effort to identify critical intersection points and to engage in multidisciplinary initiatives to resolve critical information security problems. It is incumbent upon the scientists and technologists to bring together stakeholders from the legal and policy realms to explain the capabilities and vulnerabilities of ICTs and to begin a dialogue to bridge the gaps in understanding. For example, legislators and policymakers are currently developing privacy and security laws, often without a clear understanding of whether they are actually addressing the issues caused by technological weaknesses and vulnerabilities or merely papering over a problem area.
A.
Technologies With Significant Legal and Policy Implications
1. Encryption, Signatures, and Authentication

Cryptography has become an integral part of seeking to assure an acceptable level of security and privacy of communications and data storage. The development and use of sophisticated, strong cryptography has a long history as a technique used by governments to protect sensitive information. The development of public key cryptography101 in 1975, and the subsequent evolution of that approach, have put strong cryptography in the hands of private enterprises and the general public. Today, research and development into increasingly stronger, more efficient, and widely-usable encryption techniques continues at a high level. For years, legal and policy conflicts swirled around the public use of strong encryption technologies. The U.S., in particular, tried to regulate public use of encryption and the export of low-level encryption technologies and pushed legislative agendas mandating key escrow or embedded chips, arguing that law enforcement would be stymied without such controls. Fierce resistance by industry, academia, scientists,
technologists, and policymakers ultimately defeated these efforts, and the unregulated public use of encryption became the global standard. Today, only a few countries regulate public use of encryption, although many countries control the export of powerful, dual-use encryption technologies. A few countries, such as the U.K., require assistance with decryption or demand that the encryption key be given to law enforcement upon request.102 Overall, governments around the globe have concluded that the benefits of encryption outweigh the negative consequences of encrypted communications by criminals. As lawmakers moved away from controlling encryption, their understanding of the importance of information security resulted in the enactment of laws and regulations that promote the use of authentication and authorization technologies. There is little understanding, however, outside the scientific and technical communities regarding the capabilities to decrypt messages either in real time or offline. As more evidence mounts that Al Qaeda terrorists are using encryption technologies to protect their communications,103 the old fears surrounding encryption begin to surface once more. Because innovations are constantly changing both the state of encryption technologies and the ability to decipher these communications, a continuing dialogue between scientists, technologists, policymakers, and stakeholders is critical.
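To make the public-key idea concrete, the following is a minimal sketch in Python, assuming the widely used open-source "cryptography" package is installed; it illustrates the general technique only and does not describe any particular system discussed in this report.

    # Minimal public-key encryption sketch using the Python "cryptography" package.
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # The key pair: the public half may be handed to anyone; the private half stays secret.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # OAEP padding is the standard choice for RSA encryption.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone can encrypt with the public key; only the private key holder can decrypt.
    ciphertext = public_key.encrypt(b"confidential message", oaep)
    assert private_key.decrypt(ciphertext, oaep) == b"confidential message"

The asymmetry shown here, where encryption requires no shared secret, is precisely what moved strong cryptography out of government hands and into general public use.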
2. Tracking and Tracing Internet Communications

A technology issue central to deterring cyber attacks on information infrastructures is the degree to which attacks can be tracked to their origin. With the present TCP/IP protocol, there is very little ability to track and trace Internet attacks to their source.104 For example, information in an IP packet can easily be modified, the source address can be forged, and communications can be woven through intermediary hosts prior to reaching their destination (“packet laundering”).105 The critical link between technology and policy today is succinctly articulated by CERT/CC's Howard Lipson:

In this high-threat, target-rich environment, the technical ability to reliably track and trace intruders (supported by international agreements on monitoring and information sharing, and by international law proscribing attacks and specifying sanctions and punishment) is an indispensable element for enabling the continued use of the Internet to provide many of the essential services that societies depend on.106

Even with an accommodating policy environment, ISPs are likely to require both technical assistance and financial incentives to support tracking and tracing endeavors due to the cost and burden they impose on their operations. Emerging next-generation standards and protocols from the Internet Engineering Task Force (IETF) promise to enable improved security and significantly greater tracking and tracing of cyber-attacks. IPsec is an emerging security standard for IP that provides for packet authentication and confidentiality and can be used to cryptographically authenticate a packet's source address. The Internet Protocol Version 6 (IPv6) is the next generation standard protocol that is slowly replacing the current version, IPv4. The security features of IPsec are made available in every IPv6 implementation, although their use is optional. Moreover, IPv6's expanded header size can enable
more tracking and audit data to be stored. Its increased address space would make it possible (though not a requirement) for every network device to be assigned a static IP address, making it easier to link a particular IP address with an entity or individual. The adoption of IPv6 by the user community is proceeding slowly, however, due to high conversion costs.107 Most tracking and tracing approaches are only effective against attacks that generate large floods of attack packets. However, there is promising ongoing research focused on the capability to track even single attack packets to their source. Such a tracking capability would require the storage, for some limited time, of a digest of all packets seen by participating routers. This would require very large data storage resources, even if only a small fraction of each packet is retained. Such large-scale storage has significant privacy implications, and is clouded with jurisdictional, legal, and law enforcement considerations.108 Thus, the dialogue between scientists, technologists, and policymakers is all the more critical during this time of transition, when cyber attacks are on the rise and our ability to track and trace them is limited. Howard Lipson wisely notes:

The ability to accurately and precisely assign responsibility for cyber-attacks to entities or individuals (or to interrupt attacks in progress) would allow society's legal, political, and economic mechanisms to work both domestically and internationally, to deter future attacks and motivate evolutionary improvements in relevant laws, treaties, policies, and engineering technology. ... However, improvements to current Internet technology, including improved protocols, cannot succeed without an in-depth understanding and inclusion of policy issues to specify what information can be collected, shared, or retained, and how cooperation across administrative, jurisdictional, and national boundaries is to be accomplished. Nor can policy alone, with only high-level agreements in principle, create an effective tracking and tracing infrastructure that would support multilateral technical cooperation in the face of attacks rapidly propagating across the global Internet. To be of value, the engineering design of tracking and tracing technologies must be informed by policy considerations, and policy formulations must be guided by what is technically feasible and practical. International efforts to track and trace cyber-attacks must be supported by intense technical cooperation and collaboration in the form of a multilateral research, engineering, and technical advisory group that can provide the in-depth technical skill and training to significantly improve the capabilities of incident response teams and law enforcement.109

Anonymizer technologies can defeat tracking and tracing capabilities. These technologies are extremely controversial due to their ability to protect privacy on the one hand, while defeating the ability of law enforcement and private sector entities to track and trace attacks and illegal conduct on the other.
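The single-packet traceback research mentioned above stores compact digests rather than the packets themselves. The following toy Python sketch (an illustration under simplifying assumptions, not an implementation of any deployed system) shows the underlying idea: a router records hashes of invariant packet fields in a Bloom filter, so that investigators can later ask whether a given packet passed through, at a small and tunable false-positive rate, without the router retaining the traffic itself.

    import hashlib

    class PacketDigestLog:
        """Toy Bloom filter over packet digests, as a router might keep briefly."""

        def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 3):
            self.size = size_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(size_bits // 8)

        def _positions(self, digest_input: bytes):
            # Derive several bit positions from independently salted hashes.
            for salt in range(self.num_hashes):
                h = hashlib.sha256(bytes([salt]) + digest_input).digest()
                yield int.from_bytes(h[:8], "big") % self.size

        def record(self, invariant_fields: bytes) -> None:
            # Hash only fields that do not change hop to hop (not TTL or checksum).
            for p in self._positions(invariant_fields):
                self.bits[p // 8] |= 1 << (p % 8)

        def possibly_seen(self, invariant_fields: bytes) -> bool:
            # False means definitely not forwarded here; True means probably seen.
            return all(self.bits[p // 8] & (1 << (p % 8))
                       for p in self._positions(invariant_fields))

    log = PacketDigestLog()
    log.record(b"198.51.100.7|203.0.113.5|payload-prefix")
    print(log.possibly_seen(b"198.51.100.7|203.0.113.5|payload-prefix"))  # True
    print(log.possibly_seen(b"some packet never forwarded here"))         # False (w.h.p.)

A deployed design would age out filters after a short interval to bound memory, which is exactly where the retention, jurisdictional, and privacy questions raised above enter.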
3. Response and Recovery Technologies

Despite the theoretical and practical advances in tracking capabilities expected in the future, the prudent course of action for protecting information infrastructures is to adopt self-healing or self-mitigating architectures and operational procedures that are survivable in the face of sophisticated attacks. Survivability strategies include sophisticated schemes to simulate, detect, and respond to attacks, whether from outside or inside the system.110 This area will require continuing technical, legal, and policy collaboration, but the rewards could be rich.

4. Multilateral, Multidisciplinary Technical Research, Engineering, and Advisory Capability
Many nations are beginning to understand that the security of cyberspace requires a strategy that is linked to a nation's economic and national security interests. In February 2003, the U.S. released its National Strategy to Secure Cyberspace. The Strategy is intended to help the U.S. protect its critical infrastructures and to reduce vulnerabilities that can be exploited, in order to “ensure that such disruptions of cyberspace are infrequent, of minimal duration, manageable, and cause the least public damage”.111 Other nations are similarly taking a national look at how their public and private sectors are securing critical information infrastructures and at the relationship between cyber attacks and national and economic security. Numerous technical information security activities have also been undertaken by the U.S. National Institute of Standards and Technology (NIST), resulting in several government technical standards and criteria for security products. As a forerunner, in 1995, the British Standards Institution developed British Standard 7799, a Code of Practice for Information Security Management. This standard has now been accepted as an international standard, ISO/IEC 17799.112 Although international standards setting bodies, such as the IETF and IEEE,113 have been working closely in the area of cyber security and infrastructure protection for years, there is a lack of multidisciplinary collaboration on technical, legal, and policy issues at the nation state level. The Internet Society (ISOC), the main governing body of the Internet, presently covers some of this ground, but it is an independent, professional membership society comprised of more than 150 organizations and 11,000 individual members from 182 countries. It is not a multinational body of nation states that collectively discusses the array of issues concerned with cyber security and reaches agreements on cooperation, legitimate actions, and penal codes. It is impossible for any country to unilaterally achieve security in a globally connected network environment. Again, CERT/CC's Howard Lipson recognizes this void:

Regardless of the precise organizational structure, a multilateral technical research, engineering, and advisory capability is essential to (a) research and recommend the best tracking and tracing techniques and practices, (b) provide ongoing support for a multilateral tracking and tracing capability, (c) provide ongoing training and awareness for cooperating incident response and
investigatory teams world-wide, (d) make recommendations to international engineering bodies, such as the Internet Engineering Task Force (IETF), for protocol improvements and standards creation in support of member states' requirements for tracking and tracing attackers, (e) interact with those creating cyber-law and policy to ensure that the technical and non-technical approaches complement and support each other, (f) help assure that the tracking and tracing infrastructures and technologies of cooperating entities can interoperate, and (g) assess the results of cooperation already undertaken by technical and law enforcement agencies, in order to provide feedback for continual improvement.114
B.
Examples of Technologies Engendering Potential Conflict with Human Rights
1. Data mining, profiling, and biometric technologies

Of great concern since September 11 are information processing and retrieval technologies aimed at detecting and identifying terrorists from text-based and network-based databases through the identification and tracking of the actions of communities, the prototyping and profiling of suspect groups and individuals, and the matching of keywords, phrases, and patterns of expression. These technologies presuppose the existence of very large searchable databases. The concern over the excessive use of data warehousing and mining is exemplified by the U.S. debate over the Total Information Awareness (TIA) program115 being promoted by the U.S. Defense Advanced Research Projects Agency (DARPA). According to DARPA, TIA is developing: “1) architectures for a large-scale counter-terrorism database, for system elements associated with database population, and for integrating algorithms and mixed-initiative analytical tools; 2) novel methods for populating the database from existing sources, creating innovative new sources, and inventing new algorithms for mining, combining, and refining information for subsequent inclusion into the database; and, 3) revolutionary new models, algorithms, methods, tools, and techniques for analyzing and correlating information in the database to derive actionable intelligence.”116 DARPA is also developing Human Identification at a Distance (HumanID),117 a suite of automated biometric identification technologies to detect, recognize, and identify humans at great distances. TIA would monitor the daily personal transactions of Americans and others, including tracking the use of passports, driver's licenses, credit cards, airline tickets, and rental cars. Privacy groups and civil libertarian organizations immediately raised 1984-style Orwellian “Big Brother” concerns over such government use of these technologies. The U.S. Congress quickly became involved. Senator Patrick Leahy noted in a letter to U.S. Attorney General John Ashcroft that:

Collection and use by government law enforcement agencies of such commercial transactional data on law-abiding Americans poses unique
issues and concerns, however. These concerns include the specter of excessive government surveillance that may intrude on important privacy interests and chill the exercise of First Amendment-protected speech and associational rights.118

Subsequently, the U.S. Congress blocked funding for the TIA program.119 However, this is but one small system out of a vast array of government systems around the globe that use ICTs to monitor, track, and keep information on the activities and movements of people inside their countries. Authoritarian regimes routinely block access to certain Internet sites, and because they are also usually the monopoly provider of communications, they have unfettered access to an array of communication traffic and content data. However, even democracies such as the U.S. have developed sophisticated systems to monitor email traffic. The “Carnivore” system, developed by the FBI, can be installed at an ISP to monitor all traffic moving through that provider. Although the FBI claims the system is designed to “filter” traffic and allow investigators to see only those packets the FBI is lawfully authorized to obtain, privacy and civil liberties groups remain skeptical.120

2. Global electronic surveillance
The ECHELON system is an “automated global interception and relay system operated by the intelligence agencies in five nations”: the U.S., U.K., Canada, Australia, and New Zealand, with the U.S. National Security Agency at the helm.121 A provisional report of the European Parliament confirms that “the existence of a global system for intercepting communications, operating by means of cooperation proportionate to their capabilities among the USA, the UK, Canada, Australia and New Zealand under the UKUSA Agreement, is no longer in doubt”.122 The report further confirms that “the purpose of the system is to intercept private and commercial communications, and not military communications”.123 This system and its potential for violating the civil liberties of citizens has been the subject of inquiry by the legislatures of the Netherlands, Italy, and the United States, among others.124

3. Anonymity, privacy, and freedom of expression
Anonymity and privacy are frequently used interchangeably, especially in colloquial speech. Anonymity, seen as a part of privacy (privacy of identity), can be an important means of preserving international human rights and freedom of expression. Lack of anonymity in an expanding world of information technology makes it increasingly easy for private sector entities (with particular regard to economic interests) to gather vast amounts of information and track Internet activity, and for governments to conduct widespread surveillance of individuals and groups. Lack of anonymity, combined with “passive” monitoring techniques such as “cookies” and the more intrusive “clickstream” monitoring (a page-by-page tracking as a person wanders through the Internet), allows private sector entities to assemble detailed dossiers on individuals. This erosion of privacy is compounded by the weak privacy laws and regulations in the U.S., but is countered by the more stringent data protection afforded by the European Union.
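The dossier-building mechanism described above requires no sophistication. A few lines of Python (with invented log entries, purely to illustrate the point) suffice to turn a persistent cookie identifier into a page-by-page trail of one visitor's activity:

    from collections import defaultdict

    # Hypothetical web-server access log: (cookie_id, timestamp, page_visited).
    access_log = [
        ("u42", "2003-05-01T09:00", "/news/politics"),
        ("u17", "2003-05-01T09:02", "/sports/results"),
        ("u42", "2003-05-01T09:03", "/health/condition-x"),
        ("u42", "2003-05-01T09:10", "/shopping/checkout"),
    ]

    # Grouping entries by the persistent cookie reconstructs each visitor's clickstream.
    trails = defaultdict(list)
    for cookie_id, timestamp, page in access_log:
        trails[cookie_id].append((timestamp, page))

    # The trail for "u42" already hints at interests, health concerns, and purchases.
    print(trails["u42"])

Aggregated across many sites and long periods, such trails become the detailed dossiers at issue.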
A countervailing consideration is that the “anonymity enjoyed by today's cyber attackers poses a grave threat to the global information society, the progress of an information-based international economy, and the advancement of global collaboration and cooperation in all areas of human endeavor”.125 With respect to malicious cyber attacks by individual hackers, and the more ominous case of attacks by nation states (including acts of cyber warfare),126 the ability to deter attacks, obtain redress, or otherwise hold attackers accountable is directly linked to the ability to identify the sender and origin of the communication.127 Therefore, it is imperative that interests in tracking and tracing be balanced with legitimate privacy interests and rights provided under international law.
13.
The international scientific community, and in particular the World Federation of Scientists, should assist developing countries and donor organizations to understand better how ICTs can further development in an environment that promotes information security and bridges the Digital Divide.
Much of the work in addressing developmental and digital divide issues is seen as falling within the purview of political and economic decisionmakers. However, the scientific community can make significant contributions in this area because, among other reasons, of the rapid growth of peer-to-peer scientific networks which offer low-cost opportunities and solutions for developing countries. ICTs bring both opportunities and challenges to developing countries.128 The G8, World Bank, United Nations (UN), and U.S. Agency for International Development (USAID) are each committed to bridging the global “Digital Divide”.129 The donor community130 also understands that ICTs are a powerful development tool that can help boost economies, increase competitiveness, attract foreign direct investment (FDI), and raise the skill level of the workforce in developing countries. Developing countries also realize the potential impact of technology, and many are launching their own ICT initiatives and aggressively competing for donor funds to assist them. Internet growth works in their favor. Today, there are approximately 600 million people connected to the Internet. However, that online population accounts for only 10% of a world population of about 6 billion people. Since 65% of Americans are already online,131 we can expect some of the highest connectivity increases to be in the 180 developing countries around the globe. Indeed, Forrester Research predicts that by 2007, 70% of software programming will be performed in developing countries.132 Thus, developing countries have an unprecedented opportunity to seize upon the advantages of ICTs to propel their progression toward industrialization, market economies, and social advancements. These opportunities, many of which are directly dependent on inputs from the scientific community, include:
• Attracting foreign direct investment to (a) build infrastructure, (b) launch ICT projects, (c) partner with donor organizations and governments on pilot projects, and (d) tap undeveloped or under-developed markets.
• Privatizing and liberalizing monopoly providers to introduce competition, lower prices, and advance the deployment and utilization of ICTs.
• Attracting data processing applications such as data entry, customer service and telemarketing operations, records processing (accounts receivable, accounts payable, general ledger, etc.), order entry, inventory control, databank development, data storage operations, remote systems administration, etc.
• Attracting Internet start-up companies, e-commerce operations, and software development centers.
• Developing telemedicine and health care centers.
• Using ICTs for distance learning, education, brokerage services, and building workforce skills.
• Using ICTs for agri-business and agricultural information and industry sector support.
• Attracting light manufacturing operations.
• Modernizing the financial sector.
• Fostering the growth of small and medium-sized enterprises (SMEs) to spur job creation, innovation, flexibility, and competitiveness.
• Reforming and automating court administration and case management and improving the availability of judicial information.

While the contribution of the scientific community could be a force-multiplier, each of these opportunities is largely dependent upon the development of a legal and regulatory framework to support these activities. The legal framework is one of the most important factors because it touches upon all aspects of commerce, is critical to attracting investment, and is at the core of providing certainty to business operations. The term “legal framework” also includes public policy, which forms the underlying foundation of government support for ICTs and a favorable business environment. Information and infrastructure security are two of the most important components. With nearly 200 countries connected to the Internet, cybercrime has become a global issue that requires the full participation and cooperation of the public and private sectors in all countries, including the 180 developing countries around the globe. A major component of information and infrastructure security is a nation's ability to deter, detect, investigate, and prosecute cyber criminal activities. Weaknesses in any of these areas can compromise security not only in that country, but around the globe. This is due to the global, interconnected nature of the Internet and the way in which countries must rely upon each other's expertise and assistance in addressing cybercrime matters. The confidentiality, integrity, and availability of data and networks - including critical infrastructure - are central to attracting FDI and ICT operations to developing countries. The opportunities associated with ICTs are not guaranteed; they are dependent upon developing countries' ability to effectively address the additional challenge of cyber security and to take steps to actively participate in the global community in combating cybercrime. Appropriate security laws and regulations are also important because:
• They protect the integrity of the government and the reputation of the country.
• They help preclude a country from becoming a haven for bad actors, such as terrorists, organized crime, and fraud operations.
• They help prevent a country from becoming a repository for cyber-criminal data.
• They instill market confidence and certainty regarding business operations and attract foreign direct investment.
• They provide protection of classified, secret, confidential and proprietary information, criminal justice data, personal information, and certain categories of public data.
• They protect consumers and assist law enforcement and intelligence gathering activities.
• They deter corruption.
• They increase national security and reduce vulnerabilities to attacks and actions by terrorists and other rogue actors.
• They help protect corporations against risk of loss of market share, shareholder and class action lawsuits, damage to reputation, fraud, and civil and criminal fines and penalties.
• They provide a means of prosecution and civil action for acts against information and infrastructure.
• They increase the chance that electronic evidence in physical-world crimes, such as murder or kidnapping, will be available when needed.
• They create an atmosphere of stability in which economic and social welfare can flourish.

For the most part, developing countries are struggling with how to use e-commerce and ICTs in everyday government and business operations. The lack of an adequate legal framework - especially with respect to information and infrastructure security and computer crime - will diminish or prevent developing countries from grasping ICT opportunities. The reasons are clear:
• Internet and e-commerce operations require an enabling legal framework that also provides for security of data and networks.
• Data processing operations require information and infrastructure security laws for a safe operating environment and protection of data.
• Companies will not allow their data to be processed in countries that do not have adequate legal protections against economic espionage, computer crime, infrastructure attacks, and misuse of telecommunications devices and equipment.
• Certain laws, such as the EU data protection directive, require that countries afford equal legal protections against misuse of personal data.

Many of the inadequacies in addressing these critical issues in developing countries are due to shortages in scientific and knowledge-based resources. Much is also due to scarcities in financial resources, which in turn constrict the enormous potential inherent in the large human resource base in the developing world. By helping to identify and discover low-cost solutions, and by closer coordination with other relevant partners, the scientific community can unleash these human resources and place them at the service of the developmental effort. The World Federation of Scientists could act as an important catalyst in this effort. Deeper consideration of these issues is indicated in the future; the PMP intends to focus on some of them in subsequent meetings.
List of PMP Members

William A. Barletta
William A. Barletta is Director of the Accelerator and Fusion Research Division and the Office of Homeland Security at Lawrence Berkeley National Laboratory. He is an Editor of Nuclear Instruments and Methods A, an Editor of the Internet Journal of Medical Technology, Chairman of the Board of Governors of the U.S. Particle Accelerator School, and a Member of the Governing Board of the Virtual National Laboratory for Heavy Ion Fusion. His recent research has concentrated on cyber security and the application of neutron sources and bright ion beams to nanotechnology and medicine.

Olivia A. Bosch
Olivia Bosch is currently a Senior Research Fellow in the New Security Issues Programme of The Royal Institute of International Affairs in London. Previously, she worked as a Senior Fellow at the Center for Global Security Research (Lawrence Livermore National Laboratory, Livermore, California) and at the International Institute for Strategic Studies in London.

Dmitry Chereshkin
Dr. Dmitry S. Chereshkin is an Academician and Vice-President of the Russian Academy of Natural Sciences and a Professor of Computer Sciences at the Institute for Systems Analysis. He currently acts as Deputy Chairman of the Government's Workshop Group to elaborate the Information Development Strategy of Russia.

Ahmad Kamal
Ambassador Ahmad Kamal served as a professional diplomat in the Ministry of Foreign Affairs of Pakistan for close to forty years until his retirement in 1999. During this period, he held diplomatic postings in India, Belgium, France, the Soviet Union, Saudi Arabia, the Republic of Korea, and with the United Nations both in Geneva and in New York. He continues to be a Senior Fellow of the United Nations Institute for Training and Research. He is also the Founding President and CEO of The Ambassador's Club at the United Nations.

Andrei V. Krutskikh
Prof. Dr. Andrei V. Krutskikh is a diplomat and politologist, specializing in issues of disarmament and international cooperation in the field of science and technology. He has served in the diplomatic service of the Foreign Affairs Ministry (MFA) of Russia since 1973 and has been stationed at Russian embassies in the USA and Canada. Dr. Krutskikh was a member of the Russian negotiating teams for the SALT II and INF Treaties. At present, he serves as deputy director of the department in the MFA for security, technological, and disarmament affairs. He is a Member of the International Academies on Informatization and Telecommunication and a Professor at the Moscow State Institute (University) of International Relations.
Axel H.R. Lehmann
Prof. Dr. Lehmann completed his studies in Electrical Engineering (Dipl.-Ing.) and received his doctorate in Informatics at the University of Karlsruhe in Germany. From 1982-1987, he was a research assistant and Visiting Professor at the Universities of Karlsruhe and Hamburg. Since 1987, Dr. Lehmann has been Full Professor for Informatics at the Faculty for Informatics, Universitaet der Bundeswehr Muenchen. Major positions and activities include Dean of the Faculty for Informatics (1995-1997) and member of the Academic Senate of the Universitaet. He served as Vice-president and President of the Society for Modeling and Simulation International from 1993-2000 and is a member of an Advisory Council for the Ministry of Science, Culture, and Research, Baden-Wuerttemberg, Germany.

Timothy L. Thomas
Mr. Thomas works at the U.S. Army's Foreign Military Studies Office at Fort Leavenworth, Kansas.

Vitali Tsygichko
Prof. Dr. Tsygichko is an expert of the Federal Assembly of the Russian Federation and a professor at the Institute for Systems Analysis of the Russian Academy of Sciences. The author of six scientific books and more than 200 articles, Dr. Tsygichko is a Full Member of the Russian Academy of Natural Sciences and full professor of cybernetics in the field of systems analysis and decision-making systems for national security problems. He is a retired colonel and received his Doctor of Technical Sciences (Cybernetics) from Moscow University.

Henning Wegener
Dr. Henning Wegener serves as Chairman of the World Federation of Scientists Permanent Monitoring Panel on Information Security. A German diplomat and lawyer, Dr. Wegener received his LL.B. from the University of Bonn, his M.C.L. from George Washington University, and his LL.M. and J.S.D. from Yale University. He has undertaken further studies at the Sorbonne in Paris. Ambassador Wegener joined the German Federal Foreign Office in 1962. From 1981-1986 he was Ambassador in Geneva, and from 1986-1991 he was Assistant Secretary General for Political Affairs of the North Atlantic Treaty Organization in Brussels. Dr. Wegener was Lecturer in Political Science at the Free University of Berlin from 1990-1995, and from 1991-1995 he was Deputy Secretary of the Federal Press and Information Office in Bonn. From 1995-1999 he was Ambassador of Germany to the Kingdom of Spain and to the Principality of Andorra. Since 2000, he has been a consultant in Madrid. He has published extensively on foreign and security policy.
Jody R. Westby
Ms. Westby is founder and President of The Work-IT Group, specializing in privacy and security, cybercrime, and information warfare. Previously, Ms. Westby was Chief Administrative Officer and Counsel of In-Q-Tel, Inc., a corporation devoted to finding unclassified, commercial solutions to IT problems facing the U.S. intelligence community. As a practicing attorney, Ms. Westby practiced international trade, technology, and intellectual property law with the New York firms of Paul, Weiss, Rifkind, Wharton & Garrison and Shearman & Sterling. As Senior Fellow and Director of Information Technology Studies for The Progress & Freedom Foundation, she directed and managed IT projects on an array of cutting-edge issues. Prior to that, Ms. Westby was Director of Domestic Policy for the U.S. Chamber of Commerce. Ms. Westby is chair of the American Bar Association's Privacy and Computer Crime Committee and was chair, co-author and editor of its International Guide to Combating Cybercrime, International Strategy for Cyberspace Security, and International Corporate Privacy Handbook.
ENDNOTES
1 In the context of the work of the PMP and the Recommendations and Explanatory Comments herein, the term "information security" is intended to encompass the broader scope of cyber security, which includes the security of data, applications, operating systems, and networks.
2 Eduardo Gelbstein and Ahmad Kamal, Information Insecurity: A Survival Guide to the Uncharted Territories of Cyber-threats and Cyber-security, United Nations ICT Task Force and United Nations Institute for Training and Research, 2nd ed., Nov. 2002 at 1, http://www.un.int/kamal/information insecurity (hereinafter "Gelbstein and Kamal").
3 Howard F. Lipson, Tracking and Tracing Cyber-Attacks: Technical Challenges and Global Policy Issues, CERT Coordination Center, Special Report CMU/SEI-2002-SR-009, Nov. 2002 at 10, http://www.cert.org/archive/pdf/02sr009.pdf (hereinafter "Lipson").
4 See CERT/CC Statistics 1988-2003, http://www.cert.org/stats/.
5 Gelbstein and Kamal at 20-21, http://www.un.int/kamal/information insecurity.
6 Richard Power, "2002 CSI/FBI Computer Crime and Security Survey," Computer Security Issues & Trends, Vol. VIII, No. 1, Spring 2002 at 10-11, http://www.gocsi.com/pdfs/fbi/FBI2002.pdf.
7 National Strategy for Homeland Security, Office of Homeland Security, July 2002 at 30, http://www.caci.com/homeland_security/nat_strat.shtml.
8 Jody R. Westby and William A. Barletta, "Public and Private Sector Responsibilities for Information Security," Mar. 2003 at 2-3, http://www.itis-ev.de/infosecur (citing Barton Gellman, "Cyber-Attacks by Al Qaeda Feared," Washington Post, June 26, 2002, http://www.washingtonpost.com/wp-dyn/articles/A50765-2002Jun26.html) (hereinafter "Westby and Barletta Public-Private Responsibilities").
9 Jody R. Westby and William A. Barletta, "Consequence Management of Acts of Disruption," Aug. 2002 at 3, http://www.itis-ev.de/infosecur (citing "G-7 to Call for Police Network," Wall Street Journal, Apr. 15, 2002 at A4) (hereinafter "Westby and Barletta Consequence Management").
10 Id. at 2 (citing "Security: Improvements Needed to Reduce Risk to Critical Federal Operations and Assets," GAO Testimony of Robert F. Dacey, Director, Information Security Issues, Before the Subcommittee on Government Efficiency, Financial Management, and Intergovernmental Relations, Committee on Government Reform, House of Representatives, Nov. 9, 2001, GAO-02-231T at 3).
11 Id. at 2-3 (citing Hanan Sher, "Cyberterror Should Be Int'l Crime," http://www.newsbytes.com/news/00/157986.html).
12 Id. at 3 (citing John Lancaster, "Abroad at Home," Nov. 3, 2000, at A31, http://washingtonpost.com/ac2/wp-dyn/A4288-2000Nov2?language=printer).
13 Bill Miller, "Worries of Cyberattacks on U.S. Are Aired," The Washington Post, Apr. 26, 2002 at A26.
14 Vitali Tsygichko, "Cyber Weapons as a New Means of Combat," Sept. 23, 2002 at 4, http://www.itis-ev.de/infosecur (hereinafter "Tsygichko").
15 Carter Gilmore, "The Future of Information Warfare," Dec. 28, 2001, http://rr.sans.org/infowar/future_infowar.php (citing Department of Defense Dictionary of Military and Associated Terms, Joint Pub. 1-02 at 209).
16 Dorothy E. Denning, "Cyberterrorism," Testimony before the Special Oversight Panel on Terrorism, Committee on Armed Services, U.S. House of Representatives, May 23, 2000, http://www.terrorism.com/documents/denning-testimony.shtml.
17 Anne Marie Squeo, "U.S. Studies Using 'E-Bomb' in Iraq," The Wall Street Journal, Feb. 20, 2003 at A3, A9.
18 Gelbstein and Kamal at 3, http://www.un.int/kamal/information insecurity.
19 Id. at 8.
20 Westby and Barletta Consequence Management at 1, http://www.itis-ev.de/infosecur (citing Global Internet Statistics: Sources & References, Global Internet Statistics (by Language), Mar. 31, 2002, http://www.global-reach.biz/globstats/evol.html).
21 Id. (citing Dave Krisula, "The History of the Internet," Aug. 2001, http://www.davesite.com/webstation/net-history.shtml).
22 These items are described in detail in the paper by Timothy L. Thomas (with Karen Matthews), "The Computer: Cyber Cop or Cyber Criminal?" http://www.itis-ev.de/infosecur.
23 Dorothy E. Denning, "Activism, Hacktivism, and Cyberterrorism: The Internet as a Tool for Influencing Foreign Policy," Internet and International Systems: Information Technology and Foreign Policy Decisionmaking Workshop, http://www.nautilus.org/info-policy/workshop/papers/denning.html.
24 Westby and Barletta Consequence Management at 1, http://www.itis-ev.de/infosecur.
25 Ahmad Kamal, "New Forms of Confrontation: Cyber-Terrorism and Cyber-Crime," Aug. 2002 at 2, http://www.itis-ev.de/infosecur.
26 Id.
27 The international law aspects of this statement will also be considered in the context of Recommendation 3 and considered in depth in papers by Messrs. Krutskikh and Tsygichko, http://www.itis-ev.de/infosecur.
28 Gelbstein and Kamal at 123, http://www.un.int/kamal/information insecurity.
29 Id.
30 Jody R. Westby, ed., International Guide to Combating Cybercrime, American Bar Association, Section of Science & Technology Law, Privacy & Computer Crime Committee, 2003 at 11, http://www.abanet.org/abapubs/books/cybercrime/ (hereinafter "Westby Cybercrime").
31 Council of Europe Convention on Cybercrime, Budapest, 23.XI.2001 (ETS No. 185) (2002), http://conventions.coe.int/Treaty/EN/CadreListeTraites.htm (hereinafter "CoE Convention"); Press Release, "Budapest, November 2001: opening for signature of the first international treaty to combat cybercrime," Council of Europe, Nov. 14, 2001. The criminal law aspects of information security are further developed in Recommendation 2 and the PMP paper by Henning Wegener, "Guidelines for national criminal codes and their application throughout the international community," Jan. 2003 at 7, http://www.itis-ev.de/infosecur (hereinafter "Wegener Guidelines").
32 G8 Recommendations on Transnational Crime, Section D, High-Tech and Computer-Related Crimes, item 2.
33 Bradley Graham, "Bush Orders Guidelines for Cyber-Warfare," The Washington Post, Feb. 7, 2003 at A01.
34 United Nations Commission on International Trade Law (UNCITRAL) Model Law on Electronic Signatures (2001) and Model Law on Electronic Commerce With Guide to Enactment (1996).
35 Wegener Guidelines at 7, http://www.itis-ev.de/infosecur.
36 Lisa M. Bowman, "Enforcing Laws in a Borderless Web," CNET News.com, http://news.com.com/2100-1023-927316.html; Westby Cybercrime at 54-59, http://www.abanet.org/abapubs/books/cybercrime/. See also Peter Swire, "Of Elephants, Mice, and Privacy: The International Choice of Law and the Internet," 32 Int'l Law 991, 1016 (1998).
37 Westby Cybercrime at 51-52, http://www.abanet.org/abapubs/books/cybercrime/.
38 Gelbstein and Kamal at 118, http://www.un.int/kamal/information insecurity.
39 P. Meller, "EU pact would criminalize protesters who use the Net," The New York Times, Feb. 5, 2003, http://www.iht.com/articles/88499.htm.
40 Westby Cybercrime at 95-104, http://www.abanet.org/abapubs/books/cybercrime/.
41 Olivia Bosch, "International Monitoring Mechanisms for Critical Information Infrastructure Protection," http://www.itis-ev.de/infosecur (hereinafter "Bosch Monitoring").
42 Westby Cybercrime at 23, http://www.abanet.org/abapubs/books/cybercrime/.
43 Wegener Guidelines at 7, http://www.itis-ev.de/infosecur.
44 Bosch Monitoring at 7, http://www.itis-ev.de/infosecur.
45 Proposal for a Council Framework Decision on attacks against information systems, Commission of the European Communities, Brussels, Apr. 19, 2002, COM(2002) 173 final, adopted by EU Ministers of Justice Mar. 4, 2003, http://europa.eu.int/eur-lex/en/com/pdf/2002/com2002_0173en01.pdf.
46 Wegener Guidelines at 1-3, http://www.itis-ev.de/infosecur; Westby Cybercrime at 1-2, http://www.abanet.org/abapubs/books/cybercrime/.
47 Wegener at 4, 14, http://www.itis-ev.de/infosecur.
48 Proposal for a Council Framework Decision on attacks against information systems, Commission of the European Communities, Brussels, Apr. 19, 2002, COM(2002) 173 final, adopted by EU Ministers of Justice Mar. 4, 2003, http://europa.eu.int/eur-lex/en/com/pdf/2002/com2002_0173en01.pdf.
49 Lipson at 3, http://www.cert.org/archive/pdf/02sr009.pdf.
50 TCP/IP (Transmission Control Protocol/Internet Protocol). Lipson at 5, http://www.cert.org/archive/pdf/02sr009.pdf.
51 Lipson at 5, http://www.cert.org/archive/pdf/02sr009.pdf.
52 Id. at 13.
53 Id.
54 Id. at 47.
55 Gregory D. Grove, Seymour E. Goodman, and Stephen J. Lukasik, "Cyber-attacks and International Law," Survival, Vol. 42, No. 3, Autumn 2000 at 100, http://survival.oupjournals.org/cgi/content/abstract/42/3/89 (hereinafter "Grove, Goodman, and Lukasik").
56 Id. at 90.
57 Tsygichko at 5-6, http://www.itis-ev.de/infosecur.
58 Westby and Barletta Consequence Management at 9, http://www.itis-ev.de/infosecur.
59 See also Timothy L. Thomas, "Al Qaeda and the Internet: The Danger of 'Cyberplanning,'" Parameters, Spring 2003, pp. 112-23.
60 Westby and Barletta Consequence Management at 8, http://www.itis-ev.de/infosecur.
61 Grove, Goodman, and Lukasik at 93, http://survival.oupjournals.org/cgi/content/abstract/42/3/89.
62 Id.
63 Id. (citing Walter G. Sharp, Sr., Cyberspace and the Use of Force, Aegis Research, Falls Church, VA, 1999, at 102).
64 Id. at 95; Timothy L. Thomas (with Karen Matthews), "The Computer: Cyber Cop or Cyber Criminal?" http://www.itis-ev.de/infosecur.
65 Westby and Barletta Consequence Management at 8.
66 Grove, Goodman, and Lukasik at 94, 97-100, http://survival.oupjournals.org/cgi/content/abstract/42/3/89.
67 Andrei V. Krutskikh, "International Information Security and Negotiations," Mar. 2003 at 3-4, http://www.itis-ev.de/infosecur (hereinafter "Krutskikh"); see also Tsygichko, http://www.itis-ev.de/infosecur.
68 Krutskikh at 3, http://www.itis-ev.de/infosecur.
69 Krutskikh at 9-11, http://www.itis-ev.de/infosecur.
70 Joint US-Russia Statement on Common Security Challenges at the Threshold of the 21st Century, Seventh Clinton-Yeltsin Summit, Sept. 2, 1998, http://www.ceip.org/files/projects/npp/resources/summits.htm#security; Krutskikh at 14-15, http://www.itis-ev.de/infosecur.
71 Krutskikh at 25, http://www.itis-ev.de/infosecur.
72 Id. at 29.
73 Grove, Goodman, and Lukasik at 100, http://survival.oupjournals.org/cgi/content/abstract/42/3/89.
74 Social engineering refers to the false representation that one has system administration authorities with the intention of luring the system user into revealing critical authorization or access controls, or similar types of deceptive behavior that enables an unauthorized user access to information or infrastructure.
75 See, e.g., "International Standard ISO/IEC 17799:2000 Code of Practice for Information Security Management, Frequently Asked Questions," Nov. 2002, http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-faq.pdf.
76 Gelbstein and Kamal, http://www.un.int/kamal/information insecurity; see, e.g., Westby Cybercrime at 161-70, http://www.abanet.org/abapubs/books/cybercrime/; Jody R. Westby, ed., International Strategy for Cyberspace Security, American Bar Association, Section of Science & Technology Law, Privacy & Computer Crime Committee, ABA Publishing, to be published fall 2003.
77 See also Axel Lehmann, "Heightening Public Awareness and Education on Information Security," http://www.itis-ev.de/infosecur.
78 "Cybercrime," Business Week, Feb. 21, 2000.
79 Jody R. Westby, "Protection of Trade Secrets and Confidential Information: How to Guard Against Security Breaches and Economic Espionage," Intellectual Property Counselor (Jan. 2000) at 4-5.
80 See, e.g., id.; for a general discussion on corporate liability related to board and officer responsibilities to ensure adequate information and control systems are in place, see Steven G. Schulman and U. Seth Ottensoser, "Duties and Liabilities of Outside Directors to Ensure That Adequate Information and Control Systems are in Place - A Study in Delaware Law and The Private Securities Litigation Reform Act of 1995," Professional Liability Underwriting Society, 2002 D&O Symposium, Feb. 6-7, 2002, http://www.plusweb.org/Events/DO/materials/2002/Source/Duties%20and%20Liabilities.pdf.
81 Dr. John H. Nugent, CPA, "Corporate Officer and Director Information Assurance (IA) Liability Issues: A Layman's Perspective," December 15, 2002, http://usmweb.udallas.edu/info_assurance.
82 Id. (citing Dr. Andrew Rathmell, Chairman of the Information Assurance Advisory Council, "Information Assurance: Protecting your Key Asset," http://www.iaac.ac.uk).
83 A. Marshall Acuff, Jr., "Information Security Impacting Securities Valuations: Information Technology and the Internet Changing the Face of Business," Salomon Smith Barney, 2000, at 3-4.
84 Much of this section was taken from: Jody R. Westby, ed., International Strategy for Cyberspace Security, American Bar Association, Section of Science & Technology Law, Privacy & Computer Crime Committee, ABA Publishing, to be published fall 2003.
85 "The 7 Top Management Errors that Lead to Computer Security Vulnerabilities," The SANS Institute, http://www.sans.org/resources/errors.php.
86 See http://www.opensource.org/licenses/ for access to an array of approved open source licenses.
87 The Open Source Initiative requires free distribution, although a license "shall not restrict any party from selling or giving away the software. ... The license shall not require a royalty or other fee for such sale." Open Source Initiative, The Open Source Definition, http://opensource.org/docs/def_print.php.
88 David McGowan, "Legal Implications of Open-Source Software," Univ. of Ill. Law Rev., Vol. No. 1, 2001 at 241 (hereinafter "McGowan"); The Open Source Definition, Version 1.9, Open Source Initiative, http://opensource.org/docs/def_print.php. Open source licenses are not consistent with the intent and meaning of traditional software licenses and have not been tested in court. Id. at 243.
89 Dennis M. Kennedy, "A Primer on Open Source Licensing Legal Issues: Copyright, Copyleft and Copyfuture," at 1, http://www.denniskennedy.com/opensourcedmk.pdf (hereinafter "Kennedy").
90 McGowan at 244-45, http://opensource.org/docs/def_print.php; Kennedy at 3-4, http://www.denniskennedy.com/opensourcedmk.pdf.
91 OSI Certification Mark and Program, Open Source Initiative, http://opensource.org/docs/certification_mark.php.
92 McGowan at 241, http://opensource.org/docs/def_print.php; Kennedy at 1, 9, http://www.denniskennedy.com/opensourcedmk.pdf.
93 Open Hardware Certification Program, http://www.open-hardware.org/.
94 Richard Stallman, "Free Hardware," http://features.linuxtoday.com/news_story.php3?ltsn=1999-06-22-00505-NWLF.
95 See Europol's website at http://www.europol.eu.int/home.htm.
96 See http://www.europol.eu.int/content.htm?links/en.htm for links to EU Member States' national law enforcement websites, links to European institutions and international organizations, and links to other law enforcement agencies and organizations.
97 The text of the Europol Convention can be found at http://www.europol.eu.int/content.htm?legal/conv/en.htm.
98 See http://www.eurunion.org/partner/EUUSTerror/PoliceChiefsTaskForce.htm for more information on the European Police Chiefs Operational Task Force.
99 See Interpol's website at http://www.interpol.int/.
100 See http://www.interpol.int for further information on Interpol. Much of the commentary to this Recommendation was taken from the Law Enforcement Chapter of the International Guide for Combating Cybercrime, which was co-authored and edited by Jody Westby. See Westby Cybercrime at 95-98, http://www.abanet.org/abapubs/books/cybercrime/.
101 Whitfield Diffie and M.E. Hellman, "New Directions in Cryptography," IEEE Transactions on Information Theory, Vol. IT-22, Nov. 1976 at 644-654.
102 Westby Cybercrime at 44, 74, http://www.abanet.org/abapubs/books/cybercrime/ (citing Cryptography and Liberty 2000: An International Survey of Encryption Policy, Electronic Privacy Information Center, http://www2.epic.org/reports/crypto2000).
103 Timothy L. Thomas, "Al Qaeda and the Internet: The Danger of 'Cyberplanning,'" Parameters, Spring 2003 at 112.
104 Lipson at 5, 13, http://www.cert.org/archive/pdf/02sr009.pdf.
105 Id. at 13-15.
106 Id. at 16.
107 Id. at 60-61.
108 Id. at 43.
109 Id. at 63-64 (emphasis added).
110 Howard F. Lipson and David A. Fisher, "Survivability - A New Technical and Business Perspective on Security," http://www.cert.org/archive/pdf/busperspec.pdf; Westby and Barletta Consequence Management at 9-12, http://www.itis-ev.de/infosecur.
111 The National Strategy to Secure Cyberspace, cover letter from President Bush, Feb. 2003, http://www.whitehouse.gov/pcipb/.
112 ISO/IEC 17799:2000, Information technology - Code of practice for information security management.
113 Internet Engineering Task Force (IETF), http://www.ietf.org; Institute of Electrical and Electronics Engineers (IEEE), http://www.ieee.org.
114 Lipson, p. 48 (emphasis in original), http://www.cert.org/archive/pdf/02sr009.pdf.
115 This system is now being referred to as the Terrorism Information Awareness program. See "DOD surveillance system renamed, But details of Pentagon data-gathering project unchanged," http://www.stacks.msnbc.com/news/916028.asp.
116 The "Total Information Awareness (TIA)" program being promoted by the US Defense Advanced Research Projects Agency (DARPA).
117 "Human ID at a Distance (HumanID)," http://www.darpa.mil/iao/HID.htm.
118 "Letter to Attorney General John Ashcroft," U.S. Senator Patrick Leahy, January 10, 2003, http://www.senate.gov/~leahy/press/200301/011003.html.
119 "Terrorism spying project to end: Personal records of millions had been targeted," Sept. 25, 2003; "... fears, says the Pentagon," Washington Times, May 21, 2003.
120 "The Carnivore FOIA Litigation," http://www.epic.org/privacy/carnivore/; see also "Internet and Data Interception Capabilities Developed by the FBI," Statement for the Record of Donald M. Kerr, Assistant Director, Laboratory Division, Federal Bureau of Investigation, Before the United States House of Representatives, Committee on the Judiciary, Subcommittee on the Constitution, July 24, 2000, http://www.fbi.gov/congress/congress00/kerr072400.htm; "Carnivore Diagnostic Tool," Statement for the Record of Donald M. Kerr, Assistant Director, Laboratory Division, Federal Bureau of Investigation, Before the United States Senate, Committee on the Judiciary, Sept. 6, 2000, http://www.fbi.gov/congress/congress00/kerr090600.htm.
121 "Answers to Frequently Asked Questions (FAQ) about Echelon," Feb. 7, 2002, http://archive.aclu.org/echelonwatch/faq.
122 Draft Report on the existence of a global system for the interception of private and commercial communications (ECHELON interception system), section "Motion for a Resolution," Temporary Committee on the ECHELON Interception System, European Parliament, 18 May 2001, http://www.europarl.eu.int/tempcom/echelon/pdf.
123 Id.
124 Jelle van Buuren, "Hearing On Echelon In Dutch Parliament," Heise Telepolis, Jan. 23, 2001 (available at http://www.heise.de/tp/) and http://archive.aclu.org/echelonwatch/faq.html.
125 Lipson at 4, http://www.cert.org/archive/pdf/02sr009.pdf.
126 A discussion of cyber attacks from an arms control perspective is presented in V. Tsygichko, "Cyber Weapons as a New Means of Combat," http://www.itis-ev.de/infosecur.
127 Lipson at 18, http://www.cert.org/archive/pdf/02sr009.pdf.
128 The explanatory comments for this Recommendation are, in large part, taken from the International Guide to Combating Cybercrime, which was written and copyrighted by Jody Westby. The Cybercrime Guide was written to assist developing countries understand cybercrime and the steps they needed to take to become active participants in combating cybercrime on a global scale. See Westby Cybercrime at 11-17, http://www.abanet.org/abapubs/books/cybercrime/.
129 "Digital Divide" refers to "The gap between those able to benefit by digital technologies and those who are not."
130 The donor community consists of aid institutions such as The World Bank Group, the U.S. Agency for International Development (USAID), United Nations (UN), Canada International Development Agency (CIDA), European Bank for Reconstruction and Development (EBRD), Inter-American Development Bank (IADB), and numerous other development banks and assistance organizations.
131 Global Internet Statistics: Sources & References, Global Internet Statistics (by Language), Mar. 31, 2002, http://www.global-reach.biz/globstats/evol.html.
132 "Taking up technology," Financial Times, Apr. 2, 2002, at 8.
RECENT ACTIVITIES OF PMPs:
FLOODS AND UNEXPECTED METEOROLOGICAL EVENTS, WATER, AND CLIMATE
ROBERT CLARK
University of Arizona, Tucson, USA

Activities in 2002-2003 of the Permanent Monitoring Panels (PMPs) on Defense Against Floods and Unexpected Meteorological Events; Water; and Climate, Ozone and Greenhouse Effects, assisted by the PMP on Pollution, have centered on two main topics:
1. Evaluation of the hydrologic impacts of the Gran Sasso Laboratory extension, and
2. Planning and sustainable development and operation of water and other environmental resources in the Mediterranean region.
Meetings were held in Gran Sasso and Rome during the period 3-7 June 2003 with Italian scientists concerning both topics. The World Federation of Scientists (WFS) Task Force consisted of the following: Robert A. Clark, University of Arizona, USA; Margaret S. Petersen, University of Arizona, USA; Richard Ragaini, Lawrence Livermore National Laboratory, USA; Soroosh Sorooshian, University of Arizona, USA; William A. Sprigg, University of Arizona, USA; and Aaron Yair, Hebrew University, Israel. The results of the meetings on the two main topics were as follows:
TOPIC 1 - GRAN SASSO LABORATORY EXTENSION
Introduction
The WFS Task Force met with Italian engineers and scientists at Gran Sasso and at the offices of the Istituto Nazionale di Fisica Nucleare (INFN) in Rome to discuss the potential impact of the proposed Gran Sasso Laboratory extension on water resources of the local area. Discussions centered on two major technical questions related to the Laboratory extension: 1) potential impacts on local domestic water supplies, particularly in the Teramo area near the east portals of the A-25 highway tunnels; and 2) potential impacts of laboratory operation on local water quality.
Findings of Fact
A. Findings with regard to water quantity.
1. Infiltration in the Gran Sasso varies seasonally, with about one-third occurring in the summer, indicating that snowmelt is a major contributor to ground water resources of the area.
2. In May-June 1991, fourteen boreholes were drilled within the vicinity of the planned location of the laboratory extension project. No ground water was intercepted by these borings.
3. Time-series data show drainage outflow along the highway tunnels from the time the tunnels were bored, 1973, up to 1998. These data indicate that, after initial dewatering, the hydrogeology stabilized to the natural annual variability of the level of the ground-water table.
B. Findings with regard to water quality.
The Laboratory extension project will include provision of a water treatment plant near the west portals of the highway tunnels, and all waste-water from both the existing Gran Sasso Laboratory complex and the Laboratory extension will be carried in a pipe located in the new emergency tunnel to the treatment plant.
C. Other discussions.
Impacts of the existing Gran Sasso Laboratory and potential impacts of the proposed Laboratory extension are not completely understood by the local population.
Conclusions and recommendations
A. The location of the proposed laboratory extension is within a geohydrological compartment characterized by low permeability. The expected potential impact of the proposed work on ground water quantity will be negligible.
B. The new water treatment plant, included in the proposed laboratory extension located near the west portals, will improve the level of environmental protection against accidental spills in both the existing laboratory facilities and in the proposed extension. Treatment for all waste water will be provided. The new treatment plant should include a holding tank of adequate capacity so that, if monitoring indicates a pollution hazard, all polluted discharges can be retained until treated.
C. Adequate safeguards should be adopted to ensure that no spills or leaks of hazardous liquids could escape the Laboratory prior to operation of the new water treatment plant.
D. Communications between the Laboratory and people in the Gran Sasso area need to be established so that people are given accurate and timely information as to the purpose and activities of the Gran Sasso Laboratory complex and the proposed Laboratory extension, and about the potential effects of the laboratory facilities on water resources of the Gran Sasso.
TOPIC 2 - SUSTAINABLE DEVELOPMENT IN THE MEDITERRANEAN REGION
Introduction
Discussions were held in the Enrico Fermi Institute on 6 and 7 June with scientists representing academic institutions in Sicily, Italian governmental agencies, and members of WFS Permanent Monitoring Panels (PMPs). Topics included:
Water and environmental protection.
Floods and extreme events.
Soil, desertification, and remote sensing.
Seismology, hazards and risk, supervision and management.
It was pointed out that problems range from augmenting domestic water supply, modernizing hydrometeorological services, training in modern technology, improved communications, flood control, and water quality improvement to the impact of climate change.
Discussions
During the afternoon of 7 June, a separate meeting was held to discuss topics of interest to the PMPs on Floods and Unexpected Meteorological Events, Climate Change, and Pollution. A group representing the Columbia University Lamont-Doherty laboratories, concerned with the PMP on seismicity, was also present. Those attending the meeting included:
Giuseppe Aronica, University of Messina, Sicily.
Kathleen Boyer, Columbia University, USA.
Robert A. Clark, University of Arizona, USA.
Arthur Lerner-Lam, Columbia University, USA.
Slobodan Nickovic, University of Malta, Malta.
Margaret S. Petersen, University of Arizona, USA.
Richard Ragaini, Lawrence Livermore Laboratory, USA.
Luca Rossi, Department of Civil Protection, Rome, Italy.
Leonardo Seeber, Columbia University, USA.
William A. Sprigg, University of Arizona, USA.
Recommendations
It was proposed that a meeting of the various PMPs be held on 19 August 2003 in Erice with concerned Italian scientists to discuss possible activities by the PMPs in their specific areas of expertise. The Italian group, headed by Giuseppe Aronica, will develop a list of potential projects to be considered by the various PMPs.
POLLUTION PERMANENT MONITORING PANEL - 2003 REPORT
DR. RICHARD C. RAGAINI
Department of Environmental Protection, University of California, Lawrence Livermore National Laboratory, Livermore, CA, USA

The continuing environmental pollution of the earth and the degradation of its natural resources constitute one of the most significant planetary emergencies today. This emergency is so overwhelming and encompassing that it requires the greatest possible international East-West and North-South co-operation to implement effective ongoing remedies. It is useful to itemize the environmental issues addressed by this PMP, since several PMPs are dealing with various overlapping environmental issues. The Pollution PMP is addressing the following environmental emergencies:
Degradation of surface water and ground water quality
Degradation of marine and freshwater ecosystems
Degradation of urban air quality in mega-cities
Impact of air pollution on ecosystems
Other environmental emergencies, including global pollution, water quantity issues, ozone depletion and the greenhouse effect, are being addressed by other PMPs. The Pollution PMP coordinates its activities with other relevant PMPs as appropriate. Furthermore, the PMP will provide an informal channel for experts to exchange views and make recommendations regarding environmental pollution.
PRIORITIES IN DEALING WITH THE ENVIRONMENTAL EMERGENCIES
The PMP on Pollution monitors the following priority issues:
Clean-up of existing surface and sub-surface soil and ground-water supplies from industrial and municipal waste-water pollution, agricultural run-off, and military operations
Reduction of existing air pollution and resultant health and ecosystem impacts from long-range transport of pollutants and trans-boundary pollution
Prevention and/or minimization of future air and water pollution
Training of scientists and engineers from developing countries to identify, monitor and clean up soil, water and air pollution
ATTENDEES
The scientists listed below attended the August 2002 Pollution PMP meeting:
Chairman Dr. Richard C. Ragaini, Lawrence Livermore National Laboratory, USA
Dr. Lorne G. Everett, University of California at Santa Barbara, USA
Prof. Vittorio Ragaini, University of Milan, Italy
Dr. Andy Tompson, Lawrence Livermore National Laboratory, USA
Prof. Joseph Chahoud, University of Bologna, Italy
Prof. Ilkay Salihoglu, Middle East Technical University, Ankara, Turkey
Prof. Sergio Martellucci, University of Rome, Italy
HISTORICAL AREAS OF EMPHASIS OF THE POLLUTION PMP
The following Erice workshops and seminar presentations have been sponsored by the Pollution PMP since its beginning in 1997 in order to highlight global and regional impacts of pollution in developing countries:
1998: Workshop on Impacts of Pharmaceuticals and Disinfectant Byproducts in Sewage Treatment Wastewater Used for Irrigation
1999: Memorandum of Agreement (MOA) between WFS and the U.S. Department of Energy To Conduct Joint Environmental Projects
1999: Seminar Session on Contamination of Groundwater by Hydrocarbons
1999: Workshop on Black Sea Pollution
2000: Seminar Session on Contamination of Groundwater by MTBE
2000: Workshop on Black Sea Pollution by Petroleum Hydrocarbons
2001: Workshop on Caspian Sea Pollution
2001: Seminar Session on Trans-boundary Water Conflicts
2001: Workshop on Water and Air Impacts of Automotive Emissions in Mega-cities
2002: Seminar Talk on Radioactivity Contamination of Soils and Groundwater
2002: Seminar Talk on Environmental Security in the Middle East and Central Asia
2003: Seminar Session on Water Management Issues in the Middle East
2003: Workshop on Monitoring and Stewardship of Legacy Nuclear and Hazardous Waste Sites
POLLUTION PMP ACTIVITIES DURING 2003
On June 3-5, Richard Ragaini attended meetings at Gran Sasso and in Rome to discuss environmental problems at the Gran Sasso Laboratory, specifically the potential impact of the proposed Laboratory extension on the water quality and water quantity resources of the local area. The general conclusion was that the expected potential impact of the proposed work on ground water quantity will be negligible. In addition, the water treatment plant included in the proposed Laboratory extension will be able to detect, divert, retain and treat any future hazardous discharges. The results of these meetings are discussed in detail by Bob Clark in his PMP report.
On June 6-7, Richard Ragaini attended planning meetings in Rome to discuss environmental problems in Italy and in Sicily. From these meetings came the proposal to establish a joint WFS Regional Resources Commission for Sicily, which held a meeting in Erice just prior to the International Seminars.
POLLUTION PMP ORGANIZED ACTIVITIES AT THE AUGUST 2003 ERICE MEETING
Workshop on "Monitoring and Stewardship of Legacy Nuclear and Hazardous Waste Sites"
Session on "Water Conflicts in the Middle East"
Participation in meeting on the WFS Regional Resources Commission for Sicily
Workshop on "Monitoring and Stewardship of Legacy Nuclear and Hazardous Waste Sites"
A two-day workshop on the long-term stewardship of radioactive and chemical contamination of soils and groundwater was held following the International Seminars. It included talks on the U.S. program to stabilize and monitor the Department of Energy sites in the U.S. which are contaminated with radioactivity. It also included talks on other international sites, including nuclear sites of the U.S., Soviet Union and France, such as the Nevada Test Site, Semipalatinsk, Mayak, French Polynesia, etc. Four themes were addressed: (1) Contamination, Containment and Control; (2) Monitoring and Sensors; (3) Decision Making and Institutional Performance; and (4) Safety Systems and Institutional Controls. The Workshop Directors were Steven Kowall of the U.S. Idaho National Engineering and Environmental Laboratory, and Lorne Everett of Stone and Webster Co. A final report containing conclusions and recommendations for future governmental actions will be produced.
Session on "Water Conflicts in the Middle East"
A special session was held on water management issues in the Middle East. Trans-boundary water issues are very contentious in the Middle East, as well as in many other regions around the globe. The scientific topics included what is known about the subsurface water aquifers in the Middle East, the availability of surface waters, irrigation issues, issues concerning recharging the Dead Sea, issues concerning the Jordan River, technologies for producing drinking water, and technologies for cleaning up contaminated aquifers. The session organizers were Andy Tompson and Richard Ragaini of Lawrence Livermore National Lab.
Meeting on the WFS Regional Resources Commission for Sicily
In June, Richard Ragaini attended planning meetings in Rome to discuss environmental problems in Italy and in Sicily. From these meetings came the proposal to establish a joint WFS Regional Resources Commission for Sicily, which held a meeting prior to the International Seminars. This Commission will be made up of representatives from regional universities, from regional civil agencies and from various WFS PMPs, organized into several task forces. Members of the Pollution PMP, along with members of the PMPs on Water, Floods, Climate and Extreme Weather Events, are members of Task Force A, which is considering several potential problems, including:
A. Meteorology and hydrology.
Flash flood forecasting for small basins.
Weather radar and automatic remotely-sensed precipitation gages.
Pilot project for local flash flood protection in a small catchment.
B. Landslides.
C. Ground water pollution and remediation.
D. Coastal zone problems, including sewage, chemical and oil spills, heavy metals, coastal erosion, and algal blooms.
E. Soil salinity and saline agriculture.
F. Increased water supply for drought-affected areas of central Sicily, including water-harvesting techniques.
G. Establishment of a one-year graduate-level certificate program at one (or more) Sicilian universities for Ministry personnel, including engineers, water resources planners, environmental specialists, etc., to foster interdisciplinary resource development and to improve communications among academicians and practicing professionals.
REPORT OF THE ENERGY PERMANENT MONITORING PANEL
RICHARD WILSON
Mallinckrodt Research Professor of Physics, Harvard University, Boston, USA

INTRODUCTION AND SUMMARY
Last year the PMP agreed on a focus or theme for studies by PMP members in the year 2002/2003: "Lack of energy in developing countries and regions is a Planetary Emergency". One of the basic references is a report of the International Energy Agency (IEA) on "Energy and Poverty", part of World Energy Outlook 2002. We do not believe that we have exhausted this topic, so we will continue this theme for next year. This report will discuss first the progress and discussions that have taken place, followed by some discussion of other matters. There were many e-mail exchanges and reports presented at the PMP meeting in Erice on August 19th, at the plenary meeting on August 22nd, and over many Sicilian meals and bottles of wine. I will take my task as Chairman to include an attempt to put all these thoughts into perspective, and that will inevitably be biased by my own views. However, I have started a web page for the PMP, now accessible at http://enerwvmu.org, on which the original reports to the PMP or the main conference, papers, references and comments may be posted.
Most religions insist that the rich help the poor because it is the right thing to do; others argue that it is pragmatically the correct way to achieve prosperity oneself. The PMP merely addresses how and when to do so. One of the problems we faced first is that the PMP members are almost all from rich countries. Yet the needs of the poor countries are (probably) best understood by representatives of the poor countries. For that reason, we tried to get scientists from these poor countries to join us, both at the special PMP meeting on August 19th and for a plenary session on August 22nd. In this we had limited success. It was our colleague P.K. Iyengar from India who proposed our focus and theme. But he has had a recurrence of heart problems and his physician tells him not to travel. The Chairman tried to find a good substitute, but the substitute was out of India and the Italian consulate in the US told him to apply for his visa in India! Hopefully this ridiculous behaviour of the Italian visa authorities will change. Jose Goldemberg from Brazil is a leader in thinking about helping the third world. He was at the International School on Energetics on "Energy Demand and Efficient Use" in Erice in 1980 and would have been delighted to return. But (so he told me) his visit to Erice, where he lectured on "Energy Problems in the Third World", persuaded him to go into politics, and his political duties prevent him from coming. (Note that his lecture is available in the Erice volume of the school published in 1991.) Daniel Kammen from the University of California at Berkeley has done a lot of work in helping developing countries, particularly Kenya, to use renewables. His experience and wisdom on how to surmount the traps and pitfalls in working with developing peoples would have been invaluable to us. But a high fever that developed at the last moment prevented him from coming. We are very fortunate to have with us Dr. Hisham Khatib from Amman in Jordan, who is Honorary Vice President of the World Energy Council. Yesterday, August 22nd, he gave us a fine talk on the subject.
I had invited Dr. Xiao Dadi of China, because of China's remarkable success in energy development. He had to cancel at the last moment, but Dr. Mark Levine of Lawrence Berkeley Laboratory valiantly stepped into the breach and told us yesterday, August 22nd, of China's remarkable achievements. Dr. Adnan Shihab-Eldin from Kuwait is not from a developing country but is from a third world country. He was also at the School of Energetics in Erice 23 years ago, and would love to come back. Indeed, he talked then on "Energy Needs of the Less Developed Countries". But he is now Research Director of OPEC, and at this moment has special duties which prevent him from coming. I hope that the explicit reason will be made clear to all at the forthcoming OPEC meeting at the beginning of September. In addition to Dr. Mark Levine, two Americans on the PMP have reported to us on end use efficiency: Dr. Arthur Rosenfeld of the California Energy Commission and Dr. William Fulkerson. Their full reports to the PMP meeting are on the website, but some of the message they bring is incorporated below. We also had a report in the August 19th PMP meeting by Joseph Chahoud on Syria's Energy Plans, and one from Dr. Diop of Senegal (who reported for Dr. Vivargent in Dr. Vivargent's regrettable absence). The abstracts are also listed below, with the full reports on the website.
The Chairman cannot resist quoting from memory an American, Benjamin Franklin, who was interested in this subject 225 years ago: "Wherever I have travelled, I find that when men have neither coal, nor wood, nor turf, they live in miserable hovels and have nothing comfortable about them. But when they have an adequate supply of fuel, and the wit to use it wisely, they are well supplied with necessaries and live comfortable lives". I note the two phrases in italics, which would now be phrased: energy supply and end use efficiency. Both are important.
Dr. Khatib told us of the great need for energy, what it has done so far for the third world and what it can continue to do. This emphasized the supply side. Dr. Rosenfeld emphasized, with his example of refrigerators, how the US has increased end use efficiency in the last 20 years. Dr. Levine yesterday also emphasized the end use side in the Chinese economy. Dr. Fulkerson went further and set out a model whereby developed countries can take positive action to help developing countries in improving end use. The PMP will certainly be discussing this further over the next year. The discussions have mostly centred around electricity use, because electricity is extraordinarily useful and as such is often the symbol for energy. Over the years, many authors have identified three stages in improving the lives of people by electricity use. Firstly, when there is enough electricity for a 60-watt light bulb (20-watt with a modern efficient one), so that the family does not have to limit activities to daylight hours. Next, when there is enough electricity to allow the use of small hand tools. I wonder whether at this stage we should now include enough electricity to obtain internet access and learn from and teach the world. Thirdly, when there is enough electricity for refrigerators, electric stoves, heaters, television sets "and all that jazz". There are in the world 1.6 billion people without electricity. This is half the Indian population. No wonder Iyengar is interested! Figure 1 (figure 13.5 from the IEA Energy and Poverty report) describes a link between Poverty and Electricity Access.
Electricity access seems to be related to the fraction of the population that lives on less than $2 per day. A physicist knows how to draw a straight line on such a graph and link the two. But even physicists realize that they don't know which comes first - the chicken or the egg. Does poverty cause a lack of electricity access? Or is the lack of electricity a cause of poverty? The PMP does not know, but its members are devoting themselves to trying to help access to electricity, with the full realization that it is only a part of the problem.
ENERGY AND GDP
People have looked at the relationship between Energy and Gross Domestic Product (often called Gross National Product in the USA) for many years. It is important, when discussing developing countries, to realize that Energy means "Commercial Energy" and not the wood and turf (peat) (now called renewables) collected by poorer peoples. Figure 2 shows one such relationship for years up to 1980, with projections beyond, from an IIASA report of the 1980s. The "conventional wisdom" before 1975 was that energy demand was directly proportional to GDP, that Energy Demand would rise steadily with GDP, and that a failure to meet this demand would result in a failure of GDP to rise. Indeed, in the 1960s, President John F. Kennedy called for a cheap energy policy to help developing countries. Oil prices and electricity prices were falling and were expected to go on falling. Few people (I was a crazy exception) would make investments in end use efficiency, even with a 5-year estimated pay-back, when in 5 years the price would go down. But in 1975 this changed and end use efficiency became important. The emphasis is now on the relationship of the ratio E/GDP with time or with GDP. It was higher in developed countries and fell with time, with a plateau in the 1960s. It has fallen in the US since then, largely due to energy efficiency improvements, some of which were discussed by Rosenfeld and Levine. This is reflected in the IIASA projections. The Chairman has not seen this curve with the more recent data superimposed. One notable point made by IIASA is that the USSR (line II SU/EE on the graph) had values of E/GDP which were 1.5 to 2 times those of the USA or Western Europe. This was widely attributed to a failure of "Centrally Controlled Economies" as opposed to "Market Oriented Economies". China seemed to be going along this path. The Chairman notes that the improvement in E/GDP in China since 1980 is still a result of "Central Control" - but a more intelligent "Central Control". Those who fear the "Big Brother" of George Orwell suggest that the mechanism by which China achieved this outstanding success may not be universally applicable.
HOW CAN THE WORLD HELP?
How can the world help the developing countries and regions to develop, and in particular to make electricity and energy universally available? Can such help enable the developing countries to keep the E/GDP low and avoid the path taken by developed countries where E/GDP was high? Fulkerson, Levine and Rosenfeld insist that if we can do this, we, and they, save money AND limit adverse environmental impact. Without taking a poll of the PMP, it seems that individual members are in full agreement. This is the point in the argument where the Chairman regrets most the absence of P.K. Iyengar and others in third world countries. P.K. briefly described his views in e-mails. The Indian Government notes the apparent connection between poverty and availability of electricity and wants to expand the availability and quantity of electricity. The statistics are interesting. In 1947, when British rule ended, India generated only 5,000 MWe. In 2003, India generates about 100,000 MWe. This is still only a few per cent of US use, with a much larger population, and amounts to only about 100 We per person (a rough per-capita check is sketched at the end of this section). P.K. argues that a technical base exists in India for a rapid expansion of electricity production. There are experienced personnel for operation. Most hardware is now made in India. But, so P.K. says, capital formation is the limit. What does this mean? The phrase is used in many ways. We believe that it means that India has no way of charging enough for electricity to get back the initial capital investment. There are political constraints. India presently generates 35% of its electricity from hydropower, and plans a 70,000 MWe increase in 10 years. They have plenty of coal and lignite, but this is of poor quality. This suggests to us that help for efficient electricity generation using conversion to gas with combined cycle generation may be important. P.K. Iyengar notes that the Indian nuclear power programme is now "mature". But he grumbles at the lack of help from the IAEA. He suggests (predicts) that in the absence of competition from the "west", Russia and China will dominate the nuclear power market. The non-NPT countries (India, Pakistan and Israel) are not helped by the weapons states to develop nuclear energy. P.K. also comments that there is too much attention to safety, to non-proliferation and so on. Such complaints have also been made, particularly of over-regulation, in the west. The Chairman expects this lack of help will continue in view of the strong opinions about WMD from the developed world, and in particular the weapons states.
Dan Kammen, who as noted above could not be present, is a well known proponent of the view that developing countries can be taught to use renewable sources of energy more effectively than in the past. He has shown that people can be taught to use solar ovens for cooking, albeit with some problems of acceptance, and also suggests that photovoltaic (PV) electricity can be a vital source, particularly to generate that first 60 (20) watts that is so essential. PV can continue to expand, certainly until an electricity grid arrives. Kammen's points tend to be forgotten when E/GDP is plotted: E usually means "commercial energy" and leaves out wood and turf, and renewables used at the local level.
The Chairman felt that there was a general consensus in the PMP to go beyond the traditional aid that the World Bank and US AID have given to developing countries in massive generation projects - of which the Three Gorges dam in China is perhaps the most obvious example (although not funded by the World Bank). Generation efficiency, use of renewables and end use efficiency must also be brought into play. This is harder. Unlike a big dam, it cannot just be put in position by a few engineers from "above", whether foreigners or experts from the same nation. It needs many people at all levels in the developing countries who are educated in these matters. Mark Levine pointed this out in his talk, and is proud of the education LBL has given and continues to give to those from developing countries. As the PMP develops its ideas further, this may well be a point on which an "Erice Statement" and perhaps an "Erice Conference" would be helpful.
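As a rough check on the per-capita figure quoted above (a back-of-envelope sketch; India's 2003 population of about 1.05 billion is an outside number, not taken from this report):
\[
\frac{100{,}000\ \mathrm{MWe}}{1.05\times10^{9}\ \mathrm{persons}}
  = \frac{1.0\times10^{11}\ \mathrm{W}}{1.05\times10^{9}\ \mathrm{persons}}
  \approx 95\ \mathrm{W\ per\ person},
\]
in agreement with the "about 100 We per person" cited. For comparison, applying the same arithmetic to the United States - roughly 9 x 10^5 MWe of capacity for 2.9 x 10^8 people, again outside figures - gives on the order of 3 kW per person, which supports the "few per cent" comparison in the text.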
THREE LOGICAL PROBLEMS
In a setting where Dirac and Wigner have expressed their views, it seems desirable that attempts be made to relate one's recommendations to fundamental
principles. Mathematicians might insist that one distinguish clearly between dependent and independent variables. Economists might argue that energy is only an "intermediate good". Indeed, Admiral Zumwalt declared in 1973, in words apposite in 2003, that the purpose of nuclear energy is to power US nuclear aircraft carriers and submarines to defend the oil supply lines from the Middle East! Adnan Shihab-Eldin pointed out in a comment circulated before the meeting (and now posted on the website) that automatically calling expanded oil use bad is illogical. With carbon sequestration, what appears to be bad may become good. Without clarity of understanding we might not have a clear, sustainable policy.
Cost Benefit Analysis (and Risk Benefit Analysis) can be used easily to make decisions on the most cost-effective strategy with a given technology. It is far harder to use cost benefit analysis to discuss the benefits of a new, unknown technology. It is, in the US, conventional to talk about a "Market Economy" with little realization of what that means. A "Market Economy" is supposed to address the problem by providing an incentive for an entrepreneur to make money by inventing a new technology, patenting it, and reaping millions of dollars. There are many examples where this has proved inadequate and government action has been used in what is called "technology forcing". Just to show Arthur Rosenfeld that he is not alone in requesting government action to persuade people to make money by using energy efficiently, the Chairman lists two other situations: (i) It has been clear since 1910 that the benefit of X rays in medical diagnosis far exceeds the hazards. Few people took care to reduce the hazard, even though it was possible, and even easy. The dose for a chest X ray 50 years later was still 900 millirem, but was then reduced, by enactment and enforcement of standards, within a few years by a factor of 100. (ii) Although the fuel efficiency of automobiles can save the buyer money, the improvement in miles per gallon only came about with the adoption of a complex system of CAFE standards federally enforced upon the manufacturer.
The use of a "Market Economy" to get the most "bang for the buck" cannot work to get the best policy if: (1) externalities are not included; (2) consumers have inadequate information (including price); (3) societies do not find it politically possible to charge full price. There is inadequate agreement on costing those externalities which have been and will be extensively discussed at Erice, such as global warming, pollution, energy resource exhaustion, nuclear proliferation, waste generation and disposal, etc. Societies have struggled to make sensible decisions when the full analytic procedure including externalities has not been accepted. Should one, in the (supposed) spirit of Dirac and Wigner, try to logically relate these paths? The PMP has not yet addressed these problems in the formal way that the Chairman and at least one other member would wish. They remain problems for next year, when the Chairman intends to hold the noses of the PMP to the grindstone.
OTHER ENERGY (FUEL) SOURCES
While it is not directly connected with help to developing countries, we had reports on the status of fusion power. It was agreed that development is far off - 50 years at minimum - but the reports interested the PMP in the context of R and D funding, noted below. Similarly, Dr.
Bob van der Zwaan talked about nuclear energy, and the conclusions that seem important for the theme of the PMP are the following:
There is adequate fuel supply for the once-through cycle for the foreseeable future (50 years). Recycle in a light water reactor is more expensive than once-through. A breeder reactor is more expensive still. While development of a breeder reactor might be a sensible long-term approach for a developed country, it seems to have no place in the plans for a developing one.
R AND D FUNDING
The PMP completely agreed with the report of one of its members, Dr. Bruce Stram, that the scientific community has failed to communicate effectively an appropriate valuation of energy R and D. But the PMP as a whole has not discussed what, if anything, to do about it. For example, how does one compare short-term needs and long-term needs? Fusion is an extreme example. Whether it will ever work or be economical is unclear. If it works, it is extraordinarily attractive. But it has no short-term potential. It cannot help developing countries (yet). But individual PMP members feel it is underfunded, particularly in the US. Adequate (international) funding for fusion might include: enough funds to build ITER, and enough funds to keep one smaller machine (JET) going till ITER is finished.
APPENDICES
Appendix 1: Summary of the fusion session in the Energy PMP, Erice, 19 August 2003.
Prof. Palumbo, who was the director of the EU fusion programme for over 25 years, presented his latest developments on determining optimised magnetic configurations. Starting from first principles, he demonstrates that possibly only one magnetic configuration is able to stably confine a plasma. The work needs further development, and if the first findings can be confirmed, experiments could be initiated to verify the proposal.
Jef Ongena summarized recent progress in magnetic fusion research in Europe, mainly on JET (Oxford, UK), in preparation for ITER, the next-step device after JET. He also summarized the present status of the ITER negotiations.
1. JET has recently made several important steps towards ITER.
A. The baseline operational mode (the so-called ELMy H-Mode) has been further optimized by adapting the magnetic configuration towards higher triangularity. This results in a drastic increase in the plasma density (30%) without losing - quite on the contrary, while simultaneously further increasing - the confinement time (10%). Also for the so-called advanced operational modes, progress has been obtained by optimizing the temperature profile (maximum closer to the optimal burn temperature, i.e. 200 million degrees, accompanied by an increase in size of the high temperature zone) and a further increase in the density. Both in the ELMy H-Mode and in the advanced modes, this has led to a drastic increase in the fusion reactivity of the plasmas obtained (a rough scaling estimate is sketched at the end of this appendix).
B. Mitigation of heat loads on the first wall by creating a radiating boundary with well-dosed injection of impurity gas (Ar). This mimics the chromosphere of the sun: a hot centre surrounded by a colder plasma edge. The reduction of the temperature at the edge leads to a drastic lowering of the first-wall temperature in JET (from 1000°C to 200°C) and will lead to a reduction of wall damage due to erosion, sputtering and sublimation.
C. Increasing the pulse length of the fusion pulses. In JET, we have been able to run pulses in the divertor configuration up to 50 s long. The divertor configuration is an elongated elliptical plasma cross-section with open field lines at the plasma edge, in order to pump away impurities, and is the configuration foreseen for ITER. These JET pulses are the longest divertor discharges ever produced in a tokamak, and there is potential for even longer pulses in this configuration, well over a minute. This will allow the effect of long-time wall and plasma constants to be studied, in preparation for ITER.
2. On Tore Supra, a French tokamak with superconducting coils, pulse lengths of up to 4 min 25 s have been obtained in the limiter configuration (circular plasma cross-section). This is nearly half the pulse length foreseen for the ITER starting phase (500 s). It has been obtained by applying non-inductive plasma generation by means of the Lower Hybrid Heating System (GHz e.m. waves) and by spontaneous generation of plasma current through the so-called 'bootstrap' current.
3. Status of ITER negotiations and plans. The ITER collaborative effort has recently been extended from the initial 4 to 7 partners: Europe, Japan, Canada, Russian Federation, China, South Korea and USA. There are at present 4 sites proposed for ITER: Canada (Clarington near Toronto), France (Cadarache near Marseille), Spain (Vandellos near Barcelona) and Japan (Rokkasho in northern Japan). All sites have been assessed by a specialized team, and the final international decision on siting is expected in the first half of 2004. Once this decision is taken, the construction of ITER will start, foreseen to take about 10 years; first plasmas on ITER are thus to be expected in 2014. JET can play an important role in optimising and accelerating the high-performance phase of ITER. In addition, JET would allow (i) maintaining the advanced know-how in plasma physics needed to run a large tokamak like ITER efficiently, and (ii) preparing a young and well-experienced international team ready to start ITER operations.
Prof. Miyahara (former director of the National Institute for Fusion Science, Nagoya, Japan) gave an overview of progress on fusion in Japan and the position of Japan with respect to ITER. His conclusions are as follows:
1. ITER is a nice project, situated between the large tokamaks (TFTR in the USA, JT-60 in Japan and JET in Europe) and a real thermonuclear reactor. However, it requires large budgetary allocations in Japan to cover the cost of building the device and of plasma operations. This results in reduced budgetary attention for other important work in fusion, such as the study of helical systems and the behaviour of tritium in fusion reactor materials. Prof. Miyahara expresses his worries that this budgetary conflict will introduce serious difficulties for future sound development towards fusion reactors.
2. On the theory side, there are very important recent developments in the understanding of plasma physics, as documented by Dr. K. Itoh et al. in their review paper "Theory of Plasma Turbulence and Structural Formation - Nonlinearity and Statistical View", J. Plasma Fusion Res., Vol. 79, No. 6 (2003), pp. 608-624. In their opinion, the subjects described in this article are useful for ITER operations, and the progress in the understanding of turbulence and the formation of turbulent structures in plasmas illustrates the advancement of plasma physics as an important branch of modern physics.
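As a rough orientation for the plasma parameters quoted in this appendix (a burn temperature of about 200 million degrees, rising densities and confinement times), the standard figure of merit is the fusion triple product n·T·τ. The sketch below uses the commonly quoted order-of-magnitude ignition threshold for a D-T plasma; the sample density and confinement time are invented for illustration and are not JET results.

```python
# Rough triple-product check for an ignition-grade D-T plasma.
# The 3e21 keV·s/m^3 threshold is the commonly quoted order of magnitude;
# the sample plasma parameters below are illustrative, not JET data.

BOLTZMANN_KEV = 8.617e-8  # keV per kelvin

def triple_product(density_m3, temp_kelvin, confinement_s):
    """n*T*tau in keV·s/m^3 for the given plasma parameters."""
    return density_m3 * temp_kelvin * BOLTZMANN_KEV * confinement_s

n, T, tau = 1e20, 200e6, 2.0  # m^-3, kelvin (~17 keV), seconds -- assumed values
ntt = triple_product(n, T, tau)
print(f"n*T*tau = {ntt:.2e} keV·s/m^3; ignition ~ 3e21 -> ratio {ntt / 3e21:.2f}")
```

At 200 million degrees (about 17 keV), a density of 10^20 m^-3 with a two-second confinement time would sit near the ignition threshold, which is why the density and confinement gains reported for JET matter so much.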
Appendix 2: Energy Situation in West Africa. Dr. Mbareck Diop for Marcel Vivargent
1. General Figures on ECOWAS: The Economic Community of West African States (ECOWAS) is composed of 16 states (Benin, Burkina Faso, Cape Verde, Cote d'Ivoire, Gambia, Ghana, Guinea, Guinea-Bissau, Liberia, Mali, Mauritania, Niger, Nigeria, Senegal, Sierra Leone, Togo). The regional population is estimated at 246.8 million; Nigeria has 123 million. Per capita GDP is $306/yr.
2. Energy Overview: Nigeria is the region's only net energy exporter, and its exports are enough to make the whole region a net exporter. In 2001, the region consumed 1.46 quadrillion Btu (quad) and produced 5.4 quad; Nigeria consumed 0.92 quad and produced 5.49 quad. Commercial energy resources in the region are primarily oil and natural gas, and are concentrated in coastal and offshore regions. Electricity is provided by thermal (58.8%) and hydro (41.2%) plants. Natural gas has the potential to take a more significant role in the region's energy sector as fields in Nigeria, Cote d'Ivoire and Senegal are developed. Due to the region's relatively small urban population (33.9%) and the lack of infrastructure, access to commercial energy sources is limited.
2.1 Petroleum: Nigeria, West Africa's only significant oil producer, lifted 2.1 million barrels per day in 2002 and has reserves of 31.5 billion barrels - 96% of the region's reserves. Smaller reserves are located in the Gulf of Guinea, in the Atlantic (offshore Mauritania and Senegal), and in landlocked Niger.
2.2 Natural Gas: There are significant reserves of natural gas in West Africa. Field discoveries have been confirmed and reserves proven in Benin (43 BCF), Cote d'Ivoire (1.1 TCF), Ghana (840 BCF), Nigeria (124 TCF) and Senegal (106 BCF). West Africa contains approximately 32% of Africa's natural gas reserves. Nigeria lacks gas infrastructure and flares 75% of the gas it produces; it has a $3.8 billion LNG facility on Bonny Island, completed in 1999. The West African Gas Pipeline (WAGP) project is a 630-mile line from Nigeria to Benin, Togo and Ghana. The $500 million WAGP will initially transport 120 mmcf/d of gas to Ghana, Benin and Togo beginning in June 2005. Gas deliveries are expected to increase to 150 mmcf/d in 2007, to 210 mmcf/d in 7 years, and to 400 mmcf/d when the pipeline operates at full capacity 15 years after construction. The pipeline is expected to save $500 million in energy costs for Benin, Ghana and Togo over 20 years (World Bank estimate), will foster associated industrial development, and will permit electric power development ($600 million in development is expected for new and renovated power facilities). The WAGP may be extended to include Cote d'Ivoire and Senegal.
2.3 Electricity: West Africa's total installed electric generating capacity was 9.4 gigawatts (GW) in 2001. Generation was 33.8 billion kWh; Nigeria (14.6 billion kWh), Ghana (8.8), Cote d'Ivoire (3.0) and Senegal (1.4) were the largest consumers. In 2000, 14 ECOWAS members signed an agreement to create a project to boost power supply in the region. The West African Power Pool (WAPP) Agreement reaffirmed the decision to develop energy production facilities and to interconnect their power grids. According to the agreement, WAPP will be accomplished in two phases and completed by 2005.
2.4 Development of River Basins in West Africa:
2.4.1 Niger Basin Authority (NBA): the long-term objective of the NBA is to promote cooperation among the member countries and to ensure integrated development of the basin in all sectors through development of resources, notably in the fields of energy, water, agriculture, livestock, fishing, fish-farming, forestry, transport, communications and industry.
2.4.2 Organization for Development of the River Gambia (OMVG): the objectives of the OMVG are: (i) increasing power generation at competitive costs; (ii) placing emphasis on the development of agriculture; (iii) ensuring optimum management of the natural resources of the three basins (Koliba/Corubal, Kayanga/Geba, and Gambia).
2.4.3 Organization for the Development of the River Senegal (OMVS): the OMVS was mandated to implement an infrastructure programme for regulation of the river, including anti-salt protection, river transport and power generation, and to contribute to integrated sectoral development in the agriculture, transport and health fields in the basin area. The anti-salt dam was built in 1986 at Diama, and the Manantali dam was completed in 1988, with a hydropower station generating, since 2001, 800 GWh/yr shared by Mali (42%), Senegal (33%) and Mauritania (15%).
2.4.4 Conclusion: Globally, West Africa has an important energy potential, but Nigeria is the main producer and exporter of petroleum and gas. The WAGP will be a good step toward a more integrated energy system involving Nigeria, Benin, Togo, Ghana and Senegal. The challenge is to make more interconnections between Nigeria and the other West African countries that suffer from energy shortages. Hydropower is located only in Ghana and the OMVS. High hydro potential exists in Cote d'Ivoire, the Niger Basin and the Gambia Basin, but it needs to be developed and interconnected in the next decade.
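The regional figures quoted above translate into a strikingly low per-capita energy use. A minimal sketch, using standard unit conversions and the population and consumption numbers from the text:

```python
# Per-capita primary energy for ECOWAS from the figures quoted above.
# Conversion factors are standard; population and quads are from the text.

BTU_TO_J = 1055.06
TOE_IN_J = 41.868e9  # one tonne of oil equivalent, in joules

consumption_quad = 1.46   # quadrillion Btu consumed in 2001 (from text)
population = 246.8e6      # ECOWAS population (from text)

joules = consumption_quad * 1e15 * BTU_TO_J
per_capita_gj = joules / population / 1e9
print(f"{per_capita_gj:.1f} GJ/person/yr = {per_capita_gj * 1e9 / TOE_IN_J:.2f} TOE/person/yr")
```

This works out to roughly 0.15 TOE per person per year, an order of magnitude below typical industrialized-country levels, which underlines the point about limited access to commercial energy.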
Appendix 3A: A Global Programme for Energy Efficiency in Developing Countries. Mark D. Levine, Lawrence Berkeley Laboratory
Little progress has been made in the transfer of knowledge and market experience from industrialized countries to improve the energy efficiency of developing countries. This is a serious problem, for a variety of reasons. From the point of view of developing nations, energy efficiency can serve as an engine of industrial modernization. The diversion of capital resources into energy efficiency can enable investment in essential infrastructure and services, while maintaining or even increasing energy services. Such investment in energy efficiency is also a highly cost-effective way of improving the environment.

There are clear benefits to industrialized countries as well. The developing world will dominate energy growth in the foreseeable future, assuming that its economies achieve continued growth (as is widely expected). This means that most of the pressure on global energy resources - with oil of particular concern - will come from the growth of demand in the developing world. It also means that most growth of carbon dioxide and other greenhouse gas emissions will be from the developing world. (IPCC results in the Special Report on Emissions Scenarios suggest that more than 80% of the increase in carbon dioxide emissions will come from developing countries over the next 50 to 100 years, across a wide range of scenarios.)

If such expansion of energy efficiency in developing countries is so desirable (to both the developing and the industrialized world), why does it not just happen? The simple answer is that markets and the associated governmental policy systems work poorly in most developing countries most of the time. As such, there is no way for the private sector in advanced countries, or such as it is in most developing countries, to make a profit from large-scale investments in energy efficiency. Unless a solution to this problem is found, the world will suffer. There will be no way to reduce the growth of greenhouse gas emissions significantly, as energy efficiency is the only affordable way to do this on a massive scale in developing countries; no control of future greenhouse gas emissions is possible until greenhouse-gas-free energy supply is widely available (not soon). Further, in the opinion of the author, improving energy efficiency is essential if the developing world is to advance its economic performance for a long period of time (i.e. sustainably).

This paper presents an approach to addressing this global problem on the scale that it deserves. The proposed programme would substitute energy efficiency for half of the energy demand growth in the developing world. As such, it recognizes the need for continued growth on the supply side in the developing world, while also stressing the major role that energy efficiency must play. The proposed approach involves a major coordinated effort in eight of the ten most energy-consuming developing nations, representing 75% of developing-world energy consumption. This will cost $2B/year, to be allocated to (1) training, project management and evaluation for energy efficiency projects, and (2) institution building, development of prefeasibility studies for energy efficiency projects and programmes, and energy efficiency policy formulation and implementation in developing countries. The approach will create a new international centre of learning and training on energy efficiency with a "student body" of approximately 1000 participants from developing countries. The primary objective of the $2B/year global programme will be to attract $25B/year of private investment for energy efficiency. A programme such as that proposed is essential for sustainable economic development in developing countries and for control of greenhouse gas emissions in the coming decades. A small programme - just $2B per year - if implemented properly, has the potential to make a large difference in addressing these problems.
Appendix 3B: Sustainable, Efficient Electricity Service for One Billion People. William Fulkerson and Mark Levine
Our purpose in this paper is to examine how electricity services can be brought to one billion people who currently have no access to such services. We postulate a 20-year goal, and further we require that the electricity should be sustainable with
respect to climate, in order to attract support from the developed world. We try to answer the questions: What is needed? How much will it cost? Who might pay? How important is efficiency? We estimate that the customers for electricity will require of the order of 0.025 kW/person, or about 220 kWh/person/year, if end-use technology is efficient. We assume the developed world might be willing to pay the extra cost of sustainable generation and the extra cost of efficient end-use technologies compared to least-first-cost technologies. We assume that sustainable electric generation will cost of the order of $1000/kW more than non-sustainable generation. Further, we assume that the extra cost of efficient end-use technology will be paid back in 2.5 years on average by the cost of the electricity saved. The developed world would need to spend about $83/person, broken down into $50 for sustainable generation and $33 for efficient end-use technology. The total cost to provide electricity to the un-electrified 1 billion would be $83 billion spread over 20 years, or $4.15B/y for 20 years, plus an estimated 15% more for training and institution development. This cost is about $12 billion less than for a system with sustainable generation but inefficient end-use technology. We suggest that these incremental costs of sustainable electricity be borne by four equal partners: the United States, the European Union, Japan and OPEC. Each partner would pay $1.19B/y. Consumers of the electricity would pay on average approximately $14/person per year for electricity, plus about $15/person per year for end-use technology at least first cost. This would provide a family of 6 with refrigeration, lighting, communications, TV and services for small motors like fans and sewing machines. There are serious questions involving this scheme. Can a utility or electricity service organization make money on this subsidized system? Can a poor rural family afford $29/person per year? Can a consortium of partners be persuaded to pay for sustainable service? Can sustainability be maintained for a long period of time? The authors suggest it is worth finding out the answers to these questions.
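The cost arithmetic of this appendix can be reproduced from the stated assumptions. One reading that makes the numbers consistent is that the $50/person generation figure corresponds to roughly 0.05 kW of installed capacity per person (i.e., a capacity factor near 50%); that reading is ours, not the authors':

```python
# Reproducing the cost arithmetic of Appendix 3B from the figures in the text.
# The implied ~50% capacity factor is our reading, not stated by the authors.

HOURS_PER_YEAR = 8760

avg_load_kw = 0.025                        # kW per person (from text)
print(f"energy: {avg_load_kw * HOURS_PER_YEAR:.0f} kWh/person/yr (text: ~220)")

extra_gen_cost = 0.025 / 0.5 * 1000        # $/person at $1000/kW extra, CF ~0.5 (assumed)
extra_enduse_cost = 33                     # $/person (from text)
per_person = extra_gen_cost + extra_enduse_cost
total_billion = per_person * 1e9 / 1e9     # billion $ for one billion people
annual = total_billion / 20                # spread over the 20-year goal
print(f"${per_person:.0f}/person -> ${total_billion:.0f}B total -> ${annual:.2f}B/yr")
print(f"with 15% for training, split four ways: ${annual * 1.15 / 4:.2f}B/yr per partner")
```

Running this recovers the ~220 kWh/person/year, the $4.15B/y programme cost, and the $1.19B/y contribution per partner quoted above.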
Appendix 4: Status of the Hydrogen Economy: Does Hydrogen Have a Practical Future as a Transportation Fuel? Carmen Difiglio, Ph.D., International Energy Agency
Mr. Carmen Difiglio showed that transport is largely responsible for world oil demand. Policies aimed at reducing the problems arising from growing oil consumption therefore need to address motor-vehicle transportation. In the short term, policies are needed to improve the efficiency of new vehicles, improve system efficiency and encourage high-occupancy travel. But the inevitable high worldwide growth of motor-vehicle use may eventually require a more sustainable transport system that features near-zero carbon emissions from secure sources of energy. To achieve this, there are now three known approaches: biofuels, electric vehicles and hydrogen-powered vehicles. Biofuels are important but are incapable of being supplied in sufficient quantity to replace petroleum in the transport sector. Past experience with electric vehicles shows that even significantly improved electric vehicles cannot be expected to meet consumer needs. Hydrogen is increasingly seen as the next generation of motor vehicle technology, as evidenced by product development in the motor-vehicle industry and major new government programmes in the US, Japan and the European Union. Difiglio outlined the energy use and carbon emissions of several motor-vehicle and fuel technologies, including hydrogen, electric, biofuel, hybrid and conventional
vehicles. He also provided 2020 cost estimates for several alternative technologies that can be used to produce hydrogen without CO2 emissions, including gas and coal with carbon sequestration, several renewable technologies, and nuclear power. Difiglio showed, using expected future cost estimates, that hybrid and fuel cell vehicles would be a costly way to reduce CO2 emissions - two orders of magnitude higher than the economic incentives emerging from the Kyoto process. Several challenges facing a transition to hydrogen were outlined, including the needed technology development on fuel cells, on-board hydrogen storage and hydrogen production approaches. Difiglio suggested that it would be difficult to supply the substantial quantities of hydrogen needed to displace a significant percentage of transport oil before 2050 unless carbon sequestration is applied on a large scale, since only fossil fuels could achieve this at a reasonable cost. Any feasible increase in renewable or nuclear electricity before 2050 would be best used to reduce CO2 emissions in the power sector. Cogeneration of hydrogen in a high-temperature gas reactor (HTGR) was shown to be a promising but uncertain technology. There would also be a difficult transition period in which there would be insufficient hydrogen refuelling available to inspire consumer confidence, and insufficient hydrogen vehicles to make the investment in hydrogen refuelling equipment a reasonable business proposition. Substantial government intervention over a long period would be required to overcome this and other transition barriers. Nonetheless, increasing concern over global climate change could require that the future energy economy achieve extremely low net CO2 emissions. Widespread hydrogen use might be the only practical way to achieve this in the transport sector.

Appendix 5: Syria: Renewable Energy Master Plan 2001-2011. Joseph Chahoud
A secure and reliable supply of energy to the different sectors of the economy is one of the main concerns of the government of Syria, which is aware of the finite and limited conventional resources available. For Syria to move towards greater sustainability, future energy developments must reduce expected GHG emissions; some reduction may be achieved through the use of renewable energy technologies. To this end, a Plan has been prepared in order to induce an increasing contribution from renewable energy sources in the national overall energy balance, thereby reducing dependence on fossil fuels and leading to environmentally sound and sustainable development. The Plan has two main components: an energy development programme and accompanying policy measures. Energy development is predicated on a series of proposals referring to specific renewable energy technologies that fall into six categories: solar thermal, photovoltaic, wind, biomass, hydro, and hybrid systems. The accompanying policy measures are recommendations for Syrian institutions, including the elimination of barriers to renewable energy such as subsidies to the conventional energy sector; thus Syria will be more open to private investment in renewable energy. R&D, pilot projects and bankable projects are the three phases of the Plan, depending on the level of maturity and commercialisation of the respective technologies. The R&D programme consists of 20 components, most of which focus on application systems, the others on system components. The overall cost of the R&D programme is estimated at $11 million, two thirds of which goes to solar energy.
Pilot projects involving 12 renewable energy technologies and systems are proposed, with an overall cost of $90 million, 80% of which will go to biomass projects and 15% to wind. Finally, it is envisaged that 21 energy technologies and systems will reach the commercial stage during the period. The financial resources required for this stage will be about $1.36 billion: 44% for wind, 22% for biomass, 18% for solar, 13% for hybrid, and only 3% for mini-hydro schemes, which have minimal environmental impact. By the end of the period of the Master Plan, the contribution of renewable energy technologies is estimated at more than 1 million TOE, 4% of the total primary energy demand of Syria. Economic analysis, based on the life cycle cost of each project as compared to similar costs for corresponding baseline technologies, suggests that the renewable technologies will have an economic advantage as well as environmental and social benefits.
SYRIA'S RENEWABLE ENERGY MASTER PLAN: A MESSAGE FROM THE GOVERNMENT
JOSEPH CHAHOUD
Physics Department, University of Bologna, Bologna, Italy

Syria enjoys both conventional and renewable energy resources. Currently, oil makes the largest contribution to the primary energy supply, followed by gaseous fuels. As a result of national strategy, the share of oil in the energy mix has been declining steadily in favour of gaseous fuels (for example, the share of oil in electricity generation in Syria is, as of the year 1998, only 40%, while natural gas contributes about 45%). GHG emissions are, in terms of GDP, higher than the regional and global levels, and are projected to increase. For Syria to move towards greater sustainability, future energy developments must reduce expected GHG emissions, some of which may be achieved through the use of renewable energy technologies. A secure and reliable supply of energy to the different sectors of the economy is one of the main concerns of the Government, which is aware of the finite and limited conventional energy resources. Hence, the Government has had to adopt strategies and plans focusing on:
- Energy efficiency;
- Linking grids with neighbouring countries;
- Enhancement of the utilization of renewable energy resources;
- Institutional restructuring of the renewable energy sector.
To achieve this purpose, a Renewable Energy Master Plan has been developed by the Ministry of Electricity under the guidance of the United Nations Department of Economic and Social Affairs (UNDESA), with local coordination undertaken by the United Nations Development Programme (UNDP) in Damascus. Subsequently, by mid-June 2003, and in order to implement this Master Plan, the Syrian Government enacted legislation establishing the National Centre for Energy Studies and Research. The strategic role of the Centre should involve renewable energy, energy efficiency and integrated resource planning programmes. It is the hope of the Government of Syria that the Master Plan will be of great value and will serve as a useful document to be referred to by all sectors in dealing with energy issues. The purpose of the Plan is to induce an increasing contribution of renewable energy sources in the national energy balance, thereby reducing dependence on fossil fuels and leading to environmentally sound and sustainable development.

SUMMARY
This purpose is appropriate in the current global scenario, where interest and investment in renewable energy technologies have grown significantly, and it is quite likely that they will supply an increasing share of global primary energy during the forthcoming decades. Technologies such as photovoltaics are witnessing sustained high annual growth rates, driven by large-scale market development programmes. This development, primarily in Europe and Asia, results in large-scale manufacturing and
component development initiatives, of which Syria can take advantage to boost the development of its small PV industry. Another renewable energy technology that has witnessed large-scale development is wind energy, where very large European developments are driving the market. Solar thermal technologies, especially solar water heating (SWH), are also considered mature, with large markets. Relevant technologies such as solar heating, cooling, drying and electricity generation need to be demonstrated in the Syrian context. The global trend in hydropower is towards developments in the form of mini hydro power plants, whose relative environmental impacts are minimal; such an option can be pursued in Syria through small hydro and canal drop schemes. The biological and thermo-chemical routes for biomass energy conversion could play an important role in the Syrian energy mix and need to be pursued. The Syrian renewable energy industry is in the early stages of development, but capacity exists in SWH, photovoltaics and wind energy. The industry is largely in the public sector. Several barriers currently exist on the path of renewable energy development in Syria: subsidies to the conventional energy sector, dominance of the public sector, limited awareness of the benefits, lack of favourable policies, tariffs and incentives, underdeveloped human resources, inadequate official assistance, and the limited interface of the RD&D institutions. However, the biggest barrier was, until now, the absence of an organisational set-up to act as a driving force with clear responsibility for policy, legislation and regulatory evolution.

The Syria: Renewable Energy Master Plan has two main components:
- Energy development plan;
- Accompanying measures plan.

Energy development plan
The energy development plan consists of a series of proposals and activities referring to specific renewable energy technologies. These fall into six categories:
- Solar thermal;
- Photovoltaic;
- Wind energy;
- Bio-energy;
- Hydro energy;
- Hybrid systems.
The proposed activities began in 2002 and will continue until the end of the MP period in 2011. Depending upon their level of maturity and commercial stature, the renewable energy technologies pass through more than one of the phases of the energy development plan, namely RD&D, Pilot Projects and Bankable Projects. In 2011, the final year of the MP, the contribution of renewable energy technologies is estimated at 1.012 ktoe, which will represent 4,3% of the primary energy demand. The share of the different renewable energy technologies is shown in the following table:
Wind          | 50,23 %
Bio           | 25,84 %
Solar thermal | 16,61 %
Hybrid*       |  3,62 %
Hydro         |  3,41 %
Photovoltaic  |  0,30 %
1. Research, Development and Demonstration
The proposed RD&D programme consists of 20 components to be implemented in the period 2002 to 2011. Most of the tasks focus on the application of renewable energy systems; some, however, focus exclusively on system components.
RD&D Programme | Programme Description | Implementation Period | Cost (US$)
Solar thermal space heating systems | Demand assessment, engineering design study, design validation, testing and demonstration. | 2002-2007 | 300.000
Solar thermal space cooling systems | Demand assessment, engineering design study, design validation, testing and demonstration. | 2003-2005 | 350.000
Solar dryers | Market survey/assessment, prototype development, demonstration, quality control, economic evaluation. | 2002-2006 | 400.000
Domestic hot water systems | Technology & policy review; prototype development and testing; design optimisation; market/cost analysis and industrial development. | 2002-2004 | 450.000
Solar thermal non-domestic hot water systems | Market prospects for hammams, use of concentrating collectors. | 2005-2007 | 200.000
Solar process heat for industries | Field survey, design, prototype testing, demonstration. | 2003-2004 | 1.500.000
Solar thermal absorber R&D | R&D of absorber materials, designs and coating. | 2002-2004 | 250.000
PV pumping for urban water supply | Market survey, technology development, testing & demonstration. | 2004-2008 | 350.000
PV health and education systems | Demonstration of health and education sector systems totalling 8 kWp. | 2003-2004 | 150.000
PV pumping systems | Demonstration of PV pumping systems totalling 45 kWp. | 2004-2006 | 350.000
PV professional applications | Demonstration of professional applications of PV totalling 200 kWp. | 2003-2005 | 2.000.000
Share of RD&D Resources
The share of each of the 20 components of the RD&D programme in the 11.000.000 US$ funding is shown in the table below.

Solar 67% | Hybrids 22% | Bio 5% | Wind 4% | Hydro 2%
Research should be carried out in order to apply the technologies to Syrian conditions or to carry out additional technology developments.

2. Pilot Projects
Some of the renewable energy technologies and systems included in the Master Plan, although proven in other locations, do not yet have any significant track record in Syria. Pilot projects involving these technologies are proposed in order to build confidence and to prepare the industry and financiers for the commercial development stage. Private sector participation is envisaged in several of these pilot projects. The key features of the pilot projects proposed in the Master Plan are:
- Pilot projects involving 12 renewable energy technologies and systems are proposed. The required financial resources will be about 90 million US$ over the 10-year period of the Master Plan.
- Bio-energy technologies, which have had no significant track record in Syria so far, will require the major share of the pilot project resources, followed by wind electricity generation and solar energy.

Pilot Projects: share of resources
Bio-energy 81 % | Wind 15 % | Solar 4 % | Hybrid ---

Pilot Project Plan
Pilot Project | Description | Implementation Period | Cost (US$)
Solar thermal space cooling systems | 50 pilot systems in commercial & office spaces | 2006-2007 | 218.750
Solar process heat for industries | 20 pilot systems in process industries | 2005-2006 | 1.833.333
PV village electrification (*) | 5 to 10 un-electrified villages with a total of 300 households to be provided with stand-alone PV systems; 6 villages with a centralized mini-grid | 2004-2005 | 1.480.000
Solar home systems for Bedouins | 250 Bedouin households to be provided with SHS | 2004-2005 | 46.667
PV-Diesel hybrids | 10 systems to be deployed in selected locations | 2007-2008 | 95.000
PV-wind hybrids | 30 systems to be deployed in selected locations | 2007-2011 | 199.556
Briquetting and gasification | 10 systems to be deployed in agro-processing industries | 2008-2009 | 175.000
Urban solid waste projects | Pilot waste-to-energy plant in Aleppo | 2003 | 19.166.667
Small scale biogas systems | Piloting of 400 small family-size systems | 2007-2008 | 39.728.571
Institutional biogas systems | 8 systems to be implemented in government cattle farms | 2007-2008 | 13.242.857
Grid-connected wind electricity generators (**) | 5 MW wind farm | 2003 | 4.750.000
Stand-alone wind electricity generators | 10 MW of off-grid WEGs piloted | 2004-2005 | 8.750.000
TOTAL | | | 89.762.401
Activities proposed within these pilot projects include:
- Conduct site-specific analyses of physical suitability, verification of solar and wind characteristics, dispersion of the local population, etc., at selected sites;
- Develop a technical specification for product supply, installation and service, and issue calls for tender to local and international suppliers;
- Install 2 systems, inclusive of data logging systems, and monitor performance, reliability and user receptivity over a 2-year period;
- Review existing wind energy assessments and feasibility studies, and prepare a plan for further assessments using the latest technologies such as GIS, SODAR and ultrasonic anemometry;
- Identify the most promising sites;
- Organize a study tour of Syrian experts from various bodies to countries with relevant wind energy development;
- Carry out feasibility studies at the most promising sites (with at least one year of data);
- Involve the private sector and identify resource mobilisation options;
- Develop at least one grid-connected wind farm which demonstrates different technologies (geared, gearless, pitch-regulated, three-bladed, two-bladed, AC-DC-AC conversion systems, asynchronous and synchronous generators) and different unit sizes: 300 kW, 500 kW, 750 kW, 1 MW.
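The budget shares quoted for the pilot phase can be cross-checked against the cost column of the pilot project plan. In the sketch below, the 1.480.000 US$ figure for PV village electrification is our reading of a garbled source figure, chosen because it reproduces the quoted 4% solar share; the category groupings are also our own:

```python
# Category shares of the pilot-project budget, using the costs tabulated above (US$).
# The 1_480_000 reading for PV village electrification is our assumption where the
# source figure is garbled; category groupings follow the share table's headings.

pilot_costs = {
    "solar":  [218_750, 1_833_333, 1_480_000, 46_667],       # cooling, process heat, PV village, SHS
    "hybrid": [95_000, 199_556],                              # PV-diesel, PV-wind
    "bio":    [175_000, 19_166_667, 39_728_571, 13_242_857],  # briquetting, Aleppo plant, small & institutional biogas
    "wind":   [4_750_000, 8_750_000],                         # grid-connected, stand-alone
}
total = sum(sum(costs) for costs in pilot_costs.values())
for category, costs in pilot_costs.items():
    print(f"{category:6s} {sum(costs) / total:6.1%}")  # text quotes bio 81%, wind 15%, solar 4%
```

The computed shares (bio about 81%, wind 15%, solar 4%) match the share-of-resources table, which lends some support to this reading of the table.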
3. Bankable Projects
Over time, several renewable energy technologies and applications will mature to a stage appropriate for further growth and development in a quasi- or fully commercial framework. These are classified here as Bankable Projects. At this stage, little or no financial grants or subsidies are necessary to encourage market expansion, and market development will generally be driven by industry, as opposed to the government-driven RD&D and pilot stages. Nevertheless, government should still play an enabling role by way of fiscal and financial incentives and policies to encourage market development in renewable energy. The key features of the Bankable Projects proposed under the Master Plan are:
- It is envisaged that 21 renewable energy technologies and systems reach a commercial or quasi-commercial stage during the Master Plan period.
- The financial resources required for the development of these technologies will be about 1,36 billion US$ over the 10 years. Wind energy and bio-energy technologies will require the major share of the resources.
- The earliest of the bankable projects start in 2003 and the latest in 2009; the bankable phases should continue beyond the Master Plan horizon.
- Key technologies driving the commercial developments will be wind electric generation, solar systems and biogas systems.
Bankable projects: share of resources
Wind 44 % | Bio-energy 22 % | Solar 18 % | Hybrid 13 % | Hydro 3 %
Bankable Projects Plan
Bankable Project | Description of commercial developments proposed | Start year | Cost (US$)
Solar thermal space heating systems | 1.600 systems in new buildings | 2008 | 6.194.444
Solar thermal space cooling systems | 1.550 systems in hotels, offices and large houses | 2008 | 5.972.222
Solar domestic hot water systems | 300.000 systems in houses and apartment blocks | 2005 | 135.222.222
Solar non-domestic hot water systems | 800 systems in the commercial sector and light industries | 2008 | 4.305.556
Solar dryers | 100 systems in agro-processing industries | 2007 | 2.133.333
Solar process heat | 730 systems in process industries | 2007 | 60.666.667
PV village electrification | 3.700 stand-alone PV systems; 114 mini-grid systems | 2006 | 15.560.000
Solar home systems for Bedouins | 4.750 Bedouin households to be provided with SHS | 2006 | 1.772.000
PV pumping systems | 100 systems for urban water supply; 500 systems for micro-irrigation | 2009 | 5.272.500
PV health and education systems | 500 health & education systems | 2007 | 2.308.333
PV professional applications | 25 drinking water systems; 200 street lighting systems | 2005 | 1.179.167
PV-Diesel hybrids | 40 systems | 2006 | 260.000
Integrated solar combined cycle power plant | Integrated solar field of 30/40 MW in a gas combined cycle power plant totalling 150 MW | 2009 | 180.000.000
Hydro power projects | Small hydro developments totalling 48 MW and 10 canal drop schemes | 2008 | 36.944.444
Urban solid waste projects | 3 projects, possibly in Damascus, Homs and Hama | 2007 | 52.500.000
Small scale biogas systems | 1.900 small biogas systems in farms and households | 2004 | 200.828.571
Institutional biogas systems | 27 large biogas systems in government farms | 2009 | 47.571.429
Wind farms | — | 2009 | —
Stand-alone wind electric generation | 90 MW of off-grid WEGs | 2000 | 67.125.000
Wind pumps | 200 systems in farms and communities | 2006 | 4.172.711
Defrosting wind machines | 300 systems in farms | 2003 | 830.083
TOTAL | | | 1.349.818.683
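The missing wind-farm cost in the bankable projects plan can be inferred as the residual between the stated total and the sum of the legible rows; the sketch below does this and checks the result against the quoted 44% wind share. Both the row pairing and the inference are our reconstruction, not source data:

```python
# Cross-checking the bankable-project table against its stated total (US$).
# Row-to-cost pairing follows the reconstructed table above; the wind-farm
# cost is missing in the source and is inferred here as the residual.

stated_total = 1_349_818_683
known_costs = [
    6_194_444, 5_972_222, 135_222_222, 4_305_556, 2_133_333, 60_666_667,  # solar thermal
    15_560_000, 1_772_000, 5_272_500, 2_308_333, 1_179_167,               # PV
    260_000, 180_000_000,                                                 # hybrids + ISCC
    36_944_444,                                                           # hydro
    52_500_000, 200_828_571, 47_571_429,                                  # bio-energy
    67_125_000, 4_172_711, 830_083,                                       # other wind rows
]
wind_farms = stated_total - sum(known_costs)
print(f"inferred wind-farm cost: {wind_farms:,} US$")

wind_share = (wind_farms + 67_125_000 + 4_172_711 + 830_083) / stated_total
print(f"wind share: {wind_share:.0%} (text: 44%)")  # bio-energy works out near 22%
```

The residual comes out around 519 million US$, and with it the wind category lands on the quoted 44% share (and bio-energy on roughly 22%), which supports the reconstruction.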
ENVIRONMENTAL PROTECTION
Syrian CO2 emissions, in terms of GDP and of energy consumption (per TOE), are higher than the global, continental and Middle Eastern levels. These emissions are also expected to increase considerably if the current energy mix remains predominantly fossil-fuel based. The Renewable Energy Master Plan will not stop or reverse this trend, but it will deliver significant reductions: 2.600.000, 32.400, 17.200 and 26.300 tonnes/year of CO2, SOx, NOx and CO respectively by the year 2011. The shares of the different renewable energy technologies in GHG reduction are shown in the following table.

Wind 50 % | Bio 26 % | Solar 17 % | Hybrid 4 % | Hydro 3 %
The societal costs avoided by these reductions amount to more than 351 million US$. This estimate is based on data from the Stockholm Environmental Institute: 32 $, 10.473 $, 636 $ and 2.194 $ for each tonne of CO2, NOx, CO and SOx respectively. Emerging financing mechanisms such as the Clean Development Mechanism (CDM), Joint Implementation (JI) and Tradable Green Certificates (TGC) could actually help internalise some of these external costs. This could be especially true in the case of CO2 emissions mitigation. However, the prices of Certified Emission Reductions (CER) and Emission Reduction Units (ERU) are likely to be much below the societal costs (the likely prices for CERs are in the range 1-10 $/tonne).
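The 351 million US$ figure follows directly from the reduction tonnages and the unit societal costs just quoted:

```python
# Reproducing the avoided-societal-cost figure quoted above
# (reductions in tonnes/year, unit costs in US$/tonne, both from the text).

reductions = {"CO2": 2_600_000, "SOx": 32_400, "NOx": 17_200, "CO": 26_300}
cost_per_tonne = {"CO2": 32, "SOx": 2_194, "NOx": 10_473, "CO": 636}  # Stockholm Environmental Institute

avoided = sum(reductions[p] * cost_per_tonne[p] for p in reductions)
print(f"avoided societal costs: {avoided / 1e6:.0f} million US$/yr (text: >351 million)")
```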
ECONOMIC ANALYSIS
A detailed economic analysis was carried out for each of the energy technologies considered under the Master Plan. Since a large number of technologies are used to provide heat and power for a variety of applications, no single baseline was found suitable to cover all developments. Therefore, five different baselines were used to analyse the energy system components:
- Diesel water heaters were considered as the baseline for technologies involving the supply of hot water, such as domestic and non-domestic solar water heating systems;
- Electricity was considered as the baseline for solar thermal technologies for space heating and cooling, as well as for industrial process heat;
- Butane gas lamps were considered as the baseline for lighting in rural and decentralised areas; this baseline was also used for solar home systems for the Bedouins;
- Gas-based electricity generation and grid extension was used as the baseline in the analysis of wind electric generation, hydroelectric, hybrid electric, solar thermal electric and waste-to-energy power plants;
- Gasoline generators were used as the baseline for off-grid professional applications of PV, PV pumping, health and education systems, etc.
Internationally accepted values were assumed for the lifetime, capital costs and operating costs of the systems. Inflation was assumed to be 5% and the discount rate was taken as 9%. The cost of electricity and hydrocarbon fuels was taken from current Syrian market prices. The following table gives the details of the economic analysis for each of the technologies.

Energy Technology | Baseline | Life cycle costs, million US$ | Life cycle cost of baseline business, million US$
Solar thermal space heating systems | Electricity | 11,60 | 13,13
Solar thermal space cooling systems | Electricity | 11,60 | 13,13
Solar hot water systems | Diesel water heaters | 253,33 | 868,60
Solar dryers for agriculture | Electricity | 4,00 | 6,01
Industrial process heat | Electricity | 142,09 | 175,25
PV village electrification systems | Gas-based electricity | 33,75 | 59,16
Solar electrification for Bedouins | Butane gas lamps | 3,75 | 4,21
PV pumping systems | Gasoline generation | 9,88 | 15,33
PV professional systems | Gasoline generation | 4,32 | 5,05
PV health & education systems | Gasoline generation | 2,68 | 2,25
Wind electric generation | Gas-based electricity | 1.507,68 | 2.827,86
Wind pumps | Gas-based electricity | 10,32 | 1,46
Defrosting wind machines | Gas-based electricity | 2,05 | 19,96
Hydro-power | Gas-based electricity | 91,38 | 4,34
Hybrid systems | Gas-based electricity | 209,04 | 3,35
Integrated Solar Combined Cycle power plant | Gas-based electricity | 409,22 | 187,88
Bio-gas systems | Gas-based electricity | 564,60 | 791,17
Gasification plants | Gas-based electricity | 0,33 | 1,13
Urban waste-to-energy plants | Gas-based electricity | 115,53 | 403,41
TOTAL | | 3.190,70 | 5.637,14
Note: meeting the equivalent energy demand through the conventional route would involve initial investments of only 410 million US$, compared to the 1,48 billion US$ required to implement the Master Plan. However, the life cycle cost of the conventional option works out to 5,6 billion US$, compared to 3,2 billion US$ for the renewable energy Master Plan. A sensitivity analysis, based on petroleum derivative price reductions in 10% steps, showed that the conventional hydrocarbon option becomes economically convenient only at no less than a 70% price reduction. This affirms the economic attractiveness of the renewable energy option.
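The pattern described in the note - low up-front investment but high running costs for the conventional option, the reverse for renewables - can be sketched with a simple present-value calculation at the stated 5% inflation and 9% discount rates. The capital and operating figures below are invented for illustration and are not Master Plan data:

```python
# Minimal life-cycle-cost sketch in the spirit of the analysis described above,
# using the stated 5% inflation and 9% discount rate. Capital and O&M figures
# are illustrative assumptions, not Master Plan data.

def life_cycle_cost(capital, annual_om, years, inflation=0.05, discount=0.09):
    """Present value of capital plus inflating annual O&M/fuel costs."""
    pv = capital
    for t in range(1, years + 1):
        pv += annual_om * (1 + inflation) ** t / (1 + discount) ** t
    return pv

renewable = life_cycle_cost(capital=1500, annual_om=20, years=20)  # high capital, low running cost
baseline = life_cycle_cost(capital=400, annual_om=110, years=20)   # low capital, fuel-heavy
print(f"renewable LCC: {renewable:.0f}, baseline LCC: {baseline:.0f}")
```

With these illustrative inputs the renewable option costs almost four times as much up front yet comes out cheaper on a life-cycle basis, mirroring the 1,48 billion versus 410 million investment and 3,2 versus 5,6 billion life-cycle comparison in the note.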
ALTERNATIVE SCENARIOS
Considering that the renewable energy Master Plan offers a clear, economically viable option compared to the hydrocarbon-intensive baseline, besides the social and environmental benefits, and considering the current state of development trends and the interest and support of the Syrian government, two alternative scenarios are also presented.

Accelerated growth scenario
If larger resources are available from the government, matched by the international development assistance community, and larger contributions come from the industrial sector, more physical achievements can be made. For such a scenario to be realised, the accompanying measures need to be fast-tracked and there should be increased policy support, especially to encourage private sector investment. Some of the characteristics of this accelerated scenario are:
- A large increase in solar thermal and bio-energy achievements, and an increase in wind and hybrid systems;
- Faster completion of the accompanying measures, especially the institutional developments/upgrades and the studies/market research;
- Major policy initiatives to mainstream renewable energy developments and increase private sector participation;
- The energy contribution from renewables increases to 6,73% of the primary energy demand in 2011;
- The total cost of implementation increases to 2,4 billion US$ and the life cycle cost increases to 5,25 billion US$. The investment and life cycle costs of the corresponding conventional option are 668 million US$ and 8,93 billion US$ respectively.

Focused growth scenario
In the case of constraints on resources and time, owing to lower levels of support from the donors, as well as delays in implementing the accompanying measures, the Master Plan may need to focus on a set of high-priority technologies and limit developments in others. The high-priority technologies to be focused upon are:
- Domestic and non-domestic solar hot water systems;
- PV village electrification - stand-alone and mini-grids;
- Solar home systems and lanterns for Bedouins;
- PV pumping systems for irrigation;
- PV health and education systems;
- Small and mini-hydro and canal drop hydro;
- Wind electricity generation, grid-connected and stand-alone.
The energy contribution from the focused growth scenario will be 2,85% of the primary energy consumption in 2011. The cost of implementation will be 811 million US$, the major share of which will be borne by the private sector.
                                  | Renewable Energy Master Plan | Accelerated Growth Scenario | Focused Growth Scenario
Energy contribution in 2011       | 4,31 %       | 6,73 %      | 2,85 %
Total investment costs, US$       | 1,48 billion | 2,4 billion | 845 million
Total life cycle costs, US$       | 3,2 billion  | 5,2 billion | 1,9 billion
Emissions reduction:
  CO2 (tonnes/year)               | 2.604.200    | 4.069.600   | 1.722.350
  NOx (tonnes/year)               | 17.200       | 26.900      | 11.400
  CO (tonnes/year)                | 26.300       | 41.100      | 17.400
  SO2 (tonnes/year)               | 32.400       | 50.600      | 21.400
Employment generation (units)     | 7.225        | 11.014      | 6.301
CONCLUSIONS
- Energy development under the Master Plan will be dominated by wind, bio-energy, solar and hybrid system developments. The specific technologies that will make large contributions are wind electric and solar water heating systems.
- The RD&D phase will be dominated by solar and hybrid systems, while the pilot phase will be dominated by bio-energy and wind.
- The key technologies comprising the bankable projects will be wind, bio-energy and solar thermal.
- The energy development plan will need a set of accompanying measures involving institution and capacity building, promotion and studies, and the upgrading of higher education.
- Large-scale employment opportunities will be generated.
- Increasing the renewable energy contribution in the primary energy balance will bring significant social and environmental benefits.
ELECTRICITY WORLDWIDE
Installed capacity: 3.300 GW
Production amount: 14,5 E6 GW-hr
Capacity factor: 0,51
Consumption: 2.400 kW-hr per capita/y
Distribution by source: fossil fuels 64,5 %, hydro 18,5 %, nuclear 17,0 %
Installed capacity per capita: 0,54 kW, or 0,72 kW if we consider that 1,6 E9 people have no access to electricity.
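The per-capita capacity figures follow from the installed base and population; the world population value used below is our assumption for the period:

```python
# Reproducing the per-capita installed-capacity figures quoted above.
# World population of ~6.1 billion is our assumption for the 2001-2002 timeframe.

installed_gw = 3300
population = 6.1e9
without_access = 1.6e9

print(f"all: {installed_gw * 1e6 / population:.2f} kW/capita")  # text: 0,54
print(f"served only: {installed_gw * 1e6 / (population - without_access):.2f} kW/capita")  # text: 0,72
```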
ELECTRICITY IN G-8 COUNTRIES*
Installed capacity: 1.750 GW
Population: 850 million
Average installed capacity: 2,06 kW per capita
Production: 8 E6 GW-hr
Consumption: 9.300 kW-hr per capita/y
* Canada, France, Germany, Italy, Japan, Russia, UK, USA.
It should be noted that more than 50% of the installed capacity worldwide is in the G-8 countries. The G-8 countries also consume more than 50% of the worldwide electricity production. Consumption per capita ranges from 4.500 kWh in Italy up to almost 18.000 kWh in Canada.

WIND ENERGY CONTRIBUTION
Installed power in wind farms:
Germany         | 12.000 MW      USA         |  4.710 MW
Spain           |  5.000 MW      India       |  1.700 MW
Denmark         |  2.900 MW      China       |    400 MW
Italy           |    785 MW      Japan       |    350 MW
The Netherlands |    677 MW
U.K.            |    562 MW
Total Europe    | 22.330 MW      Total World | 30.400 MW
ELECTRICITY ACCESS IN 2000 - Regional Aggregates*
Region               | Electrification rate % | Population without electricity, million | Population with electricity, million
World                | 72,8 | 1.644,5 | 4.390,4
Developing countries | 64,2 | 1.634,2 | 2.930,7
Africa               | 34,3 |   522,3 |   272,7
Asia                 | 67,3 | 1.041,4 | 2.147,3
Latin America        | 86,6 |    55,8 |   359,9
Middle East          | 91,1 |    14,7 |   150,7
Transition economies | 99,5 |     1,98 |  351,8
OECD**               | 99,2 |     8,75 | 1.108,3
* From IEA World Energy Outlook 2002.
** The OECD figures aggregate some important regional variations: the electrification rate for Mexico and Turkey is about 95%; all other member countries have 100% electrification.
One comment: in more than 30 countries, totalling about 500 million people, the average electricity consumption per capita per year (residential and non-residential) does not exceed the very modest figure of 50 kWh. Moreover, in some 45 countries, with a total population of about 3.300 million, electricity consumption is in the range of 100 to 1.000 kWh/capita/y, against the worldwide average of 2.400 kWh. Summing up, we could say that almost 4 out of 5 of the world population with access to electricity suffer from a shortage of electricity.
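The electrification rates in the table are simply the served population divided by the total; a quick check on the regional aggregates quoted above:

```python
# Electrification rate = served / (served + unserved), from the table above
# (population figures in millions, IEA WEO 2002 as quoted in the text).

regions = {
    "World": (4390.4, 1644.5),
    "Africa": (272.7, 522.3),
    "Asia": (2147.3, 1041.4),
    "Latin America": (359.9, 55.8),
    "Middle East": (150.7, 14.7),
}
for name, (with_e, without_e) in regions.items():
    print(f"{name:14s} {with_e / (with_e + without_e):6.1%}")
```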
Situational Analysis of North Africa
(Algeria, Egypt, Libya, Morocco, The Sudan, Tunisia)
Population (2002): 180 million
Population projection (2020): 240 million
Average consumption: 750 kWh/capita/y
Situational Analysis of the Middle East (Arab League + Iran)

Group 1
Country | Population | Consumption, GW-hr/y | Consumption, kW-hr per capita/y
Bahrain | 0,672 million |  5.775 |  9.700
Kuwait  | 2,253 million | 30.515 | 16.850
Qatar   | 0,606 million |  8.200 | 14.100
UAE     | 3,550 million | 31.390 | 13.340
Total   | 7,081 million | 75.860 | 10.700 (average)

Group 2
Country      | Population     | Consumption, GW-hr/y | Consumption, kW-hr per capita/y
Saudi Arabia | 23,370 million | 112.690 | 5.580
Oman         |  2,522 million |  10.672 | 4.480

Group 3
Country | Population    | Consumption, GW-hr/y | Consumption, kW-hr per capita/y
Lebanon | 3,680 million |  9.010 | 2.920
Libya   | 5,370 million | 19.500 | 3.655

Group 5
Country   | Population     | Consumption, GW-hr/y | Consumption, kW-hr per capita/y
Somalia   |  7,760 million |   278 |  30
The Sudan | 37,100 million | 1.345 |  48
Yemen     | 19,500 million | 2.510 | 150

Note: average consumption per capita = 1.495 kW-hr/y, which amounts to almost 62% of the world average. Residential consumption amounts to almost 70% of the total.
Distribution by source
Country   | Installed capacity, GW | Fossil-fired % | Hydro %
Syria     | 6,000 | 86,4 | 13,6
The Sudan | 0,606 | 29,4 | 70,6
Tunisia   | 2,290 | 99,2 |  0,8
UAE       | 5,710 | 100  | ---
Yemen     | 0,810 | 100  | ---

Note: globally, almost 95% of the electricity produced in the region comes from burning fossil fuels.
STATUS OF NUCLEAR ENERGY
A. GAGARINSKI
Russian Research Centre "Kurchatov Institute", Moscow, Russia

Below is a brief overview of the present-day status of the world's nuclear power production, as well as of the world nuclear community's opinions concerning the perspectives of nuclear energy's contribution to a sustainable solution of the energy challenges faced by mankind. The overview was prepared for the Energy Monitoring Panel at the 30th Session of the International Seminars on Nuclear War, Planetary Emergencies and Associated Events (Erice, August 18-24, 2003).

STATUS OF THE NUCLEAR POWER INDUSTRY
In the 50 years of its existence, nuclear energy has reached the level of producing about 7% of the primary energy consumed by mankind, thus outdistancing hydro-energy and considerably outstripping all the other renewable sources. Nuclear power plants are operating or being built in only 33 of the almost two hundred countries of the world; however, two-thirds of the planet's population lives in these "nuclear" states. The United States and Russia initiated the peaceful use of nuclear energy and were later joined by other developed countries, which have accumulated considerable experience in the development, construction and operation of nuclear power facilities intended for various applications, along with nuclear fuel cycle enterprises. In the USA and Russia alone, the total number of nuclear units built for land, sea and space applications is close to a thousand. Today the baton has been handed to the developing world, including China and India (i.e., 40% of mankind), which has made the nuclear option an integral part of its sustainable development strategy for the XXI century (Table 1).

According to the IAEA, by the end of 2002 the world operated a total of 441 nuclear power reactors (Table 2) with a total net capacity of 358.6 GW, with another 33 reactors under construction in 12 countries (22 in Asia, 10 in Europe and 1 in South America)^i. Of the 22 nuclear power units put into commercial operation in the last 5 years, 70% were built in Asian countries (Republic of Korea, China, India, Pakistan). Against the background of steadily developing major nuclear programmes in the developing countries of Asia (Republic of Korea, China, India and others), and political declarations about new positive nuclear power policies in Russia and the USA, some West European countries (Sweden, Germany, Belgium) have announced a gradual phase-out of nuclear power - which today makes an important contribution (30-60%) to their electricity generation. Two NPP units have already been closed for purely political, not technical, reasons (Barseback-1 in Sweden and Chernobyl-3 in Ukraine). In many countries nuclear power development plans face strong public opposition. On the whole, on the global scale, the world's nuclear power industry has demonstrated steady growth in recent years (at a rate of about 3%, Table 3), exceeding the growth rate of the world power industry as a whole. However, the nuclear contribution to world electricity production over the next 20 years is expected to decrease from today's 17% to 15%.
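A declining share despite positive absolute growth simply means that total electricity supply is expanding faster than nuclear output; the implied rate can be backed out from the figures just quoted (a sketch, using only numbers from the text):

```python
# If nuclear output grows ~3%/yr while its share of world electricity slips
# from 17% to 15% over 20 years, the implied growth of total generation is:

nuclear_growth = 0.03                         # from the text
share_now, share_later, years = 0.17, 0.15, 20
total_growth = (1 + nuclear_growth) * (share_now / share_later) ** (1 / years) - 1
print(f"implied total electricity growth: {total_growth:.2%}/yr")
```

This gives roughly 3.7% per year for total electricity, i.e., the expected share loss corresponds to nuclear growing only slightly more slowly than the world power industry.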
Estimates of this situation made by the world's nuclear specialists vary from "stagnation" to stable development, with the generally accepted thesis of the end of the "first nuclear era". Here the presumption is that "the present phase of nuclear power development has confirmed its viability. Key technical issues have been identified, the principal ways to solve these problems are already known, and after their realization in the current century, the start of the new phase of nuclear energy use - large-scale nuclear power development - will become possible"^ii.

STATE OF OPINIONS ON THE FUTURE OF NUCLEAR ENERGY
Probably the most concentrated representation of present-day opinions on nuclear energy's future was given at the large International Conference of the IAEA dedicated to innovative technologies for nuclear power and the fuel cycle (Vienna, June 2003), which gathered leading specialists and nuclear programme managers from 40 countries and 10 international organizations. The well-known arguments forming the basis for the supposition of the coming of the "second nuclear era" were stated at this meeting:
- Resource constraints and growing competition for non-renewable resources;
- Crisis factors related to the highly uneven distribution of fossil fuels and dependence on unstable energy-exporting regions;
- Global and regional environmental limitations;
- Nuclear energy's ability to become a factor of stability for economic development, as well as an environmentally acceptable part of the energy option.

At the same time, the Conference summed up the first results of perhaps the most important event of the last period in the field of forming the nuclear future - the "transition from words to deeds", i.e., the organization and development of the two major international programmes on new nuclear power systems. Started at the beginning of 2000, the project of the International Forum "Generation IV" (GIF) has united ten developed and developing, but "nuclear-advanced", countries (Argentina, Brazil, Canada, France, Japan, Republic of Korea, South Africa, Switzerland, UK, USA), which intend to propose to the world community new nuclear power systems suitable for commercial operation by 2030. GIF is focused on advanced nuclear power technologies; it has already selected (after considering over 100 concepts submitted internationally) the six systems considered to be the most promising (gas-, sodium- and lead-cooled fast neutron reactors, a molten salt system, and two thermal reactors: a water-cooled reactor with supercritical parameters and a gas-cooled high-temperature reactor), and has launched research on four of the six selected concepts.

The IAEA's international project INPRO, launched a year later, has the purpose of uniting not only the countries possessing nuclear technologies, but also potential nuclear power consumers. Fifteen European, Asian and South American countries have become members of INPRO^iii. The goal of INPRO is to identify the national and international activities needed to ensure an important contribution of nuclear energy to the sustainable satisfaction of energy demand in the XXI century. Consequently, INPRO has from the very beginning been oriented towards the interests of the entire world community, including less developed states. It includes the analysis of international-dimension issues: fuel and NPP leasing, international nuclear fuel cycle centres, etc.
In its first stage, which was completed in the middle of 2003, a system of criteria for comparing and selecting innovative nuclear power system concepts was developed for the following topical areas:
- Resources, demand and economy;
- Environment, fuel cycle and waste;
- Safety;
- Non-proliferation;
- Cross-cutting requirements.
It should be noted that this orientation of INPRO has made it possible for Acad. E. Velikhov to call the project "a navigator in the turbulent world". In its energy demand analysis, INPRO refers to the Special Report on Emission Scenarios (SRES) commissioned by the Intergovernmental Panel on Climate Change (IPCC) and published in 2000. The SRES presents 40 reference scenarios, grouped according to four storyline families, extending to 2100. Global primary energy grows by between a factor of 1.7 and 3.7 from 2000 to 2050, with a median increase by a factor of 2.5. Electricity demand grows almost 8-fold in the high economic growth scenarios and more than doubles in the more conservational scenarios at the low end of the range; the median increase is by a factor of 4.7. Moreover, nuclear energy plays a significant role in nearly all of the 40 SRES scenarios.

The next phase of INPRO will consist of the analysis of specific nuclear power systems against the identified criteria and user requirements. The following countries have "volunteered" to propose their designs: Argentina (a small NPP with PWR for electricity production and desalination), India (an advanced heavy-water reactor with thorium cycle) and Russia (a sodium-cooled fast neutron reactor).

Returning to the results of the IAEA's "innovative" conference, it should be noted that, despite the comprehensive overview of multiple concepts and the variety of opinions on the nuclear future, several common positions shared by practically the entire world community were identified. They include:
- The perspective of the need to close the fuel cycle, which would eliminate resource constraints but raise the problem of constructing international nuclear fuel cycle centres;
- The need for wide involvement of nuclear energy in non-electrical applications, first of all in hydrogen (Table 4) and potable water production;
- A wide capacity range of nuclear facilities (including small and medium capacity systems), to satisfy national and regional demands adequately;
- A key role for proliferation resistance when involving new countries in the peaceful use of nuclear energy, avoiding misuse of this sphere for the production of nuclear weapons by applying intrinsic (technical design) and extrinsic (states' decisions and undertakings) measures;
- The growing role of governments in adopting long-term strategic energy decisions, forming policies and investing in technologies;
- The comparatively short historical period (5-10 years) for the adoption of the above decisions, coupled with the crisis of human resources and the issue of knowledge preservation in the area of nuclear energy.
This last factor may become the decisive one, and even the next few years will show how much the "great expectations" of nuclear professionals have to do with reality.
It should be noted that the overwhelming majority of the world's nuclear park consists of thermal neutron reactors cooled with light water (Table 2).

ii From the report prepared by Sandia National Laboratories (USA) and the Russian Research Centre "Kurchatov Institute", April 2002.
iii It should be noted that Argentina, Brazil, Canada, Republic of Korea and Switzerland participate both in GIF and INPRO.
Table 1. Nuclear power reactors in operation or under construction in the world (IAEA data as of December 2002).
Note: The total includes the following data for Taiwan, China: 6 units, 4,884 MW(e) in operation; 2 units, 2,700 MW(e) under construction; 34.09 TW(e)h of nuclear electricity generation, representing 21.57% of the total electricity generated there; 128 years 1 month of total operating experience.
Table 2. World nuclear park (by reactor types)*.
* Atomwirtschaft, No. 5, 2003.
** Including BN-600, Monju, Phénix.
Table 3. World energy development.
Table 4. State and projection of the world's annual consumption of hydrogen, million tons.

Year                                                          1978    1985    2000
Hydrogen consumption, million tons/year                       11.5    13.6    50.0
Energy required for hydrogen production, million TOE:
  minimum (3 kWh/m3)                                           100     120     460
  medium (4.5 kWh/m3)
Minimum thermal capacity of high-temperature reactors
  required for hydrogen production with their use, GW          110     130     520

Source: Assessments of RRC "Kurchatov Institute"
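The rows of Table 4 are linked by straightforward unit conversions; the sketch below reproduces the order of magnitude of the "minimum" energy row from the consumption row. The hydrogen specific volume (about 11.1 m3/kg at normal conditions), the 1 toe = 11.63 MWh conversion, and the one-third efficiency of converting primary energy to electricity are standard values assumed here, not figures from the source:

# Rough consistency check of Table 4, "minimum" variant (3 kWh per m3 of H2).
M3_PER_KG   = 11.1    # specific volume of hydrogen at normal conditions (assumed)
KWH_PER_M3  = 3.0     # "minimum" electricity demand of electrolysis, from the table
MWH_PER_TOE = 11.63   # standard tonne-of-oil-equivalent conversion (assumed)
EFFICIENCY  = 0.33    # primary-to-electric conversion efficiency (assumed)

def primary_mtoe(h2_megatons):
    """Primary energy (million TOE) needed to produce the given annual H2 output."""
    volume_m3 = h2_megatons * 1e9 * M3_PER_KG        # Mt -> kg -> m3
    electricity_mwh = volume_m3 * KWH_PER_M3 / 1e3   # kWh -> MWh
    return electricity_mwh / MWH_PER_TOE / EFFICIENCY / 1e6

for year, megatons in [(1978, 11.5), (1985, 13.6), (2000, 50.0)]:
    print(year, round(primary_mtoe(megatons)), "Mtoe")   # ~100, ~118, ~434

The computed values fall close to the 100, 120 and 460 Mtoe shown in the table.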
ENERGY DEMAND GROWTH IN CHINA: THE CRUCIAL ROLE OF ENERGY EFFICIENCY PROGRAMS

MARK D. LEVINE
Lawrence Berkeley National Laboratory, Berkeley, USA

INTRODUCTION

China faces a wide array of energy issues. Among the most important of these is the nature and rate of energy demand growth. Issues regarding the nature of energy demand growth include (1) the relative growth in urban versus rural areas; (2) the relative growth of commercial energy demand as a substitute for the inefficient and non-sustainable use of traditional fuels (biomass in the form of agricultural and forestry wastes); and (3) the relative growth of industrial, residential, service, transportation, and agricultural energy demand.

The most important issue regarding the rate of energy demand growth concerns the linkage between energy and economic growth. China has been remarkable among developing countries in cutting the growth of energy demand to less than half that of gross domestic product (GDP) for more than two decades. This experience is especially significant because most developing countries, during periods of expanding economic output (and China has expanded at a torrid pace over this period), increase energy use per unit of GDP as a result of employing more modern technology that substitutes energy for other inputs. But China has shown not only that energy intensity need not increase to accompany modernization, but that substantial decreases are possible.

All of this is part of a continuing saga. As China has changed its economic system, many of the policy structures designed for a centrally planned economy are no longer functional. This is, to some extent, the case for many of the energy efficiency programs developed originally in the early 1980s. Thus, China faces many challenges moving forward in assuring that energy efficiency continues to be effective. These challenges, and the manner in which they will be addressed, are of great importance for the economic and environmental evolution of China and for the rest of the world, which is concerned about prospects of limiting the growth of greenhouse gases.

ENERGY EFFICIENCY IN THE OVERALL CHINESE ECONOMY

Phase I: "Soviet Style" Energy Policy (1949-1979)

In the early years after the Communists took control of China, the main role of energy was to support the development of industry, primarily heavy industry. The model was that of the Soviet Union. The goal was to increase energy supply rapidly from a very small base. This goal was achieved, as energy supply grew at rapid rates: an average of 18% per year from 1949 to 1957. The Great Leap Forward, from 1957 to 1960, produced growth that was not sustainable (and much of which undoubtedly existed only in the data entries of government officials, as "backyard" coal mines reported illusory coal production). The growth rate from 1962 to 1979 averaged 9% per year, about the same rate as officially reported growth in GDP and perhaps one percent per year higher than actual GDP. Energy demand in these periods is shown in Figure 1.
This growth in energy supply and demand was spurred by subsidized energy prices for all forms of energy - coal, oil products, and electricity. A central allocation system made certain that the preferred heavy industry obtained sufficient quantities of energy (generally coal), although frequent shortages often put pressure on the system. During this period, essentially no attention was paid to the environmental consequences of producing, transporting, transforming, and using energy.

This is very similar to the process that the Soviet system initially used to achieve rapid growth rates of heavy industry. At the time of the Soviet and later Chinese industrialization, it was widely believed that the development strategies of the two countries could serve as a model for other developing countries. The advantages of the approach were thought to be that heavy industry would serve as a source of growth for the entire economy and that low energy prices would provide essential services to all citizens in addition to spurring energy-intensive industry. The environmental devastation that accompanied such development is now apparent, as is the unbalanced nature of industrial development in which natural resources are very heavily subsidized. In spite of these criticisms, the focus on heavy industry, and the subsidization of energy for this industry, did achieve the goal of promoting initial industrialization where it had not previously existed. Today, however, most development economists support other approaches for spurring industrialization in developing countries.

Phase II: Deng's Initial Reforms (1980-1992)

Deng Xiaoping transformed energy policy in China, although not without strong encouragement from energy experts in universities and research institutions. There were many consequences of Deng's firm target of quadrupling GDP in twenty years. One of the most important was the recognition that energy growth needed to be substantially below that of GDP growth. A quadrupling of energy over twenty years would cause very serious problems for the Chinese economy and for the environment. A quadrupling of energy supply and demand would starve the economy of capital for essential infrastructure projects because of the high capital costs of energy supply. It would also have intolerable environmental consequences, since energy systems in China are essentially uncontrolled and are responsible for a very large portion of environmental impacts on air, land, and water.

The initial reforms were implemented quickly. Within one year (1981, the first year of the 6th Five-Year Plan), fully 11% of the national energy investment was devoted to improving the efficiency of energy use (Figure 2). In previous years, a small (probably negligible) percentage of energy investment had been devoted to energy efficiency, as no attention had heretofore been given to energy end-use. A whole range of energy policy reforms was rapidly implemented beginning in 1981. These are summarized in Table 1. In addition to incentives for investments in energy projects - which in the early years involved allocation of direct government funds but evolved over time to low-interest loans - the reforms consisted of:
- Quotas on energy use of key industries (those consuming more than 10,000 tonnes of coal per year),
- A monitoring network for energy use in industry, with the requirement that the key industries have staff in place for the monitoring,
- Creation of a national network of energy conservation service centers serving all major cities and some less urban areas,
- A widespread public information campaign supporting energy saving, including an annual energy conservation week in November,
- Additional financial incentives for energy efficiency, including a sharing of the benefits of energy savings with the company achieving them (when energy use declined below quota),
- Authority for officials to close state-run operations (equipment, plants, and whole factories) that were outmoded and highly inefficient, with procedures spelled out for such closures,
- An exploratory research, development, and demonstration program, and
- Institutions of government to run these programs.

The decision to create this wide array of programs was made in 1980. Most of them were up and running within one year, not as trial or pilot programs but as fully operational entities. This is strongly indicative of the importance that China placed on moving aggressively to reduce energy intensity and improve the efficiency with which energy is used. Available evidence suggests that many or most of these programs were successful. As we will see in the next section, the aggregate national data demonstrate success in reducing the growth of energy as compared with GDP growth.

Phase III: Effects of Economic Reform on Energy Efficiency Policy (1993-present)

By the early 1990s China had begun a process of reducing the role of central planning in its economy. Most new enterprises were not owned or controlled by the state. Over time, increasing quantities of goods and services were produced and sold in markets that were largely free of government control. Initially, central control over industries such as energy supply was not relaxed, as the government did not intend to relinquish authority over essential activities. Earlier efforts to increase energy prices had met with stiff resistance from ministers with responsibility for industrial outputs (especially heavy industry); the government was unable to achieve energy price reform until taking swift and effective action in the late 1990s.

Because the changes to the energy supply industry greatly affect energy demand and energy efficiency programs - the subject of this paper - we need to summarize such developments. In brief, the middle to late 1990s saw fundamental changes in the operation and control of the energy supply industries. A fundamental change involved increases in oil prices to world levels and the deregulation of domestic prices of coal, natural gas, and electricity. This is a profound development in the market for energy products. It represents the recognition that central control of prices - even of products essential to the economy - was no longer appropriate and that markets would play the major role in balancing energy supply and demand in China. It is difficult to overestimate the significance of this reform of the energy system in China.

The period also saw organizational changes in the energy supply industries, with ministries formally divesting control of most energy sectors. This has worked out differently in each of the energy sectors. In electricity, for example, the decision in the late 1990s to "divest" the State Power Corporation (SPC) from the government
represented only a partial change in the institutional authority and oversight of SPC. SPC decision makers continued to have great influence on government decisions, government authorities continued to have great influence on SPC decisions, and SPC continued to be a state-owned company. This arrangement was satisfactory neither to government officials, who often felt that their hands were tied (especially in attempts to foster reforms within the power industry), nor to SPC staff, who felt that the government was meddling in their affairs. This is now in the process of rapid reform through the creation of entirely new institutions to regulate electric utilities, as well as through the break-up of the SPC into at least six different electric utilities. Each of the energy industries has undergone changes, not necessarily as significant as those affecting the electricity industry, but nonetheless of considerable import, from the late 1990s through the present (2003).

Energy efficiency policy has not escaped these reforms. The system of quotas for large industries was abolished, as more and more companies were no longer state-owned and could not be so closely controlled. The energy monitoring system was largely disassembled, as companies were no longer mandated to do the work of government. The low-interest loan program for energy efficiency investments was no longer widely available, in large part because such favorable loan programs were often found to show favoritism to certain clients and, in many sectors, encouraged corruption. The energy conservation service centers were, in many cities and localities, starved of government support and permitted to wither if they could not find support from a variety of sources. In short, through the middle and late 1990s, elements of the most successful energy efficiency program in any developing country were substantially weakened.

At the same time, a new energy conservation law - debated for more than a decade by the People's Congress - was finally passed in 1997. This defined the authority of the government over energy efficiency, and stipulated a wide variety of standards and regulations for the government to formulate and for which implementation mechanisms were to be established.

It is also important to note that the prevailing structure for energy efficiency programs was only partially disestablished. The large national loan program was ended, but other decentralized programs took its place. The energy conservation service centers no longer received strong government support, but many of them survived and thrived by gaining other sources of support. Energy efficiency standards for household appliances and air conditioners were promulgated in this period. Inefficient factories continued to be closed, and outmoded energy-using equipment in other factories was replaced. Of particular importance were several new programs supported with major funds from the World Bank's Global Environmental Facility: Green Lights, which resulted in very large improvements in the reliability of compact fluorescent lamps (and other efficient lighting equipment); a major program to support the manufacture of energy-efficient refrigerators; and an effort to create firms offering energy efficiency services relying on the private sector. The Packard Foundation created a fund to support energy efficiency policy reform in China, administered by the Energy Foundation.
Impact of Energy Efficiency Programs in China

Figure 3 shows the results of the energy efficiency and related programs to reduce the growth of energy demand relative to GDP. Official figures for GDP (upper curve) suggest that energy demand in 2002 was one-third of what it would have been if energy had grown at the rate of GDP since 1977. A corrected estimate of GDP, with adjustments for likely overstatements of GDP growth, suggests that energy demand in China would be twice today's levels if these policies and programs, and the practices that they encouraged, had not been put in place.

As noted earlier, this is a remarkable achievement. Virtually all other developing countries have shown a pattern of energy growth during periods of industrialization in which energy rises faster than GDP. In China, it has grown less than half as fast as GDP over a period of 25 years. Figure 4 shows that Chinese emissions of greenhouse gases (GHGs) have grown to about two-thirds of those of the United States. Since GHG emissions are proportional to energy use (assuming no major shift of energy sources), GHG emissions of China would today be twice that, or one-third larger than those of the United States, if China's energy demand had grown at the growth rate of GDP. Of course, energy would not have grown so fast, as such growth would have starved the economy of capital and suffocated China with environmental impacts.

Thus it is clear that energy efficiency policy has been a crucial element in enabling China's economy to perform at such high levels (averaging more than 9% real growth) for such a long period (more than two decades). It is also clear that the world is much better off - in terms of reductions in GHG emissions - as a result of such policies.

THE FUTURE

It is extremely difficult to predict, or even project, future demand growth in China (or any other nation). Figure 3, for example, shows drops in the absolute demand for energy (not just in its growth) in 1999, continuing through 2001. These reductions in demand were entirely unexpected, especially considering the continuing strength of the economy. The years 2002 and 2003 showed a recovery of energy demand greater than might have been expected based on past trends. Some of this peculiar behavior may be accounted for by inaccuracies in data, brought about by the underreporting of production and use of coal from mines that were shut down by law but not in fact closed.

In spite of the difficulty of producing accurate energy demand projections, there are patterns presently visible that make it possible to formulate some predictions about the future of energy efficiency in China. First, and of greatest importance, is the observation that China has rededicated its efforts to promote energy efficiency. The initial stages of market reform signaled a transition in energy efficiency programs to eliminate policies that were geared to a centrally planned economy. In some instances the transition appears to have gone too far (e.g., in the weakening of the energy monitoring system and in failing to take full advantage of the 20,000 staff members of the many energy conservation service centers throughout the nation). However, very few essential elements of the energy efficiency apparatus have been lost. Importantly, the present leadership in the National Development and Reform Commission (the powerful organization that replaces the State Development Planning and State Economic and Trade Commissions) is clearly committed to reviving the energy monitoring systems and
the energy conservation service centers, as well as developing new policy approaches to stimulate energy efficiency. These approaches will take full advantage of markets whenever possible, and thus in many instances may be more effective than past approaches.

Second, there continue to be opportunities to improve the energy performance of all sectors. Underemphasized sectors - buildings and transportation - are already receiving greater attention. The increased wealth in China has made possible demonstrations of advanced technologies in industry. Energy standards for appliances and buildings are moving forward. New approaches to providing energy efficiency services, such as privatized energy service companies, are being pursued and are likely to come into being.

Third, major new programs, with support from important donors outside China, are being designed to strengthen energy efficiency policy and implementation. The UNDP, through the Global Environmental Facility, is providing support for a major enhancement of energy efficiency policy in China. The European Community, World Bank, and Packard Foundation are continuing their support of energy efficiency policy in China. A key reason for this outside support is that the government of China continues to place high priority on energy efficiency, both internally and in terms of requests from donors.

This does not mean that increasing end-use energy efficiency in China will be easy. Nor is it certain that the emphasis on energy efficiency will continue unabated. But I believe that the next twenty-five years of energy efficiency in China will mirror the past twenty-five years. That is, it is my expectation that China will continue its commitment to energy efficiency, will continue to achieve major successes, and will continue to be an example to the world that a major developing country can sustain rapid economic and industrial development while cutting the growth of energy demand to half that of GDP or possibly less.
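As a rough check on the arithmetic behind the Figure 3 discussion above: if GDP grows at rate g and energy demand at rate e for n years, actual demand ends up at ((1+e)/(1+g))^n of the level implied by energy tracking GDP. A minimal sketch; the 9% GDP rate is from the text, while the roughly 4% energy growth rate is an illustrative assumption consistent with "less than half that of GDP":

# Ratio of actual energy demand to the demand implied by energy growing
# in step with GDP, after compounding both rates over a number of years.
def demand_ratio(energy_rate, gdp_rate, years=25):
    return ((1 + energy_rate) / (1 + gdp_rate)) ** years

# ~9% GDP growth (from the text); ~4% energy growth (illustrative assumption).
print(round(demand_ratio(0.04, 0.09), 2))   # ~0.31, i.e. about one-third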
Figure 1. China Energy Supply and GDP (1950 to 1980). (Stacked energy output series, including natural gas, plotted against GDP.) Source: NBS.
Figure 2. Share of Total Investment. (Energy efficiency's share of national energy investment.)
Figure 3. Energy Use, Actual and Projected at 1977 Intensity, 1952-1999. (Series: consumption at 1977 intensity with reported GDP; consumption at 1977 intensity with adjusted GDP; actual consumption.) Source: NBS.
Figure 4. Carbon Dioxide Emissions, 1950-1997. (Series shown include the USA and the former Soviet Union.) Source: ORNL.
Table 1. Energy-conservation policies & measures in Phase II.

Energy Management:
- factory energy consumption quotas
- factory energy conservation monitoring
- efficient technology promotion
- closure of inefficient facilities
- controls on oil use

Financial Incentives:
- low interest rates for efficiency project loans
- reduced taxes on efficient product purchases
- incentives to develop new efficient products
- monetary awards to efficient enterprises

RD&D:
- funded strategic technology development
- funded demonstration projects

Information Services:
- national information network
- national, local, and sectoral efficiency technical service centers

Education & Training:
- national, local, and sectoral efficiency training centers
- Energy Conservation Week
- school curricula
RISK ANALYSIS PERMANENT MONITORING PANEL MEETING 21-22 AUGUST 2003
Dr. Terence Taylor
International Institute for Strategic Studies - U.S., Washington, USA

PARTICIPANTS
William Kastenberg, Genevieve Lester, Jean Savy, Terence Taylor (chairman), Eileen Vergino, Henning Wegener
FOLLOW-ON WORK

During the PMP follow-on meeting, Henning Wegener presented a paper entitled "Shifting Ground: Some Recent Sociological Findings on Risk" for consideration by the group (see attached). Additionally, the group reviewed and prioritized the tasks set out as a result of the first PMP meeting in May 2003. The following were considered to be of highest priority and should be undertaken as soon as possible:
- Develop a working definition of risk in the context of the objective of the PMP as stated in the first PMP report (see the report of the May 2003 meeting) [task led by W. Kastenberg]
- Conduct an analysis of how perceptions of risk impact high-level decision making, and how perceptions can be incorporated in risk methodologies [task led by R. van der Zwaan]
- Develop a comprehensive understanding of the nature of risk in order to enhance high-level policy development (Henning Wegener to develop an outline of a paper to be presented at the next PMP meeting) [task led by J. Savy]
- Develop a better understanding of how high-level decision making for national and international security policy is conducted [task coordinated by E. Vergino with input from Lehman/Taylor/Wegener]
- Provide continuous review, updating and cataloguing of the risk analysis methodologies employed by the case study areas below [task coordinated by G. Lester with input from Savy]
- Conduct case studies that will illuminate the objective as stated above [coordinated by PMP]

CASE STUDIES

Case studies explored during the course of the PMP should adhere to the following criteria:
- One or more of the studies must be cases where risk was considered in the decision process, whether ongoing or complete, including a full understanding of the methodologies used and their effectiveness.
- One or more of the cases should illuminate a decision process where either risk analysis was not employed, or where different risk analysis methodologies might be applied, and explore possible outcomes if an appropriate, or different, methodology had been incorporated.
- Case studies should include a clear elucidation of the criteria used for the decision process.
- Only cases that are well documented will be considered.
- The studies undertaken should reflect a diverse set of national and international cases.
- All cases must involve the highest appropriate levels of decision-making (whether regional, state, or international).

Taking the above criteria into account, and drawing from the cases identified in the first PMP report, the following were considered as early candidates:
- Transportation - the "Prestige" oil tanker disaster of 2002 off the coast of northern Spain, which involved not only transportation but also hazardous materials.
- Public Perception - the May 2003 referendum in Switzerland on the future of nuclear energy, and the communication of risk.
- Epidemiology - examine the 2002-2003 SARS epidemic and the national and international decision processes, as well as the implications for risk assessment in future public health policy development.
- Nuclear - examine the decision and policy development for nuclear waste disposal with respect to the identification and final selection of Yucca Mountain.

Progress on these case studies will depend on the identification and selection of individuals who can complete work on them in the next 12-18 months.

NEXT MEETING

The next PMP meeting is proposed to run for three days during the week of 10 May 2004. The core group will develop a work plan and agenda by 30 September 2003.
THE PILLARS OF INTERNATIONAL SECURITY: TRADITIONS CHALLENGED
ANDREY A. PIONTKOVSKY
Strategic Studies Center, Moscow, Russia

Summary: In the author's opinion, the Iraqi crisis has shown the fragility of the modern-day international security architecture and the inability of existing international organizations to react adequately to the challenges that the world community now faces. It is possible that the time has come to strengthen international security by altering the existing world order.
THE "YALTA SYSTEM" COLLAPSE

The widespread opinion that, from Yalta until March 20, 2003, there existed a certain international security architecture consecrated by international law and effective international institutions is a profound delusion. The bi-polar world that existed from Yalta up to the collapse of the Berlin Wall in 1989 was based on - to use the currently fashionable term - "the law of the fist" of the two top-ranked players, the USSR and the USA. The UN and the Security Council were a kind of stage on which the world's top stars, together with a crowd of extras, competed with each other in propagandistic declamations and ideological arguments. The real issues of security, war and peace were resolved in a different place - wherever the two superpowers' dialogue took place.

Let us remember, for example, the most dramatic conflict of the half-century of confrontation - the Cuban missile crisis. The Security Council session where Adlai Stevenson displayed photographs of Soviet missiles in Cuba was quite spectacular and turbulent. However, the actual process of resolving this conflict, the record of which is now known not only day by day but hour by hour, had nothing to do with the Security Council.

The two nuclear superpowers learned a lot from the Cuban crisis. The result of this event was the development of a series of bilateral nuclear agreements - the Anti-Ballistic Missile Treaty, SALT-1 and SALT-2 (never ratified, yet observed by both parties) - and the creation of permanent institutions to support these agreements. The goal of these agreements was the codification of the fundamentally hostile relations between the two entities, preventing tensions between them from escalating into military, and potentially even nuclear, conflict. War became impossible because both parties accepted the concept - nowhere openly verbalized, yet implicit throughout those agreements - of mutually assured destruction (MAD). The parties developed their strategic forces so as to allow both of them to maintain the potential of inflicting an unacceptable degree of damage on their adversary through a retaliatory strike. Hence, the launching of a nuclear war (a first strike on enemy territory) would have automatically meant mutual self-annihilation. The MAD concept (and not the UN Charter) was the true cornerstone of the international security system during the cold war period. This system prevented a direct superpower clash that would have been fatal for the
world, yet it failed to avert dozens of local conflicts and wars in various regions of the world that destroyed millions of lives. In many of these, directly or through intermediaries, either the USSR or the U.S. - or both - were involved.

The nostalgic refrain about the inviolability of national sovereignty, supposedly effective in those happy days of the post-Yalta architecture of international security, certainly sounds strange to our ears. National sovereignty was violated to the left and to the right, including by the Soviet Union. It should suffice to remember the ventures into Hungary, Czechoslovakia or Afghanistan. However, it is important to note for future reference that there were circumstances when the breach of sovereignty was clearly a good thing in the eyes of the world. The Vietnamese troops' invasion of Cambodia was a clear breach of the latter's sovereignty, but it saved a further third of the Cambodian population from annihilation by an insane regime.

THE NEW THREATS

The collapse of the bi-polar world generated certain illusions with regard to security, the extreme manifestation of which was Fukuyama's concept of "the end of history." Very soon it turned out that it was not the end of history, but the beginning of many new and unpleasant histories - the painful disintegration of Yugoslavia, and conflicts in the former territory of the USSR, in Somalia, Rwanda, East Timor, etc. Finally, the events of September 11th demonstrated a new, all-out challenge posed to civilization by international terrorism. The world community found itself unprepared for all these challenges - both institutionally and conceptually.

The illusions about security institutions such as the UN and the Security Council have already been discussed above. Another widespread fallacy was the belief in certain norms of international law - standards that would guide all nations. If that were so, all the world's problems would boil down to defining an action as legitimate or illegitimate. If only it could be that simple. Let us review a few commonly recognized principles of international law, recorded in dozens of declarations, charters and treaties:
- Sovereignty and territorial integrity of a nation;
- The right of nations to self-determination;
- Human rights, formulated in the UN Declaration and reiterated in the laws of the majority of nations, including Russia;
- The right of states to self-defence.

If we now look at any serious international problem, at any of a few dozen smouldering or flaring local conflicts, we will see how wildly contradictory those principles are. Actually, all conflicts and problems are preeminently generated by these contradictions. Anyone who has taken at least an elementary course in logic knows that if a system of axiomatic statements contains mutually conflicting assertions, A and non-A, any arbitrary conclusion can be derived. Contemporary international law represents exactly such a system, and because of that, practically any action of a state in the international arena (as well as its opposite) may find validation in one of the norms of international law.
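The logical point here is the classical principle of explosion (ex contradictione quodlibet): a contradictory axiom system proves everything. For illustration, it is a one-liner in Lean:

-- From A and ¬A, any proposition B whatsoever follows.
example (A B : Prop) (h : A) (hn : ¬A) : B := absurd h hn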
Most advanced politicians understand this very well. Here is what RF President Vladimir Putin said during his press conference at the closure of the St. Petersburg summit of April 12, 2003: "However, in recent times many imperfections in the structure of international law have revealed themselves, as well as inherent inconsistencies in which, in my view, a serious potential for conflict is concealed." He continued, "Politicians and state leaders rely on effective legal mechanisms. The inadequacy of those mechanisms may be fraught with serious implications. I am convinced that if clearly functioning legal mechanisms for crisis resolution were set up in time, far more effective solutions to the most complex world problems could be found."

Let us now dwell in greater detail on this principle and the specifics of its application in the world after September 11th. As mentioned above, nuclear security during the cold war was based on a principle of containment, where each party was aware that its potential adversary was not suicidal. How can this principle operate now, when we are dealing with suicide bombers? A new potential menace has appeared in the world - terrorists with access to WMD - for which the containment principle does not work, and which can be countered only by preventive measures. The principle of the inviolability of national sovereignty has never been absolute and, all the more, cannot be so in the contemporary world.

Initially, the concept of a preventive strike was very clearly and straightforwardly formulated in the new U.S. National Security Doctrine published in September 2002. The declaration by the U.S. of the right to conduct preventive strikes as an intrinsic extension of the right of a nation to self-defense has been repeatedly criticized in the Russian press. Yet here are two quotes: "If anyone tries to use weapons commensurate with weapons of mass destruction against our country, we will respond with measures adequate to the threat. In all locations where the terrorists, or the organizers of the crime, or their ideological or financial sponsors are. I underline, no matter where they are." "In such cases, and I officially confirm this, we will strike. This includes preventive strikes."

Who are these hawks preaching a concept of preventive strikes that violates the sacred principle of national state sovereignty? Donald Rumsfeld, Paul Wolfowitz, Dick Cheney, Condoleezza Rice? The first quote comes from President Vladimir Putin's speech at the October 28, 2002 session of the government. The second is a statement by Defense Minister Sergei Ivanov, made even earlier, on September 22, 2002. Vladimir Putin's declaration was an official order by the Supreme Commander-in-Chief to the appropriate government agencies to develop a new Russian military doctrine that would include the concept of preventive strikes in response to threats against which the traditional deterrence concept has proved ineffective.

It looks as if each nation, taken alone, would adopt for itself, with ease and enthusiasm, the concept of the preventive strike, derived from the principle of the right to self-defense, yet would be rather critical of the readiness of other nations to adopt a similar concept. Who indeed will, in this case, define whether a preventive strike is legitimate, and the extent of its validity with regard to the actual threat? The Security Council? Has the Security Council ever defined anything? During the Cold War, when its uselessness
was obvious, or in the subsequent decade, when it demonstrated its helplessness, having been unable to prevent or stop any of the conflicts that mowed down hundreds of thousands of lives in the former Yugoslavia, the former USSR, Rwanda, Somalia or Afghanistan?
WORLD GOVERNMENT IS AT HAND

The increasingly chaotic character of the modern world and the challenges of radicalism, terrorism, and the proliferation of weapons of mass destruction generate an objective demand for some form of non-fictitious (unlike the UN and the Security Council) but real world government. Demand gives rise to supply. Since September 11, 2001, the U.S. has been attempting to play this role. This situation does not seem to satisfy anyone, including the Americans themselves. Confrontation with the U.S. and the formation of various anti-American axes will only lead the U.S. government to become more intransigent and, at the same time, less efficient (with negative implications for the world at large) the more its isolation increases.

Pleas to return to a certain "system of international security," allegedly destroyed by the Iraqi crisis, are totally vain, be they sincere or false. There never was such a system; there were not even conceptual approaches adequate to the challenges of the contemporary world. More and more, the world community should focus on the development of both the concept and the institutions for a new world order. First of all, it is necessary to turn to the problem of conflict among the various principles of international law and try to develop some reasonable rules of balance between them. Yet there should be clear awareness of the fact that, even with every potential improvement to the norms of international law, the solution to the problem cannot be purely legalistic. It will always be political. It is impossible to invent an abstract scheme suitable for the resolution of any emerging conflict in which both democratic nations and totalitarian regimes bent on obtaining nuclear arms are equal actors.

Only an alliance of responsible world powers, united by a common vision of the problems and challenges facing the modern world, sharing common values and having the resources - political, economic and military - to implement their joint policy, can perform the role of an efficient world government. The structure best able to meet these requirements is the Group of Eight. Russia, having become a full member of this framework, has an objective interest in the G8 expanding its area of responsibility into the sphere of international security. Because of the traditionally informal and confidential nature of discussions within the G8, it is the most useful forum for the realization of joint decisions on key issues of world politics. The U.S. will remain a leader within this eight (and in the future, maybe, nine or ten), yet constructive and open discussion of the current key policy issues would allow the leading powers to develop a culture of consensus. It is in the common interest of the world community not to alienate the U.S. but to convert it into a responsible leader accounting for the interests and concerns of its partners.
The United Nations, with its enormous bureaucratic structure, certainly will not disappear. It could play the role of organizer of joint decisions made by the leading powers. Such a transformation of the G8 into a leading international security institution is impossible without Russia's participation. Full participation in the G8 is a very important political resource for Russia. In our opinion, it is much more important than Russia's permanent membership on the Security Council - a position based on inertia and an exaggeration of our diplomatic attributes, inherited after the disintegration of the USSR superpower. The G8, as an institution for global security, would simply be ineffective without Russia, which is geographically adjacent to the sphere of instability that poses the worst potential threat to the world. For the same reason, Russia will not be able to maintain its security outside an alliance with the leading industrial nations.
SAFETY AS A RESULT OF PROVIDING INFORMATION

VLADIMIR BRITKOV
Institute for Systems Analysis (Russian Academy of Sciences), Moscow, Russia

ABSTRACT

The current stage of the information revolution is characterized by a large increase in the information being stored on computers. By some estimates, this volume of information doubles every 9 months. Important and responsible decisions are being taken on the basis of computer information and information technology. People make decisions that depend on the information to which they have access. On the other hand, this raises an ensemble of problems and creates a real threat to information safety, in the sense of dangers resulting from incomplete, inauthentic and untimely information. Consideration of information safety from this point of view necessarily means resolving the following problems.

ENSURING OPERATION OF THE FREE INFORMATION MARKET

Restricting the spread of information provides broad possibilities to manipulate human behavior. Presenting information from a specific point of view allows the owner of the information flow to bias people to his own profit. Consequently, it is necessary, in a broad sense, to put a stop to the possibility of monopolization of the information media. A particularity of modern times is the huge role played by the Internet, which allows the creation of information resources in the most democratic and independent way. It is thus possible to draw the conclusion that comprehensive Internet expansion is one of the ways to increase the level of information safety.

THE DANGER OF USING INFORMATION TECHNOLOGY IN SUCH BRANCHES OF HUMAN ACTIVITY AS THE STOCK MARKET

Over recent years, financial crises have occurred repeatedly (the Asian crisis and others), as well as sharp collapses in the stock market (the dot.com companies crisis). There are many hypotheses as to the cause of these crises. From our point of view, the main reason for these crises is information technology. The stock exchanges define the main direction of production development; success on the stock exchange is therefore expected to come from correctly identifying the most promising directions of technological development and the values (the velocities of motion) along those directions. During recent years, fantastic tools have appeared in network communications and have become readily available, bringing the achievements of financial mathematics and the intellectual help of computer systems to every broker. Since these scientific and technical tools are practically all alike, all brokers receive the same information on sales results and, armed with the same methods, they take the same decisions. As a result of this positive feedback, collapses occur. If all the passengers on a steamship go to the same side of the boat, the steamship will capsize.
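The steamship point can be illustrated with a toy price model in which all traders see the same signal (the previous return) and act in the same direction, so that the market exhibits positive feedback; the model and its parameters are purely illustrative and are not taken from the paper:

# Toy model of herding: aggregate demand follows the previous return, so a
# feedback gain above 1 amplifies any shock until the price collapses.
def simulate(feedback, steps=60, price=100.0, shock=-1.0):
    prices, last_return = [price], shock          # shock: initial return, in percent
    for _ in range(steps):
        price = max(0.0, price * (1 + feedback * last_return / 100.0))
        last_return = feedback * last_return      # identical traders copy the move
        prices.append(price)
        if price == 0.0:
            break
    return prices

print(round(simulate(feedback=0.5)[-1], 2))   # gain < 1: the shock dies out (~99)
print(round(simulate(feedback=1.5)[-1], 2))   # gain > 1: runaway collapse (0.0)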
METADATA SYSTEMS DEVELOPMENT AS A METHOD OF INCREASING THE VOLUME OF INFORMATION AVAILABLE FOR PROBLEM SOLVING

It is necessary to develop improved data inventories; to work out integrated databases, including different data types (factographic, spatial, textual, graphical) and metadata; and to standardize metadata and co-ordinate programs and nations in this field. Proposals are given for organizing metadatabases on several levels (local integrated databases, static and dynamic pages of Web sites).

PROVIDING INFORMATION FOR EMERGENCY MANAGEMENT

Emergency situations caused by natural or anthropogenic factors, including terrorism and problems such as SARS, occur often in our lives. What is characteristic of the studies presented in the report is a systems approach to the problem, in which the whole cycle of information handling is examined, from the input flow to the final decision-making. We have named this approach the "Information Modeling Method". The latest achievements in the development of knowledge-based systems (artificial intelligence systems, expert systems) and computer methods for decision-making have made it possible to build systems that reduce the adverse consequences of an emergency by integrating decision-making, management and action under the conditions of an emergency situation.

In the paper, we consider methodological issues in the use of information technology - Knowledge Based Systems, Data Mining and KDD (Knowledge Discovery in Databases) technologies - for the development of multidisciplinary integrated methods and algorithms for the efficient use of large volumes of information. Methods for developing knowledge-based decision-making systems for use in emergency situations are considered. The proposed methodologies form part of a multidisciplinary approach to the creation of an intellectual decision support system that allows the realization of efficient methods in the field of emergency management, a field characterized by a large volume of information to be analyzed, weakly formalized inference procedures for decision-making, and difficulty in using traditional multi-criteria optimization methods. Emergency situations such as terrorism, high water and floods, which have created the most problems recently, require the further development of existing methods of decision-making and management.

INFORMATION DANGER AND SAFETY AS A RESULT OF THE ACCUMULATION OF INFORMATION ON PEOPLE'S BUSINESS AND PERSONAL ACTIVITIES

As a result of active informatization, a huge volume of personal information has accumulated in computers:
- Concerning financial and bank activity;
- In telephone companies and other organizations.

This information can, on the one hand, be used in criminal investigations. On the other hand, unless a way is found to provide for its secrecy and safety, the possibility remains that this information may be used maliciously against clients.
INCREASING SAFETY LEVELS THROUGH THE USE OF INTELLECTUAL TOOLS FOR PROCESSING LARGER ARRAYS OF INFORMATION

There are methodological issues to be considered in the systematic use of Knowledge Based Systems, Data Mining and KDD (Knowledge Discovery in Databases) technologies for the development of multidisciplinary integrated methods and algorithms for the efficient use of large information volumes.

THE INTERNATIONAL EMERGENCY MANAGEMENT SOCIETY (TIEMS) ACTIVITY

The International Emergency Management and Engineering Society was created with the purpose of bringing together users, planners, researchers, managers, response personnel and other interested parties to exchange information on the use of innovative methods and information technologies to improve our ability to avoid, mitigate, respond to, and recover from natural and technological disasters. Since 1996, TIEMS conferences have all engaged these important and far-reaching subjects of emergency management, and the international conferences are arranged in order to address these complex issues.
ANTICIPATORY DEFENSE: ITS FUNDAMENTAL LOGIC AND IMPLICATIONS

REINER K. HUBER
Institut für Angewandte Systemforschung und Operations Research, Fakultät für Informatik, Universität der Bundeswehr München, Neubiberg, Germany

ABSTRACT

The notion of preemption is one of the core elements of the new National Security Strategy (NSS) of the United States. There, the traditional definition of preemption as the anticipatory use of force in the face of an imminent attack is extended to encompass prevention, i.e., the use of force without evidence of a clear and present danger to national security in order to eliminate a potential threat to the United States before it materializes. While the preemptive use of force is supported by international law and the just war tradition, critics consider the inclusion of preventive military action under the category of preemption worrisome because of its potential implications for international order. Therefore, the question arises as to the conditions under which preventive military action would be the only alternative for protecting one's security, in which case it would represent an act of anticipatory defense.

In this paper an attempt is made to separate the discussion of anticipatory defense from the discussion of the NSS by analyzing a simple mathematical model of threat perception, in order to investigate the fundamental military-strategic logic, and some of the problems, underlying the concept of anticipatory defense in general, regardless of whoever may consider its implementation. The model assumes that threat assessment involves a three-stage process of assessing the aggressive intentions of the party in question, its military risk attitude, and the capability of one's own defense in case of an attack. An analysis of the model shows that it is the existence of weapons of mass destruction (WMD) and stealthy means and mechanisms for their delivery - such as terrorists, or ballistic missiles (BM) in the absence of an effective anti-ballistic missile (ABM) system - that has the potential for bringing about conditions in the real world that leave no military alternative to anticipatory defense. In order to minimize the fallout for international order and civilian populations, the legitimacy of the preventive use of force should be made contingent on at least three conditions being met: (1) clarity of protective/defensive purpose; (2) capability to keep collateral damage to a minimum; (3) obligation to restore whatever damage the operation may have caused.
INTRODUCTION

The notion of preemption is one of the core elements of the new National Security Strategy (NSS) of the United States. There, the traditional definition of preemption as the anticipatory use of force in the face of an imminent attack is extended to encompass prevention, i.e., the use of force without evidence of a clear and present danger to national security in order to eliminate a potential threat to the United States before it materializes. In her Wriston Lecture, delivered to the Manhattan Institute on 1 October 2002, Condoleezza Rice pointed out that the NSS considers preemptive military action only after all other means, including diplomacy, have been exhausted: "Preemptive action does not come at the beginning of a long chain of effort. The threat must be very grave. And the risks of waiting must far outweigh the risks of action". [Ric 02]

Nevertheless, the preventive actions taken by the United States in Afghanistan and, in particular, Iraq have led to considerable irritations in transatlantic and, one should add, inner-European relations as well. Whatever may have motivated some of Europe's leaders to denounce U.S. actions against Iraq, public response suggests that most Europeans did not regard the threat as very grave and, therefore, saw the preventive action simply as a "war of aggression" rather than an act of anticipatory defense. In the public debate before and during the war, attempts to explain the rationale underlying the concept of preemption were mostly met by assertions about presumed American motives and inappropriate historical analogies.

Therefore, an attempt is made in this paper to separate the discussion of anticipatory defense from the discussion of the NSS by analyzing a simple analytical model of threat perception, in order to investigate the fundamental military-strategic logic and some of the problems underlying the concept of anticipatory defense in general, regardless of whoever may consider its implementation.

In considering the analysis presented below it should be remembered that analytical models represent more or less abstract replications of systems and/or processes in the physical world. They cannot capture, explicitly and in detail, the complexity of systems and processes that are essentially social in nature, such as anticipatory defense, which involves not only military-strategic and technological dimensions but a host of ethical, legal, political and economic dimensions as well. Nevertheless, analytical models are well suited for analyzing certain fundamental aspects of real-world issues and their implications. To this end, it may not even be necessary to explicitly model some of the processes involved, as long as the model's variables capture them implicitly and can be estimated on the basis of empirical evidence or expert judgment in case the model is used to perform computational experiments. Most critics of the use of analytical models for the analysis of social systems and socio-political processes tend to forget that it is not the model that produces results but the analyst who uses the model as one tool among others.
Besides, critics are rarely aware of the fact that the so-called verbal analysis methods they favor are based on models as well, albeit unstructured ones residing in the minds of analysts, who conjure them up either on the basis of historical evidence or in the context of a specific contemporary issue in the real world. In contrast to computational experiments with formal models, the results of mind experiments are not reproducible; therefore, conclusions may hardly be generalized and tend to be anecdotal in nature. In addition, mind models do not permit a clear separation of facts from value judgments, which is one reason why debates on strategic and societal issues often get
caught in emotional ideological exchanges, while a detached discourse of pros and cons based on reproducible data would be needed to qualify value judgments.

THREAT PERCEPTION MODEL

The threat perception model analyzed in this paper was first proposed by an international study team commissioned by NATO to investigate hypotheses related to conventional military stability in Central Europe. The core question was how military organizations in a region must be sized, structured, equipped, deployed, and operated so that no party to the regional international system has a reason to perceive its security to be threatened by the other parties. Particular attention was to be paid to strengthening crisis stability between NATO and the Warsaw Pact by uncovering conditions that might provide either side with military incentives for the preemptive use of force in a crisis [RSG 18 and Hub 96].

The model assumes that the threat assessment of a party - a state or an alliance - vis-à-vis another party implies a three-stage process that may be captured by a simple binary scheme as depicted in Figure 1. Accordingly, the military situation between two parties is considered stable if unilateral assessments by each of the two parties arrive at the conclusion that the other party, denoted by X, either:
- Had no intention to attack, or
- Was not inclined to take high military risks, or
- Could be repelled if it were to attack.

Of course, it goes without saying that in most cases threat assessments will not arrive at definite answers - yes or no - as suggested by Fig. 1. Rather, they must be expected to be characterized by some degree of uncertainty that may be expressed in terms of probabilities (defined as values between 0 and 1), as shown in Figure 2.

Fig. 1: Threat Perception vis-à-vis Party X. (Binary decision tree over the questions: intent to attack by X; military risk aversion of X; success of defense against X; outcomes: no attack, attack repelled, unstable situation.)
Fig. 2: Probability Tree of Threat Perception vis-à-vis Party X. (The same tree as Fig. 1 - intent to attack by X; military risk aversion of X; success of defense against X; outcomes: attack repelled, unstable situation - with the branches weighted by probabilities instead of binary answers.)
If it is assumed that the security requirements of the party assessing the threat posed by party X are satisfied if the probability of the unstable state does not exceed a certain threshold value K, then the following condition holds:

    1 - P_I G(P_A)(1 - W) ≥ K,                                  (1)

where

    G(P_A) = 1 if P_A ≥ P*;  G(P_A) = 0 if P_A < P*,            (2)

with the notations:
    K   = threshold probability satisfying security requirements
    P_I = probability that party X intends to attack
    P_A = probability of an attack by party X succeeding
    P*  = minimal attack success probability required by party X (risk threshold)
    W   = probability of successful defense against party X.

Condition (1) is satisfied if either P_I = 0 (no aggressive intentions are perceived on the part of X) or G(P_A) = 0 (the military risk associated with an attack by X is perceived to be too high, since P_A < P*). If, however, P_I ≠ 0 and X must be regarded as not being risk-averse (G(P_A) = 1) - i.e., there is uncertainty about the value of the risk threshold P* demanded by X, or the value of P* is low enough to satisfy the condition P_A ≥ P* - the minimal probability of repelling an attack by X that meets the security requirements vis-à-vis X results from solving equation (1) for W:

    W ≥ 1 - (1 - K)/P_I = W*.                                   (3)

Equation (3) holds as long as P_I > 1 - K; otherwise W* = 0, i.e., no defense requirement arises.
Fig. 3 shows that the threshold value W* is largely insensitive to variations of P_I if security requirements are high (e.g., K ≥ 0.95). Thus, in that case there is no need for the notoriously difficult assessment of an opponent's intentions by assigning a value to the probability P_I. Moreover, if a party were certain that it would be attacked by X if an opportunity arose (i.e., P_I = 1), condition (3) simplifies to

    W ≥ W* = K.                                                 (4)
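A small numerical sketch of equation (3) illustrates the insensitivity of W* to P_I at high security levels K (cf. Fig. 3); the grid of K and P_I values is chosen here purely for illustration:

# Minimum defense success probability W* = 1 - (1 - K)/P_I (equation (3)),
# floored at zero when P_I <= 1 - K, i.e. when no defense requirement arises.
def w_star(p_intent, k):
    return max(0.0, 1.0 - (1.0 - k) / p_intent)

for k in (0.80, 0.95, 0.99):
    row = " ".join(f"{w_star(p, k):.2f}" for p in (0.2, 0.4, 0.6, 0.8, 1.0))
    print(f"K = {k:.2f}: W* = {row}")
# At K = 0.95, W* only moves from 0.75 to 0.95 as P_I ranges from 0.2 to 1,
# and at P_I = 1 it reduces to W* = K, as in equation (4).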
As envisaged at the time, conventional stability required that the threat assessments of both parties - NATO and the Warsaw Pact - arrive at the conclusion that the mutual conventional force structures and operational/tactical doctrines met condition (1). Therefore, it is safe to state that the military situation in the Central European region was not very stable during most of the Cold War, in particular in crisis situations. The apparent Cold War stability was ultimately due to the nuclear arsenals of the United States and the Soviet Union, which carried the risk that a military conflict between the two camps might escalate, ending in the mutual destruction of both sides.
Fig. 3: Minimum Defense Success Probability W* vis-à-vis state X required at a given security level K as a function of the estimated probability PI that X harbors aggressive intentions.

CONCLUSIONS FROM THE MODEL

Even though the model was originally designed to develop easily reproducible criteria for assessing conventional stability in a well-defined bipolar context, it does, nevertheless, capture the essential ingredients for assessing, in numerical terms, the degree of potential threat posed by any type of antagonist, provided there is reasonable historical evidence to estimate the military risk threshold P*, and the dependent variables PA and W in condition (1), on the basis of expert judgments, historical evidence, and computational or simulation experiments.v Short of numerical
results obtained from such sources in the context of a defined threat, however, the model does provide a structure for deriving the principal options that parties have for meeting their security requirements given the nature of the international security environment.

In a cooperative international environment, the majority of parties share the common objective of acting in a manner that is conducive to international stability. Cooperative security environments are characterized by parties which are either not perceived to harbor any aggressive intentions (PI = 0) or agree on confidence-building measures and negotiate, and conclude, formal agreements on arms limitations and force deployment constraints, so that the security requirements of all parties, captured by the conditions W ≥ W* (security criterion) or PA < P* (sufficiency criterion), are met.vi In an uncooperative security environment (PI → 1), however, parties are forced to satisfy their security requirements unilaterally. And if the environment is outright hostile, parties are inclined to perceive many an action on the part of presumed opponents as an attempt to increase their offensive potential vis-à-vis themselves to the degree that PA ≥ P* and W → 0. Such a perception by parties who consider themselves potential victims of aggression would be justified if there is strong evidence that the respective opponents: are building up their arms inventories; are developing means and procedures for surprise attack; and are pursuing policies to reduce the victim's defense potential by subversion and/or through policies of alienating allies from the victim.

Excluding the option of appeasement, the potential victim's response options are aimed at strengthening its own defensive potential so that W ≥ W* and, at the same time, reducing the offensive potential of opponents to the degree that PA < P*. There are essentially five principal categories of options for responding to perceived threats: (1) limiting the offensive potential of opponents through agreements, with the governments of like-minded countries, on export limitations of crucial technologies and arms (arms export controls and counter-proliferation policies), including arms embargos; (2) increasing the active defense potential for defeating threats in a reactive manner; (3) reducing the vulnerability of the active defense potential and of high-value targets as well as populations; (4) deterrence by threatening punishment in case of aggression; and (5) preemptive/preventive counter-force attacks to neutralize and/or destroy threat potentials.

The first three categories include the traditional options of defense-oriented (status quo) parties for responding to emerging threats. They are legitimized by international law and Article 51 of the Charter of the United Nations and, therefore, are not controversial, at least in principle. However, their implementation to a degree that satisfies the security criteria may involve economic costs and expenditures that the parties cannot or may not be willing to shoulder.vii Moreover, they may be of limited effectiveness, especially in the case of threats posed by non-state actors and the governments of rogue states. This is also true for deterrence, because essential prerequisites for it to work seem to be missing in those cases.viii That leaves preemption and prevention in order to mitigate or eliminate the threat before it becomes effective.

The difference between preemptive and preventive
response is that the former is directed against an immediate threat, i.e. a clearly identified threat in the process of being activated for attack, and the latter against an emerging threat that may become an immediate threat sometime in the future unless dealt with now. Both make good strategic sense and are in many cases superior in cost-benefit terms to not acting or waiting until an attack has begun. However, while preemption is supported by international law and the just war tradition, prevention has no basis in current international law.ix Thus, since it implies both preemptive and preventive use of force, the concept of anticipatory defense is bound to cause controversy.

Therefore, an important question to be answered is under which circumstances anticipatory defense must be considered the only option that a potential victim has left to meet its security requirements and bring about some degree of stability. Figure 2 suggests that this would be the case if the values of both the probability of successful defense W and the attacker's risk threshold P* were to become small, which implies that the value of the victim's security threshold K must be small as well (K → 0) in order to satisfy condition (1). In other words, the threat perception model tells us that there is no security for the victim when W → 0 and P* → 0. The victim is prepared either to live with the unstable situation and surrender once the threat materializes, or to use force in a preventive manner in order to reduce the threat potential to a degree that its security requirements are met.

If we interpret an attack not as large-scale aggression of one state or alliance against another - as was implied when the model was originally formulated - but as a massive attack against soft targets such as the one on September 11, 2001, it is the existence of weapons of mass destruction (WMD) and of stealthy means and mechanisms for their delivery - such as terrorists, or ballistic missiles (BM) in the absence of an effective anti-ballistic missile (ABM) system - that have the potential of bringing about conditions in the real world which correspond to W → 0 and P* → 0 in our model. Therefore, a closer look will be taken at both threats.
INTERNATIONAL TERRORISM

It is not individual terrorists or groups of terrorists that would be the object of preemptive and preventive defense by military means, but their staging areas, supply and training infrastructure, and the shelter available to them in weak and failed states, as well as in states and factions that support them. The trends of global demographic, environmental, economic, societal, and technological developments - as identified, for example, by the United Kingdom's "Project Insight" [Ham 99] or Germany's project "Armed Forces, Capabilities and Technologies in the 21st Century"x - suggest that the terrorist problem will most likely worsen, as the number of weak and failing states must be expected to grow. This is because, in addition to Islamic fundamentalism, a fundamental change in warfare has taken and still is taking place. Wars between states, as in the first part of the last century, are increasingly being replaced by intra-state (civil) wars between ethnic and religious factions and groups interested in ongoing conflict for economic reasons, and by international terrorism and organized crime. The observation that only 15-20 percent of all wars and warlike conflicts since 1945 were conducted between states has led the political scientist Herfried Muenkler to propose the hypothesis of a "privatization of war" [Mue 01]. As a consequence, the world will increasingly witness situations characterized by what Johannes Kunisch refers to as "small wars", similar in nature to those fought in the European Middle Ages with
feudal levies and short-term contract mercenaries (condottieri) before standing armies emerged [Kun 73].xi Thus, the state as the only legitimate party for conducting war is about to be replaced by non-state actors such as warlords, guerrilla groups, and criminal organizations which live on war and, therefore, have no interest in ending war and violence. Muenkler considers the terrorist attacks of September 11 on the World Trade Center and the Pentagon as dramatic manifestations of that trend, the consequences of which may be dire indeed if weapons of mass destruction (WMD) come into play. Unless appropriate means of prevention are developed, failing and weak states like the Taliban's Afghanistan will be used and even hijacked by such actors to provide staging grounds for their operations. And rogue states may not resist the temptation to supply such groupings with know-how and WMD, or even employ them, unwittingly or covertly, as mercenaries in the hope of avoiding discovery and evading retaliation [Hub 03].

In order to cope with these trends, preemptive and preventive military intervention to eliminate WMD and deny terrorists the use of weak and failing states as platforms for their purposes must be considered the immediate mission of anticipatory defense. However, the situations in Afghanistan and Iraq both underscore that post-war "nation building", i.e. the provision of security and of technical and financial assistance for rebuilding the economic and social infrastructure of the target countries, is an important element of military intervention operations within the framework of a preventive defense strategy.xii Also, intervention operations that are ultimately aimed at providing stability in a region would offer little opportunity for international legal controversy, because their objectives are, to a large degree, humanitarian, since the target states are either unwilling or too weak to protect the elementary human rights of their citizens.xiii

Another important element of a preventive defense strategy in the face of WMD was proposed by Paul Davis in the form of a credible declaratory policy threatening anyone who even tolerates WMD-related terrorism: "Not only active supporters (should be punished), but even those states and factions that merely tolerate the terrorists or indirectly facilitate their acquisition of WMD. The purpose would be to so alarm heads of state and sub-state organizations that they would work actively to get rid of elements that might bring destruction upon them." [Dav 02, p. 40]. Davis also concludes that terrorists themselves, while not to be individually deterred in the traditional sense, may be "influenced" to eventually give up by forcefully and relentlessly holding at risk and attacking, not necessarily by military means, what is dear to them. To this end he recommends developing an orchestrated "Broad-Front Strategy" that cuts across all of the normal boundaries of war - military, diplomatic, economic, and law enforcement.

BALLISTIC MISSILES

Other than the five declared nuclear powers, nearly 25 states have acquired, or are about to acquire, ballistic missiles (BM) and/or the know-how and technology required for their domestic production ([Rum 98], [Wil 00], [Sch 01]). Most notable among them are India, Pakistan, and the rogue states North Korea, Iran, Syria and Libya.
Even though today’s delivery accuracy of ballistic missiles produced in emerging missile states may be rather low, in conjunction with nuclear, bacteriological, or chemical warheads they represent a formidable threat to urban targets and military facilities alike. Most of the current arsenals of rogue states consist
of SRBMs (Russian Scuds and Scud improvements) with ranges up to about 600 km. However, there are several indigenous IRBM development programs, such as the North Korean Taepo-dong and the Iranian Shahab. If launched from North Africa, the 2,000 km range (presumably operational) Taepo-dong-1 and the Shahab-4 would be sufficient to cover most of Southern Europe, the Balkans, and Turkey. The IRBMs Taepo-dong-2 and Shahab-5, expected to become operational about 2005, will have a range of 5,000-6,000 km, capable of reaching, from the Middle East, targets anywhere in Europe, Western Siberia, Central and South Asia, and North and Central Africa. North Korea and Iran are suspected of developing ICBMs with ranges of up to 12,000 km that could become operational within 10-15 years.

From the viewpoint of international law, BM present a particularly challenging problem for anticipatory defense. This is because, in order to preserve its security, a country which perceives itself as a potential victim of missile attacks has only the option of a preventive attack against the BM and/or their WMD warheads and their production and storage facilities - unless, of course, it has deployed an effective ABM system.xiv It must be remembered that an operational BM system represents a threat not only for a particular country, but for the entire region within its range. Which country is being targeted will only be known some time after the missiles have been launched, once their trajectories can be determined. And even if launch preparations were discovered early, and countries were willing to preempt on the suspicion of being targeted, it would probably be too late.

Nevertheless, despite the fact that all of them may, in a not too distant future, be within range of BM from the Middle East, Europeans show little enthusiasm as yet about either deploying ABM systems or considering preventive response as an option. For one thing, besides being rather expensive, the development and procurement of ABM systems is still a controversial issue, even though the debate has abated as the public begins to realize that none of the dire predictions about the consequences of the U.S. withdrawal from the ABM Treaty have materialized so far. On the other hand, an effective preventive response may very likely require new and even more controversial weapons (such as low-yield nuclear weapons) as the production and storage facilities of BM systems, and in particular of their WMD warheads, are relocated into deep underground bunkers.

CONCLUSIONS

From the above analysis we can conclude that anticipatory defense must be regarded as an important, and in many cases the only, effective countermeasure allowing states to provide security for their citizens in the face of observed global trends related to international security and the availability of WMD. However, unless it is exercised in a prudent manner, there is the risk that preemptive and preventive military actions may generate highly adverse consequences for international order. That is why critics suggest that priority should be given to addressing the root causes of terrorism and of the uncooperative behavior of governments, rather than focusing on controversial countermeasures. In terms of the model discussed above, focusing on root causes means trying to reduce the value of the probability PI of the intention to attack.
With a view to the characteristics of terrorist and ballistic missile threats in conjunction with WMD, however, the model tells us that high security requirements on the part of potential victims of attack are not satisfied as long as PI > 0. In other words, feeling secure requires that the root causes and motivations underlying the aggressive attitudes of
threatening parties be practically eliminated. Whether this is at all possible is open to question. In any case, addressing root causes may take significant intellectual, political as well as economic efforts and, above all, time, during which there is no alternative to countermeasures unless societies are willing to live with the risk of becoming victims of WMD. Thus, there is a need to address both countermeasures and root causes, with priorities shifting from the former to the latter as the success of countermeasures becomes evident.xv In addition, the standards of current international lawxvi will have to be modified to account for the non-state and asymmetric threats that states will increasingly face in the future. To this end, the scholar of international law Armin Steinkamm [Ste 03] distinguishes two principal approaches that might be pursued: adapting the 58-year-old Charter of the United Nations to cope with the threat of international terrorism and WMD; or evolving international law based on the practice of preemptive military actions by states which consider such practice as legitimate in fighting terrorism and WMD.

The first approach requires a broad debate and consensus in both the United Nations General Assembly and the Security Council. However, the difficulty with this approach is underlined by the fact that the United Nations has been grappling with the issue of terrorism for close to forty years. So far, the multilateral forum has been unable to agree even on a definition of terrorism. Whether the new issue of WMD will accelerate that debate and bring forth a consensus is not certain. Therefore, it seems reasonable to assume that nations that feel most threatened by international terrorism will prefer the second approach.xvii

Irrespective of the preferred approach, however, it is not unlikely that the process of adaptation will be delayed considerably if no WMD are found in Iraq. Jeffrey Gedmin, the current director of the Aspen Institute in Berlin, suggests that this could even imply the end of the "Bush Doctrine" of preemptive military intervention, because it would be highly unlikely that Congress and the American public would support another preemptive operation unless the issue of the Iraqi WMD threat is resolved [Ged 03]. One can only hope that it will not take an act of massive WMD terrorism before the international community becomes convinced that international law needs to acknowledge the legitimacy of defensive preemption when faced with the threat of WMD, while at the same time not providing a pretext for inter-state aggression. In order to minimize the fallout for international order and civilian populations, this author proposes that the legitimacy of the preventive use of force be made contingent on at least three conditions being met: (1) clarity of defensive purpose; (2) the capacity to keep collateral damage to a minimum; and (3) the obligation to restore material damage caused by military intervention.

REFERENCES

[Arm 93] Armitage, Michael: History of Airpower. In: Dupuy (Ed.): International Military and Defense Encyclopedia. Washington-New York 1993: Brassey's, pp. 82-93
[CBP 02] NATO Code of Best Practice for C2 Assessment. Washington, D.C., 2002: CCRP Publication (www.dodccrp.org)
[Cla 84] Carl von Clausewitz: On War, Book Eight, Chapter Three. Indexed edition, edited and translated by Michael Howard and Peter Paret. Princeton 1984: Princeton University Press
[Dav 03] Davis, Paul K. and Brian Michael Jenkins: Deterrence and Influence in Counterterrorism - A Component in the War on al Qaeda. Santa Monica 2003: RAND
[Ged 03] Gedmin, Jeffrey: Dann wäre die Doktrin von der Präemption tot. Frankfurter Allgemeine Nr. 174, Mittwoch, 30. Juli 2003
[HFL 99] Huber, Reiner K., Friedrich, Gernot and Jaroslav Leszczelowski: A New Paradigm for Estimating Russian Force Requirements? On Tsygichko's Model of Defense Sufficiency. European Security, Vol. 8, No. 3 (Autumn 1999), pp. 101-123
[Hub 96] Huber, Reiner K.: Military Stability of Multipolar International Systems: Conclusions from an Analytical Model. In: Models for Security Policy in the Post-Cold War Era (R.K. Huber and R. Avenhaus, Eds.). Baden-Baden 1996: Nomos Verlagsgesellschaft, pp. 71-81
[Hub 02] Huber, Reiner K.: Ballistic Missile Defense: A Risk for Stability and Incentive for New Arms Races? In: International Seminar on Nuclear War and Planetary Emergencies, 26th Session (A. Zichichi and R. Ragaini, Eds.). Singapore 2002: World Scientific Publishing Co., pp. 71-84
[Hub 03] Huber, Reiner K.: The Transatlantic Gap: Obstacles and Opportunities for Closing it. In: Transforming NATO Forces: European Perspectives (C.R. Nelson and J.S. Purcell, Eds.). Washington 2003: Atlantic Council of the United States, pp. 59-78
[HuS 93] Huber, Reiner K. and Otto Schindler: Military Stability of Multipolar Power Systems: An Analytical Concept for its Assessment, Exemplified for the Case of Poland, Belarus, the Ukraine and Russia. In: International Stability in a Multipolar World: Issues and Models for Analysis (R.K. Huber and R. Avenhaus, Eds.). Baden-Baden 1993: Nomos Verlagsgesellschaft, pp. 155-179
[Kue 99] Kühne, Winrich: Humanitäre NATO-Einsätze ohne Mandat? (Teil I). Reader Sicherheitspolitik IV.3. Bonn 1999: Bundesministerium der Verteidigung
[Kun 73] Kunisch, Johannes: Der Kleine Krieg: Studien zum Heerwesen des Absolutismus. Frankfurter Historische Abhandlungen. Wiesbaden 1973: Steiner
[LAR 02] Lieber, Keir A. and Robert J. Lieber: The Bush National Security Strategy. Foreign Policy Agenda. Washington 2002: U.S. Department of State, pp. 32-35
[Mue 02] Muenkler, H.: Die brutale Logik des Terrors: Wenn Dörfer und Hochhäuser zu Schauplätzen von Massakern werden - Die Privatisierung des Krieges in der Moderne. SZ am Wochenende, Nr. 225, 29./30. September 2002, p. 1
[Pay 97] Payne, Keith: Diplomatic and Dissuasive Options (Counter-Proliferation, Treaty-based Constraints, Deterrence and Coercion). In: Ranger (Ed.): Extended Air Defence and the Long-Range Missile Threat. Bailrigg Memorandum 30, 1997: Lancaster University, Centre for Defence and International Security Studies, pp. 38-43
[Ric 02] Rice, Condoleezza: A Balance of Power that Favors Freedom. Foreign Policy Agenda. Washington 2002: U.S. Department of State, pp. 5-9
[RSG 18] Research Study Group 18 on Stable Defence: Stable Defence - Final Report. Panel 7 on the Defence Applications of Operational Research. Brussels 1995: North Atlantic Treaty Organization - Defence Research Group
[Rum 98] Rumsfeld, Donald H. et al.: Report of the Commission to Assess the Ballistic Missile Threat to the United States. Washington 1998. http://www.fas.org/irp/threat/missiles/rumsfeld/index.html
[Sch 01] Schilling, Walter: Die Proliferation von ballistischen Raketen und Massenvernichtungswaffen. Europäische Sicherheit Nr. 5, Mai 2001, pp. 49-51
[Ste 03] Steinkamm, Armin: Der Irak-Krieg - auch völkerrechtlich eine neue Dimension. Neue Zürcher Zeitung Nr. 112, Freitag, 16. Mai 2003
[Tsy 97] Tsygichko, Vitali N.: A Model of Defense Sufficiency for Estimate of Stability in a Multipolar World. In: Proceedings of the International Seminar on Nuclear War and Global Emergencies, 22nd Session (Goebel, Ed.). Singapore 1997: World Scientific, pp. 168-171
[Wil 00] Wilkening, Dean A.: Ballistic-Missile Defence and Strategic Stability. Adelphi Paper 334. London 2000: The International Institute for Strategic Studies
FOOTNOTES
i Extended version of an invited paper presented at the HSS Workshop "Anticipatory Defense: Basic Principles, Regional Priorities; Military Implications", Kreuth, 27-28 May 2003.
ii A well-known and easily understood example of an analytical model of this kind is the second (square) law of Lanchester (a compact statement is sketched after these footnotes). It represents the closed-form solution of a system of two coupled differential equations describing the attrition suffered in a firefight between two homogeneous military units. By introducing victory conditions, the solution illustrates the role of numerical superiority in battles between symmetric opponents which, in turn, underscores the importance of maneuver to concentrate forces, even though maneuver or movement of units is not modeled by the differential equations. Similarly, by properly rearranging the variables of the closed-form solution, and entering empirical values for the breakpoints of attacker and defender and the relative defense advantage, one obtains analytical proof of the well-known rule that, in order to be successful, a direct attack on defense positions requires an initial attacker-defender force ratio of at least 3:1 (see [Hub 80], p. 146).
iii Note that verbal analysis methods are not necessarily the same as soft analysis methods, which are based on expert opinion, judgment, and interaction obtained directly or through role playing. While soft tools may be sufficient to provide an answer in some cases, most often they provide input for quantitative analysis tools including analytical models (see [CBP 02], p. 188).
iv NATO analysts concluded that their vast numerical superiority and force structure assured WP forces a probability of success when attacking NATO's fairly static "Forward Defense" that exceeded the high operational/strategic risk threshold (P* = 0.9) of the traditionally risk-averse Soviet forces. On the other hand, Soviet analysts interpreted the numerical inferiority of the heavy-armor ground forces of NATO, in conjunction with the small operational depth of NATO territory, as an indication that NATO was preparing for a preemptive attack in a serious crisis. In their assessment, surprise and moving forward to gain the operational depth for an efficient employment of armor against a numerically superior enemy, east of the demarcation line, were the essential elements of "Forward Defense". Thus, a high degree of crisis instability has apparently persisted during the Cold War, because the Soviets had come to the conclusion that they must be able to preempt NATO's presumed preemption in a serious crisis (statement of a Soviet participant at a 1986 Pugwash meeting in Starnberg).
v The reader is referred to Huber et al. [HFL 99] for an example of estimating - in response to an assessment of the security risk associated with NATO enlargement by Professor Vitaly Tsygichko of the Russian Academy of Sciences [Tsy 97] - the relative degree to which Russia's national security may be impaired by regional threat potentials along the periphery of its territory.
510 "' The security criterion is appropriate in situations when there is no evidence to assume that the military risk aversion of the potential opponent in question is high. The sufficiency criterion is adequate if there is overwhelming evidence that the opponent in question is highly reluctant to take military risks. Huber and Schindler have shown that it is impossible to satisfy every party's security requirements in a multi-state international system on the basis of the security criterion if the situation is characterized by distrust among parties. However, military stability of a multi-state international system is entirely feasible on the basis of the sufficiency criterion especially when nonmilitary relationships among the risk-averse states are cooperative [HuS 931. P,= 0 for all parties characterizes an international system which has overcome the use of military force for settling conflicts between them, because the political, social and economic root causes and motivations for going to war have disappeared, such as in North America and the regions of Northern and Western Europe. vii For example, arms control suffers from the difficulties of achieving agreement among potential supplier states on technology limitations. These difficulties are related to: economic competition and differences in political goals and interests; problem of enforcing compliance with agreed-upon limitations if rewards of non-compliance exceed possible sanctions by some significant margin; and the declining efficacy of controls as the technologies spread beyond the group of signatories to the acontrol agreement [Pay 971. n'' Deterrence is considered to be inherently unreliable vis-A-vis proliferating states in the post-Cold War era. It is difficult to establish reliable rules for deterrence with governments of proliferating states whose cultural and social environment are not well understood in most cases, and whose behavior is evidence of their unfamiliarity with the essential prerequisites of effective deterrence which emerged during the Cold War: rational behavior of antagonists; mutual understanding of motives; effective communication between antagonists; credibility of threats [Pay 971. The NNS does not distinguish between preemptive and preventive use of force. Rather, it extends the meaning of preemption to encompass military action "even ifuncertainty remains as to time and place of attack', i.e. preventive use of force. Because of its possible implications for international order, critics consider this aspect of the NNS as worrisome [LAR 021. ' The "Zentrum fuer Analysen and Studien der Bundeswehr" (ZAS - Center for Studies and Analyses of the Bundeswebr) was commissioned by the German Ministry of Defense in 1999 to analyze what type of military would required in the 21'' Century. It was supported by experts of the relevant scientific fields as well as industry and the military. A synopsis of the findings is available from ZAS. See also the treatise by Carl von Clausewitz on the aims a belligerent party adopts, and the Iesources it employs [Cla 84, pp.585-5943. xii This is very likely what German Defense Minister Struck had in mind when he explained Germany's new defense policy guidelines by stating that Germany is also defended in the Hindu Kush. 
'"' Among others, this has been the case in Somalia, Bosnia, Ruanda, and Haiti, where interventions were mandated by the UN Security Council upon request by the USA and others under Article 39 of the UN Charter, because the situations there was considered to threaten international peace. Even though the situation in Kosovo was similar, the legal controversy surrounding the NATO intervention in 1998 was due to the fact that it took place outside the UN framework, because Russia and China threatened to veto a resolution for military intervention. While the USA and UK took the position that the intervention was justified because the Security Council had acknowledged that international peace was threatened [Kuen 991. Unless armed with WMD-warheads, BM are not very effective weapons. Therefore, production and storage facilities for WMD-warheads are high priority targets for preventive attack. The fust and only successful operation of this kind in history was carried out in 1981 by Israel on the Osiraq reactor nearing completion at the Tuwaitta nuclear research center near Baghdad. Camed out with devastating precision by 14 attack aircraft, the raid had been carefully planned and rehearsed for some time [Arm 931. The international reaction at the time was quite negative. However, today one wonders what course history would have taken had this raid failed as Iraq would have been in possession of a few nuclear weapons when it occupied Kuwait in 1991. '"The borderline between addressing countermeasures and root causes is quite fuzzy. As was pointed out before, preemptive military intervention as a countermeasure includes both the neutralization of the threat and the provision of security and technical and financial assistance for rebuilding the infrastructure of the target countries. In practical terms, shifting priorities might imply reducing the military presence in favor of nonmilitary assistance as the security situation in the region improves. xvi It protects states and governments, even those that deprive their citizens of elementary human rights.
xvii In fact, Steinkamm suggests that the United States may have fought the war against Iraq as a first step in advancing a new international law that eventually recognizes U.S. convictions about the legitimacy of defensive intervention.
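For readers unfamiliar with the model invoked in footnote ii, the following compact statement of Lanchester's square law (our notation; footnote ii itself gives no formulas) shows how the closed-form solution and the premium on numerical superiority follow from the two coupled attrition equations:

```latex
% Lanchester's square law (aimed fire): x(t), y(t) are the opposing
% force strengths; a, b > 0 are the per-shooter kill rates of Y
% against X and of X against Y, respectively:
%   dx/dt = -a*y ,   dy/dt = -b*x .
% Dividing one equation by the other and integrating gives the invariant
\[
  b\,\bigl(x_0^{2} - x^{2}\bigr) \;=\; a\,\bigl(y_0^{2} - y^{2}\bigr),
\]
% so side X annihilates side Y (y reaches 0 while x > 0) if and only if
\[
  b\,x_0^{2} \;>\; a\,y_0^{2}.
\]
% Fighting power thus grows with the square of initial numbers but only
% linearly with weapon effectiveness, which is why concentrating forces
% pays even though movement itself is not part of the model.
```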
REPORT ON THE ACTIVITIES OF THE DESERTIFICATION PMP

ANDREW WARREN
University College London, London, U.K.

The WFS Desertification PMP at the Erice meeting in 2003, by including Sicilian scientists, became the prototype of a WFS Regional Resources Commission for Sicily. The meeting was the second of this prototype Commission, the first having taken place in Rome in June 2003. Both meetings focused on the preparation of a project. The outline of this project, therefore, is the substance of the report of the PMP for 2003.
A PROJECT TO PROVIDE BETTER INFORMATION FOR MANAGING DESERTIFICATION IN SICILY

THE PROBLEM

There is evidence of actual and potential loss of income, biodiversity and amenity, in short of unsustainable development, in Sicily, caused by the loss, destruction and impoverishment of soils, the salinisation of waters and soils, and the loss of vegetative cover (among other related problems). This is occurring, and may, and probably will, accelerate in the face of climatic and economic change.

ISSUES IN THE DESIGN OF AN APPROPRIATE PROJECT

Sicily has a complex environment: geologically, climatically, biologically, economically, and culturally. The meeting heard an introductory lecture on these issues from Professor Carmelo Dazzi. But we were building on many strengths: an established scientific community, which has already begun to address many of these problems, and some good databases on soils, vegetation and erosion (among other things). In short, Sicily is an excellent place for the first Regional Resource Commission on desertification. The need is to bring these resources into an integrated, widely available, easily analysed format: which supports diverse modelling approaches, aids analysis and synthesis, and overcomes scaling issues; within which relationships (environmental and economic processes) are properly understood; which is focused on problems as seen by farmers, environmental groups, and the administration; and which supports decision making.

THE PROPOSAL

The proposal consists of three essential, closely connected elements:
Element I: Databases

This will be built by collating and integrating existing databases from Sicily with data from new and existing remote sensing. The relevant software has been improving very quickly, as have the range and precision of data available from remote sensing, the ability to integrate modelling, GPS and GIS, and systems of Internet delivery. In the spirit of Erice, these data will be held in a transparent system that will be made publicly available, with powerful software to interrogate them. The system will preferably be dispersed throughout Sicily, encouraging participation and ownership, and will be available through the internet to land users, NGOs, government organisations, and University, State and National Governments, in other words, to anyone interested in the Sicilian environment and its management. This is an extremely important and innovative part of the proposal. It is an essential precursor to the third element (see later). It could be the basis for planning in the other WFS Regional Resource Commissions for Sicily. We are fortunate that there has already been experimentation with this approach for Sicily, which provides many lessons about the choices needed to improve information systems of this kind.

Systems like these have very great strengths. Relationships between different kinds of data can be subjected to almost infinite kinds of analysis. These analyses can reveal areas that seem more damaged, or more vulnerable; they allow Island-wide comparisons; and so on. But they do not provide a full understanding of the interrelations and processes that have created the patterns they show. One of their roles is to provide hypotheses about the environmental processes that underlie the patterns they reveal. Sound prescriptions need a deeper understanding of these processes. Furthermore, without more control, IT systems may be providing data that are irrelevant to the problems, or not providing data that could be relevant. Hence two other elements are needed:

Element II: Site Studies

These are analyses of processes at a finer scale. They are expensive, and yield reliable results in years, not months. There cannot be many of them. To be useful in controlling the management of the information systems of Element I, and in order not to postpone environmental prescriptions, they need to begin at an early stage. We propose about four sites, selected from a prioritised list, chosen to represent the processes that create problems, and of sizes and shapes appropriate to those problems (salinisation, soil erosion, soil deterioration, ...). The data and models developed for the processes at these sites, when suitably quality-controlled, will be incorporated in the larger database and made publicly available, to form the basis of debates about evaluation and management. The site studies will include measurements of soil, hydrological, sediment-transport, ecological, meteorological, and socio-economic processes. As with the IT element, we are on the cusp of many new developments for site studies, as in instrumentation, sampling, socio-economic modelling and so on. But scientific enthusiasm about site studies, like that about databases, needs to be kept in check. A third element is vital and must be integral.
Element III: Consultation

Local organisations, public and private, and government bodies will contribute to a management panel that will make decisions about the foci of the project, and will provide input on the choice of data in the database, on the construction of hypotheses and models, and, ultimately, on the prescriptions. The element will also include studies of the legal, sociological and economic means for moving Sicily towards greater sustainability. It will develop a monitoring system for environmental problems, which will allow early warning and the design of better response systems.

PARTICIPANTS

Soil, hydrological and remote-sensing scientists in the Universities of Palermo, Catania and Messina (we have already met many of these, either in Rome or in Erice); these include Professors Dazzi, Rossi, Benfratello, Raimondi, Pumo and Santoro. Representatives of agricultural and environmental NGOs, and of government departments associated with these issues (as yet not identified). Existing members of the WFS Desertification PMP (and others co-opted according to need); in Erice in 2003 these were Paul Bartell (USAID), Gray Tappan (USGS), Larry Tieszen (USGS), Andrew Warren (University College London) and Aaron Yair (Hebrew University of Jerusalem).

TIME FRAME

Element I to begin on January 1, 2004, with the production of a publicly available, integrated database system by April 2004, ready for a meeting of representatives of all elements to review the progress of the integrated data system, debate priorities, and choose study sites and approaches. Implementation of study sites by December 2004. Twice-yearly meetings of representatives from all elements to review progress and redirect the next phase. Prescriptions becoming available from December 2006. Major project review, December 2007. We believe we have outlined a prototype for many other regional resource commissions in the desertification area. It uses state-of-the-art information and monitoring systems, and the wide, informed consultation that these allow.
LIST OF PARTICIPANTS
Paul Bartell [email protected]
Guglielmo Benfratello geufra@idra.unipa.it
Carmelo Dazzi dazzi@unipa.it
Domenico Pumo [email protected]
Salvatore Raimondi sraimondi@unipa.it
Giuseppe Rossi grossi@dica.unict.it
Mario Santoro [email protected]
Gray Tappan tappan@usgs.gov
Larry Tieszen tieszen@usgs.gov
Andrew Warren [email protected]
Aaron Yair [email protected]
REPORT OF PMP ON ENDOCRINE DISRUPTING CHEMICALS

STEFANO PARMIGIANI
University of Parma, Italy

Other members of the panel in attendance: Lou Guillette, Pete Myers, Shanna Swan, Fred vom Saal

The PMP on Endocrine Disrupting Chemicals met during the session on Planetary Emergencies in Erice, August 2003. This was the first meeting of this new panel.

PURPOSE OF THE PMP

There are contaminants present in the environment that can disrupt the functioning of critical regulatory molecules required for normal development of the brain and other organs, with the result that permanent disruption can occur even at very low levels of exposure. This poses a threat to the health of individuals and to the stability of wildlife and human populations throughout the world.

FUTURE ACTIVITIES OF THE PANEL

During the next year the panel proposes two workshops:
1. The panel proposed a workshop on the mechanisms of endocrine disruption, to be held in May 2004. The topics covered at the workshop will be: epigenetic modification of gene activity by EDCs; different effects of exposure to EDCs on gene activity during and after development; and the consequences of fetal exposure to EDCs for adult disease and abnormal organ function.
2. A second workshop was also proposed, for August 2004. This workshop would bring together a group of scientists from Italy and members of the PMP to develop a collaborative research program. Since little research on EDCs has been conducted on people or wildlife in Italy, the aim of the research would be to compare the health of people and wildlife living in a habitat in Italy with high exposure to EDCs against that of comparable populations of people and wildlife in a habitat with much lower exposures.
14. LONG-TERM STEWARDSHIP OF HAZARDOUS MATERIAL WORKSHOP
MONITORING AND STEWARDSHIP OF LEGACY NUCLEAR AND HAZARDOUS WASTE SITES WORKSHOP

STEPHEN J. KOWALL
Idaho National Engineering and Environmental Laboratory, Idaho Falls, USA

LORNE G. EVERETT
Chancellor, Lakehead University, Thunder Bay, Canada

PROBLEM STATEMENT

With the demise of the Soviet Union and the declassification of Cold War records, the scale of the nuclear weapons waste legacy in the United States was determined to be enormous. Even after cleanup of the most hazardous materials, government stewards will still be responsible for ensuring the protection of environmental resources and humans from residual contamination that cannot be dealt with because of technology challenges or costs. Estimates of the amount of nuclear and hazardous material remaining after cleanup of this legacy exceed 75 million cubic meters of contaminated soils and 1.8 billion cubic meters of contaminated waters. The scale of similar legacies in the former Soviet Union, the Russian Federation, Europe and Asia is still being evaluated, but probably dwarfs the US numbers. The scale of the commercial industrial problem is also huge. The monitoring and stewardship of this legacy is an intractable problem given the current state of regulations and the state of science and technology in the 21st century. This mortgage to future generations will test our concepts of sustainable development. Experts from several countries assessed how the state of practice compares with state-of-the-art knowledge. They have made the following recommendations to the international community on how to develop and share monitoring and stewardship science and technology to address this daunting legacy.

SPECIAL SESSION I
Stakeholder Involvement in Stewardship - Recommendations
Achieving Stewardship and Contributing to a Sustainable Society through Stakeholder/Scientist Involvement
Moderator: Elizabeth Hocking, Argonne National Laboratory

1. Mutual trust among government and public stakeholders is necessary in processes developing stewardship policies and projects.
2. Scientists contribute to developing trust when they are credible, acknowledge uncertainty, and communicate effectively in ways understood by all stakeholders.
3. Stewardship processes must be sensitive to cultural traditions, governance systems, and differing levels of stakeholder education and experience that could inhibit meaningful participation; stakeholder capacity must be developed to support meaningful participation in the stewardship process.
4. Stakeholder involvement must begin early in the stewardship process to build trust in relationships.
5. Rules of engagement must clarify who makes decisions, how they are made, what limits constrain the stewardship process, and the goal of the stewardship process.
6. Stewardship processes affecting the sustainability of future generations must take intergenerational equity into account.
SPECIAL SESSION II
Containment of Legacy Wastes During Stewardship - Recommendations
Near-Surface Containment of Legacy Wastes Requiring Long-Term Stewardship
Moderator: James Clarke, Vanderbilt University

1. Development of arrangements for long-term stewardship may benefit more from the application of goal-based regulations than from regulation on the basis of fixed technical prescription (i.e., goal-based regulation allows engineers to take advantage of state-of-the-art developments and shared experience, and it relieves rule/standard-makers of liability for failure of facilities that comply with prescriptions).
2. Need for near-source monitoring and source-term diagnostics (risk vs. time).
3. Subsurface science needs.
4. Monitoring the condition of the containment system (health of the system).
5. Design the system to accommodate monitoring now and in the future; the ability to monitor should be a design requirement.
6. Arrangements for LTS will necessarily depend upon monitoring.
7. We need to end our sole reliance on groundwater monitoring as an indicator of system performance.
8. Design should accommodate/consider the potential for failure of critical system components.
9. Definition of "failure" consequences and responses.
10. Time horizons/milestones: where are we?

SPECIAL SESSION III
Monitoring of Legacy Wastes and Burial Sites - Recommendations
Ensuring the Monitoring System will Perform as Designed
Moderator: Andrey Rybalchenko, FGUP VNIPI Promtechnologii, Russia
1. Address Component Failures Before System-Wide Failures
1.1 Monitoring of local legacy waste sites and burial sites (repositories) must include control observations of the characteristics of hypothetical failures.
1.2 The choice of characteristics for observations, networks and equipment must be carried out using analysis of event-failures and mathematical models of the behavior of the waste, the environment and the geological formation.
1.3 The design of local legacy waste sites and burial sites (repositories) must include all essential equipment, buildings, staff and personnel, and cost evaluations.
2. Reduce life-cycle costs
2.1 The maximum volume of observations, as part of the monitoring system, must be carried out during the construction and operational phases and for a few years after shutting down the sites.
2.2 The volume of observation may be reduced after obtaining observation and investigation data from the first years of operation and after verification of the models of waste behavior in the geologic formation and the environment.
2.3 Monitoring of local legacy waste sites and burial sites after shutdown, and after a limited time of observation, must be forwarded to a geological service or analogous organization for observation as part of a federal (regional) monitoring program. This will reduce the life-cycle expense of monitoring.

3. Optimize data collection
3.1 Data for local legacy waste sites and burial sites (repositories) must be organized in a computerized database covering all phases of investigation, design, construction, operation and shutdown.
3.2 The database must be used for the substantiation of models of waste behavior (migration models), the geological formation and the environment. The database and models must be used for safety evaluation and for the optimization of operational and monitoring observations.
3.3 The database must include the results of modeling and the prediction results obtained during site operations.
3.4 The database must be accessible to stakeholders.

4. Prediction of performance
4.1 Predictions of the consequences of waste burial and of transformations in the environment and the geological formation must be carried out in all critical phases of the operation and shutdown of sites.
4.2 Predictions made during the preliminary investigation help to optimize the volume and methods of investigation.
4.3 Predictions made during design provide the basis for technical decisions on site network observations and safety criteria. Prediction results are necessary to satisfy stakeholders.
4.4 Optimization must be carried out using modeling techniques.
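Recommendations 3.1-3.4 amount, in effect, to a life-cycle database design. As a purely illustrative sketch (table and field names are hypothetical, not taken from the workshop), a minimal relational layout of the kind recommended might look as follows:

```python
import sqlite3

# Hypothetical minimal schema for a legacy-site monitoring database;
# table and field names are illustrative, not from the workshop report.
SCHEMA = """
CREATE TABLE IF NOT EXISTS site (
    site_id  INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    phase    TEXT CHECK (phase IN ('investigation', 'design',
             'construction', 'operation', 'shutdown'))   -- rec. 3.1
);
CREATE TABLE IF NOT EXISTS observation (                 -- rec. 3.1
    obs_id    INTEGER PRIMARY KEY,
    site_id   INTEGER REFERENCES site(site_id),
    taken_at  TEXT NOT NULL,   -- ISO 8601 timestamp
    quantity  TEXT NOT NULL,   -- e.g. a groundwater contaminant level
    value     REAL NOT NULL
);
CREATE TABLE IF NOT EXISTS model_prediction (            -- rec. 3.3
    pred_id    INTEGER PRIMARY KEY,
    site_id    INTEGER REFERENCES site(site_id),
    model_name TEXT NOT NULL,  -- e.g. a migration model (rec. 3.2)
    horizon    TEXT NOT NULL,  -- time the prediction refers to
    value      REAL NOT NULL
);
"""

# A single database file that any stakeholder tool could open (rec. 3.4).
conn = sqlite3.connect("legacy_sites.db")  # hypothetical file name
conn.executescript(SCHEMA)
conn.commit()
conn.close()
```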
ACHIEVING STEWARDSHIP AND CONTRIBUTING TO A SUSTAINABLE SOCIETY THROUGH STAKEHOLDER INVOLVEMENT

ELIZABETH K. HOCKING
Environmental Assessment Division, Argonne National Laboratory, Washington DC, USA

INTRODUCTION

Many sites around the world are contaminated with radioactive or hazardous residuals. They are contaminated because (1) they were chosen as disposal sites, or (2) they became contaminated through use but cannot be completely remediated for economic or technological reasons. Stewardship of these contaminated sites will be required as long as the residuals pose a potential harm to human health or the environment. The tasks of stewardship are to allow access to contaminated lands and resources only for uses that have been approved on the basis of a risk assessment, and to protect surrounding communities from exposure to contaminants. The goal of stewardship should be to accomplish these tasks while contributing to a sustainable society by allowing approved land uses.

In general, there is a bias against letting land lie unused. Property law is largely built on the basis of the transferability of land to keep it available for appropriate use. Even though some uses would be constrained for land with residual contamination, the land may be approved for other appropriate uses that can contribute to sustainability. For example, the Waste Isolation Pilot Plant in the southwestern part of the United States is the burial site for transuranic radioactive waste. The waste is placed 2,150 feet underground in a 2,000-foot-thick salt formation. Resource mining on the site is foreclosed; however, the surface of the disposal site will be used for cattle grazing when all the waste has been placed in the repository. Other sites with residual contamination in the United States are being used as wildlife refuges. Using land under stewardship for its approved purposes may even forestall unapproved uses, because land users will generally protect their interest in the land.

Working toward achieving the goal and tasks of stewardship requires meaningful involvement by the stakeholders affected by the contaminated lands. Meaningful stakeholder involvement requires identifying and engaging the right stakeholders, establishing rules of engagement for the stakeholder process, building and maintaining trust, and protecting intergenerational equity. Although the following discussion is oriented toward establishing stewardship programs for existing contaminated sites, many of the principles and suggestions also apply to activities such as selecting a disposal site or establishing a stewardship regulation or policy.

IDENTIFYING AND ENGAGING THE RIGHT STAKEHOLDERS

Involving stakeholders in a meaningful way in stewardship decisions first requires that the appropriate stakeholders be identified. The appropriate stakeholders are those
who represent the diverse and divergent interests affected by the site or activity, have standing in the affected area, and agree to abide by the rules of engagement established for the stakeholder process. The integrity of the stakeholder process depends significantly on the willingness of decision makers to engage stakeholders with viewpoints that vary from their own. However, the stakeholders must truly and legitimately represent the interests affected by the activity or site and therefore have the standing within the affected community to participate in the stakeholder process.

Identifying the right mix of stakeholders requires understanding the concerns, issues, and objectives associated with the goal of the stewardship process. When the goal of the stewardship process is clearly described and adhered to, it will be easier for a facilitator and the stakeholders themselves to identify who is a legitimate stakeholder. If the goal of the stewardship process is to design a stewardship plan for a specific site that was contaminated through nuclear weapons development, stakeholders whose only objective is to ban the use of such weapons might not be the right stakeholders for this site-specific task. Once the stewardship process goal is clearly defined and the right stakeholders are identified, rules are required to engage them effectively.

ESTABLISHING AND ADHERING TO RULES OF ENGAGEMENT

Rules of engagement for the stewardship process will reduce conflict and confusion and contribute to more effective stakeholder involvement. As with any group process, the basic rules of honest and open communication and respect among stakeholders are vitally important. Adhering to those basic rules will be much easier if all stakeholders clearly understand the limitations that apply to the process, who is responsible for making the final decision, how decisions are made, and the goal of the process.

Most stewardship processes will have to function within some limits that are beyond their control. These limits could be legal, financial, or temporal in nature. Because they affect the scope of the stewardship process, limitations must be made clear from the very outset so that all stakeholders understand them and the reasons behind them. Two of the most important limits relate to stewardship decisions. Stakeholders must have a clear understanding of who is responsible for making the final decision and of how decisions are made within the group process. The decision maker for some stewardship processes will be mandated by a law or regulation; in most cases, however, who makes the final decision will depend upon the nature of the goal of the process. In either case, the identity of the final decision maker must be made clear to avoid any stakeholder misconceptions that they are the ultimate decision makers. The procedure for making any decisions within the stewardship process will also need to be defined before the process begins. Voting may be used in some stewardship processes; in others, the procedures might entail developing a general consensus of stakeholders' opinions or just getting a sense of stakeholder attitudes. Adhering to the accepted rules of engagement can reduce the conflict that arises in most group processes and is especially important in a stewardship process, because such a process often begins with a trust deficit.
BUILDING AND MAINTAINING TRUST
The obstacles to building and maintaining trust among stakeholders in the stewardship process arise from an entangled environment of mistrust, frustration, and uncertainty. Much has been written about the mistrust directed at government officials by non-government stakeholders. The mistrust may be mutual: government officials may see stakeholders as intruders, irritants, and problem creators. Frustration for all stakeholders can stem from the seemingly intractable nature of the problem under consideration. Mistrust and frustration can also arise from the need to chart difficult but necessary choices in a sea of uncertainty. There may be considerable uncertainty about the characterization of the site being considered for stewardship. There may be additional uncertainty about the longevity and reliability of the planned contaminant containment systems, the monitoring systems expected to detect containment system failure, the land use controls expected to ensure that land is only used for approved purposes, and the system to provide information about the site to succeeding generations.

The best course of action for building and maintaining trust in such an environment is acknowledging uncertainty, communicating information, and building stakeholder capacity. These actions must continue throughout the stakeholder process. In the face of uncertainty, some people may unrealistically react by refusing to acknowledge any uncertainty and holding rigidly to their belief that everything that needs to be known about the stewardship of a site is known. Other people, in the same atmosphere of uncertainty, may unrealistically reject what is known. Acknowledging uncertainties and developing plans for how to act in light of them can help reduce their negative impact on the stewardship process.

The trust that is necessary for stakeholders to accept both uncertainties and certainties stems from appropriately communicating information. Information must be objectively presented, and it must be presented in forms that are understandable to stakeholders. For example, highly technical and complex scientific data will be desired and appreciated by some stakeholders. Stakeholders who do not have a scientific background may desire and benefit more from visual representations that portray information such as geologic formations, contaminant intensity and migration, or containment system design. Conveying information in user-compatible formats can build trust and enhance stakeholder participation.

The capacity of stakeholders to participate meaningfully in the stewardship process is often complicated by the fact that they come to the stewardship process with widely varying degrees of formal and informal education and experience with group processes. The prevailing governance or cultural system may also impede stakeholders from fully participating if the system inhibits questioning authority figures. Training in the group process, the fundamentals of stewardship, and communication techniques can help level the playing field among stakeholders. A well-trained and experienced group facilitator can ensure that the playing field remains as level as possible during the stewardship process and that trust is maintained - even though it may be strained at times. The information communication that is so important to building and maintaining trust must continue through time, because stewardship will be required for as long as
residual contaminants pose a potential threat to human health or the environment. Stewardship may be required for several generations, and the interests of future generations should be taken into account when stewardship programs or decisions are being developed.

PROTECTING INTERGENERATIONAL EQUITY

Intergenerational equity is defined here as the fairness of access to resources across generations. Resources can be natural as well as cultural. Natural resources include sensitive ecosystems, water bodies, minerals, and fossil fuels. Cultural resources include things such as sites, buildings, objects, plants, graves, and rock carvings that have cultural, historical, or archeological significance. Because stewardship will limit some use of land and its resources to protect against the release of potentially harmful contaminants, future generations will have restricted access to those lands and resources. A second aspect of intergenerational equity and stewardship is that the risks and costs of stewardship programs devised by the present generation are borne, knowingly or unknowingly, by future generations. Stewardship will always impose some resource use restrictions and some cost and risk obligations on future generations. The stewardship process should incorporate principles for intergenerational decision making to minimize these impositions.

In 1994, the U.S. Department of Energy requested advice from the National Academy of Public Administration on how it could integrate a fair, intergenerational balancing of the risks, costs, and benefits associated with its decisions into its decision making processes. In response, the Academy identified the following four principles for intergenerational decision making:

Trustee Principle: Every generation has obligations as trustee to protect the interests of future generations.

Sustainability Principle: No generation should deprive future generations of the opportunity for a quality of life comparable to its own.

Chain of Obligation Principle: Each generation's primary obligation is to provide for the needs of the living and next succeeding generations.

Precautionary Principle: Actions that pose a realistic threat of irreversible harm or catastrophic consequences should not be pursued unless there is some compelling countervailing need to benefit either current or future generations.¹

Stakeholders need to keep these principles in mind when participating in the stewardship process to ensure that future generations have access to the information needed to understand and deal with the obligations passed on to them.
’
National Academy of Public Administration, “Deciding For The Future: Balancing Risks, Costs, and Benefits Fairly Across Generations,” April 1995.
Work supported by the U.S. Department of Energy under contract W-31-109-Eng-38. The submitted manuscript has been created by the University of Chicago as Operator of Argonne National Laboratory ("Argonne") under contract No. W-31-109-ENG-38 with the U.S. Department of Energy. The U.S. Government retains for itself, and others acting on its behalf, a paid-up, nonexclusive, irrevocable worldwide license in said article to reproduce, prepare derivative works, distribute copies to the public, and perform publicly and display publicly, by or on behalf of the Government.
RADIOACTIVE WASTE OF DEFENSE ACTIVITIES IN THE 20TH CENTURY - HANDLING AND MANAGEMENT

A.I. RYBALCHENKO
All-Russia Designing and Research Institute of Production Engineering
Moscow, Russia

INTRODUCTION

Implementation of defense programs on the production of nuclear materials in the second half of the 20th century resulted in the formation of large quantities of radioactive waste. In the beginning, radioactive waste was handled similarly to industrial waste, i.e. the waste was dumped into surface water basins, trenches, and collector pits. This practice gave rise to the accumulation of hazardous radioactive waste and, consequently, to an increased radioactive waste impact on human beings, animal life, and vegetation. Governments of the countries where the radioactive waste originated are forced to implement full-scale programs aimed at preventing the harmful impact of the defense waste. To do that they have to provide for the corresponding expense items in the State budget, thus diverting vast funds from other important social issues such as the eradication of poverty, health care, education, etc. This makes handling the radioactive heritage of the 20th century a political issue, debated in parliaments, in the press, and in public environmental organizations.

The issue is most acute for the Russian Federation and for the new FSU states that found themselves in possession of the defense radioactive waste. In the process of changing the forms of ownership, new enterprises evolved out of the former defense plants, rejecting the radioactive heritage. Other defense enterprises are out of business, and the remaining radioactive waste has to be managed by the State or it would belong to nobody. Complications in settling the former defense radioactive waste handling issues are associated with the following: difficulties in the economic development of the new States; the ongoing creation of waste handling legislation; the broad involvement of the public in the discussions; and the exploitation of difficulties in resolving the radioactive waste issues by political parties in opposition.

In this context, the paper presents a discussion of the issues of handling radioactive waste of the former defense activities in the USSR as applied to the Russian Federation, the major radioactive waste owner. This paper does not deal with the handling of nuclear materials that could be used in industry and therefore are not considered as waste, including highly enriched uranium and plutonium; with the remediation of territories contaminated as a result of the Chernobyl accident; or with the release of waste into the environment. The following areas of activity are proposed for discussion to resolve the issue of the previously accumulated radioactive waste:

- Waste inventory;
- Assessment of the potential waste hazard;
- Practical measures on the harmful waste impact prevention;
- Funding of the waste handling activities;
- Legal basis of waste handling;
- Release of the information to the public and population.
WASTE INVENTORY

Data on the quantity, composition, waste form, and waste location are necessary to make an optimum decision on waste handling; to assess the potential waste hazards; to justify and develop the projects for practical measures; and to determine the amounts of funding involved. Waste generated was inventoried throughout the entire production period of nuclear materials in the USSR. The data were accumulated in the authorized agencies under the Ministry of Atomic Energy of the Russian Federation (former Ministry of Medium Machinery Construction of the USSR). During recent years, a number of international projects were arranged for the creation of a waste cadastre and database for the Russian Federation using modern computer technology and posting data on the Internet. It is worth mentioning the ISTC projects "Radleg" #245 and "Radinfo" #2097, the latter being a development of the "Radleg" project. The database contains data on the USSR defense activity waste as well; however, there is no clear distinction between the USSR defense waste and the waste generated later. In September 2003 the database-associated conference "Radioactive Waste in the USSR and Russia" will be held in Moscow. Information on the waste cadastre and database may be obtained on the website http://www.kiae.ru/gis-radleghdex.htm.

Basic sources of radioactive waste during the implementation of defense programs were uranium production and enrichment, fuel fabrication, nuclear reactors, fuel reprocessing, and the naval fleet. Radioactive waste generated in the USSR during uranium mining and radioactive ore processing is stored as solid waste in mine tailing repositories located both in the Russian Federation and in the new States of Kirghizia, Kazakhstan, Tajikistan, Ukraine and Uzbekistan. The activity of that waste is not high and is defined by natural radionuclides, the total waste volume being estimated at 600-700 mln tons.

Management of liquid radioactive waste is extremely complicated. The total volume of liquid radioactive waste is more than 550 mln m³, of which 50 mln m³ are isolated in deep geological formations using deep well injection. The rest is located in open surface ponds, basins and lakes. The majority of the radioactive waste inherited from the USSR (counted by activity) is located at the plutonium-producing enterprises. Among those are the Combine "Mayak" located in Oziorsk City, Chelyabinskaya Oblast; the Siberian Chemical Combine of Seversk City, Tomskaya Oblast; and the Mining and Chemical Combine of Zheleznogorsk City, Krasnoyarsk Territory. Waste was generated in the form of solutions, which were stored in special facilities (tanks) and in natural and engineered ponds and pits, and were injected into deep porous geologic formations (reservoir horizons). At Tomsk and Krasnoyarsk more than 90% of the liquid radioactive waste was injected into deep porous geologic formations (reservoir horizons) through bore wells. These wastes are isolated from the environment. Gross activity of the accumulated waste was estimated at 1.7 billion Curie [1]. Part of the liquid waste at the Combine "Mayak" was solidified, and all the enterprises under discussion also have radioactive waste of solid origin. The total quantity of solid radioactive waste in Russia is about 180 million tons, and 90% of it is mining and mill tailings.
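The gross activity above is quoted in curies, while other figures in this field are usually quoted in terabecquerels. A minimal unit-conversion sketch in Python (the only assumed fact is the exact definition 1 Ci = 3.7 × 10¹⁰ Bq) shows that the 1.7 billion Ci estimate corresponds to roughly 6.3 × 10⁷ TBq:

```python
# Convert the gross activity estimate from curies to terabecquerels.
CI_TO_BQ = 3.7e10       # 1 curie = 3.7e10 becquerel, exact by definition
BQ_PER_TBQ = 1e12       # 1 terabecquerel = 1e12 becquerel

def curies_to_tbq(activity_ci: float) -> float:
    """Convert an activity in curies to terabecquerels."""
    return activity_ci * CI_TO_BQ / BQ_PER_TBQ

gross_activity_ci = 1.7e9                             # 1.7 billion Ci, as estimated in [1]
print(f"{curies_to_tbq(gross_activity_ci):.1e} TBq")  # prints 6.3e+07 TBq
```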
In the Northern territories of the Russian Federation, some USSR Navy waste is located in decommissioned submarines and in shore repositories. The major part of that waste is in solid form; however, liquid radioactive waste repositories exist as well. Radioactive substances from decommissioned nuclear facilities also belong to the defense radioactive waste; those facilities are located in many research centers of the Russian Federation. By contrast, the soils, rocks, and construction elements contaminated by radioactive substances as a result of nuclear explosions do not belong in the category of radioactive waste; the issue of categorizing and subsequently handling that material is yet to be solved.

Territories have been contaminated by radioactive waste around open surface storages. The total contaminated area is estimated at 452 km², of which the contaminated areas in the vicinity of "Mayak" represent 94%. Equipment for the reprocessing of radioactive waste was developed in Russia and has been used very successfully. The total volume of reprocessed waste is about 150 million m³. The technologies of cementation, bituminization, vitrification and others are applied.

ASSESSMENT OF POTENTIAL WASTE HAZARDS

Selection of a technique for handling the radioactive waste, the work schedule, and the expenses are largely defined by the hazard that the waste represents to the population and the environment. Obviously, the most hazardous waste and waste locations must be neutralized first of all. Different systems of criteria are proposed for the waste hazard assessment. Irradiation of the population induced by radioactive waste is the fundamental criterion for the waste hazard evaluation. The irradiation value is characterized by the collective dose and by the maximum individual dose of irradiation received by a single member of the population. The extent of contamination of the ground surface, vegetation, and living creatures in the locations of the accumulated waste is used for the waste hazard evaluation as well, for it defines the irradiation dose to the population. By comparing the results of monitoring and measurements made in the locations of the accumulated waste with the criteria-defined values, conclusions are drawn as to whether the waste is hazardous or safe. Numerical characteristics of the criteria, i.e. irradiation and contamination norms, are established in the appropriate regulatory documents.

At present the condition of the radioactive waste can be assumed to be safe, due to the measures taken to prevent contact of the population with the waste, which is surrounded by an assigned protective zone, and due to measures taken to prevent radioactivity migration across the protective zone borders. However, migration of radioactivity across the protective zone borders is possible in the future as the result of natural processes, elemental forces, and other unanticipated interference. In this case the criteria deal with expected (predicted) values of the irradiation, with the moment of the irradiation occurrence in the future, and with the probability of the irradiation occurrence. For the disposal of waste in geological formations, a localization criterion is also used: the waste disposal is assumed to be safe in cases where the waste remains within the predetermined borders of the allocated range (allotment).
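As an illustration of the dose criterion just described, the following Python sketch compares a predicted individual dose from drinking water with an annual limit. The ingestion dose coefficients and the 1 mSv/yr public limit are illustrative values of the kind tabulated by ICRP and national radiation safety norms; they are assumptions for this sketch, not figures taken from this paper.

```python
# Minimal screening of a predicted ingestion dose against an annual limit.
# The dose coefficients and the 1 mSv/yr public limit are illustrative values
# of the kind published by ICRP and national radiation safety norms; they are
# not taken from this paper.
DOSE_COEFF_SV_PER_BQ = {   # adult ingestion dose coefficients (illustrative)
    "Sr-90": 2.8e-8,
    "Cs-137": 1.3e-8,
}
PUBLIC_LIMIT_SV = 1.0e-3   # 1 mSv per year

def annual_dose_sv(nuclide: str, conc_bq_per_l: float,
                   intake_l_per_yr: float = 730.0) -> float:
    """Committed effective dose from a year of drinking water intake (2 L/day)."""
    return conc_bq_per_l * intake_l_per_yr * DOSE_COEFF_SV_PER_BQ[nuclide]

dose = annual_dose_sv("Sr-90", conc_bq_per_l=5.0)   # hypothetical concentration
print(f"dose = {dose:.2e} Sv/yr, "
      f"{'exceeds' if dose > PUBLIC_LIMIT_SV else 'below'} the 1 mSv/yr limit")
```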
Thus far, a methodology has been established for the safety assessment of locations of accumulated waste, waste storage, and/or disposal. The safety assessment comprises the following basic phases:

- Gathering and preparation of data on the waste and, if necessary, investigation of the waste; determination of the waste quantity, composition and physicochemical speciation;
- Determination of the conditions of the waste locations and their connections with the environment (climatic, physical-geographic, geologic) and the socio-economic conditions of the waste disposal location;
- Determination of the territory contamination around the waste location and actual irradiation doses for the population;
- Substantiation of mathematical models for the formation of irradiation doses and for radionuclide migration from waste locations;
- Forecasting the behavior of waste in the future and the release of radioactivity from waste disposal locations based on the mathematical model studies (simulation); determination of the characteristics of an anticipated impact;
- Analysis of the results obtained and formulation of conclusions on the extent of the waste hazard or safety.

Utilization of this methodology for Russia shows the former defense waste accumulated at Combine "Mayak" to be the most hazardous. Radioactivity of the waste is rather high; a significant part of the waste exists in the liquid state in open ponds and basins connected by underground water, and it contaminates adjacent locations through the ejection of aerosols. However, the total activity of liquid waste is not the governing safety criterion. At the Siberian Chemical Combine in Seversk City, Tomskaya Oblast, the liquid radioactive waste was injected into deep reservoir horizons [2]. Waste with a gross activity several times as high as that existing in the open lakes and ponds at Combine "Mayak" is localized within the predetermined borders of the geological medium and does not affect the population, animal life, or vegetation. According to the assessment data published by Russian specialists and international experts [3, 4, 5], injection of waste into deep reservoir horizons is assumed to be safe. Injection of the liquid radioactive waste made it possible to begin the closedown of the open liquid radioactive waste storages (ponds and basins) at the Siberian Chemical Combine created as a result of defense activities. The closedown of the open liquid radioactive waste basins is under way at the Mining and Chemical Combine of Zheleznogorsk City, Krasnoyarsk Territory, where liquid radioactive waste has also been injected into deep reservoir horizons.

Potential hazards of waste generated as a result of defense activities depend not only on the gross radioactivity, but on the geographic position as well. In Middle Asia, in Kirghizia, some low-level radioactive waste repositories are located in a dammed canyon. Destruction of the dam would cause a mud avalanche that would flood the settlements located lower down. The potential hazard of that waste, in spite of its low radioactivity, is rather high, especially with regard to the high seismic activity in the region.
PRACTICAL MEASURES ON THE HARMFUL WASTE IMPACT PREVENTION

Practical measures for handling radioactive waste formed as a result of the implementation of the defense programs pursue the prevention of harmful waste impact on the population and the environment. The goal could be achieved in various ways, as follows: by converting radioactive elements into non-radioactive ones using nuclear technology; by converting all waste into solid, mineral-like forms followed by disposal in deep impermeable geologic formations; or by dispatching waste into space. The ideal is supposed to be a waste location restored to the natural condition that existed prior to the appearance of waste in the area - the "green meadow" concept. For a small waste volume and quantity this seems quite feasible. However, for the inherited defense waste, the task should be treated as a fantasy for the upcoming decades. Therefore the "green meadow" will be postponed for a long while, and other approaches are used that ensure, first of all, the reliable protection of humans from irradiation by waste.

The simplest way to prevent a harmful waste impact is to arrange protective measures in the accumulated waste locations, to enclose and guard the waste, and to forbid admittance of unauthorized persons to the accumulated waste locations. Simultaneously, the waste storage area should be equipped with a network of environmental monitoring stations for the timely detection of leakage, enabling the appropriate measures to be taken. Merely storing the waste within protective zone borders and setting up a physical protection system will not always prevent the waste impact on the environment. In the latter case, additional measures are taken aimed at waste localization, such as the erection of additional dams, shelters, screens, etc. A similar approach is used at the Combine "Mayak" with regard to the open surface liquid radioactive waste storage places. At the Siberian Chemical Combine and at the Mining and Chemical Combine, the closedown of open basins with liquid radioactive waste has been successfully performed by filling the basins with clayish rock and simultaneously sending the displaced liquid into the deep well injection reservoir horizons. In the repository location semi-solid waste remains - sludge covered with clayish rock and essentially immobile.

Liquid radioactive waste disposal in porous geologic formations, by injecting waste through deep bore wells into reservoir horizons, allows the waste to be isolated from the environment and prevents penetration of the waste into biological cycles. The technology described has been used in atomic industry enterprises of Russia for 40 years. Experience gained in deep well injection of liquid radioactive waste was used to create deep injection sites for non-radioactive waste at other enterprises. In Table 1 some characteristics are given of the disposal locations of liquid radioactive waste that have arisen as a result of implementation of the defense programs. Creation of the deep injection sites was preceded by special geologic investigations that demonstrated the feasibility of waste injection. In the opinion of a number of experts, there is also a possibility of deep liquid waste injection in the vicinity of Combine "Mayak" in the Urals; however, the appropriate decision has not been made for a number of reasons. That appears to be one of the issues that led to the complicated environmental situation in the region of the Combine.
The ultimate stage in the management of radioactive waste is its emplacement as solid waste in impermeable geological formations, with a high degree of multibarrier isolation from the environment. Practical work on defense waste handling is performed under projects developed after the study and justification of the principal feasibility and safety of the work are accomplished. The projects comprise technical decisions; the design of engineering systems and facilities; the justification of safety; and cost estimates. The projects pass expert review and can be implemented as soon as they are approved.

Table 1: Deep repositories (deep well injection sites) of liquid radioactive waste at Minatom enterprises

Enterprise                                      Type of waste              Injection depth, m   Injection start, year   Removed waste volume, million m³
Siberian Chemical Combine                       liquid radioactive waste   270-320; 314-386     1963                    42.0
Mining and Chemical Combine                     liquid radioactive waste   180-280; 355-500     1967                    6.1
State Scientific Center RF "NIIAR" (Scientific
and Research Institute of Nuclear Reactors)     liquid radioactive waste   1440-1550            -                       -
FUNDING OF WASTE HANDLING ACTIVITIES

Arranging funding for the defense waste handling activities is a complex issue. The waste in question was generated in the time of the USSR - a State that no longer exists. As the assignee of the USSR, the Russian Federation must bear all the expenses of the waste handling; however, under market economy conditions, with the State budget reduced and private capital predominant, it is difficult to allocate significant funds to waste handling. More and more, it has to be done at the expense of other budget items, such as the eradication of poverty, health care, education, etc. Motions to increase the inherited waste-handling budget are not always supported in parliaments and in government structures. Under the Russian Ministry of Atomic Energy, atomic industry enterprises have to look for funds to maintain safe conditions for the radioactive waste produced in the USSR during the defense program implementation. That is accomplished by cutting back social programs, raising tariffs for NPP-generated electricity, and increasing the cost of other goods manufactured by the atomic industry. However, the funds raised in that way are not sufficient for the ultimate solution of the inherited waste issue. In recent years, projects were prepared for international cooperation to handle the spent nuclear fuel of foreign NPPs. A significant part of the funds obtained from the payments for the import, storage, and subsequent reprocessing of fuel was supposed to be used to solve the radioactive waste handling issue. Great assistance in resolving the radioactive waste issues is rendered to the Russian Federation by the European Union countries and the U.S.A.
A possibility of attracting private investments to solve the radioactive waste issue has also been considered. After the waste is removed and disposed of, the territories of the former waste location could be offered on preferential terms to the businesses that invest in the solution of the problem.

LEGAL BASIS OF WASTE HANDLING

All work on waste handling should be supported by a system of Federal Laws and regulatory documents created on that basis. In the absence of laws regulating waste handling, all activities could be blocked by bureaucracy, court decisions, or public protests, whereas the availability of laws clearly regulating the legal relationships between the parties involved in the activity in question would make it possible to implement optimum decisions on radioactive waste handling and to help solve the funding issues. The laws currently in force in the Russian Federation that allow effective regulation of the waste handling activities are as follows:

- Federal Law "On Atomic Energy"
- Federal Law "On Radiation Safety of the Population"
- Federal Law "On the Protection of the Environment"
- Federal Law "On Industrial and Consumer Waste"
- Federal Law "On the Interior of the Earth"
- Federal Law "On the Environmental Expert Review".

The regulatory documents developed on the basis of those laws include the following: "Radiation Safety Norms"; "Basic Sanitary Rules for Providing Radiation Safety"; "Sanitary Rules for Radioactive Waste Handling"; "Sanitary Rules and Technical Conditions for the Disposal of Liquid and Solid Radioactive Waste of Atomic Industry Enterprises"; and a number of others. Implementation of laws, norms, and rules is controlled by the agencies under the Ministry of Health Care of the Russian Federation and the Ministry of Natural Resources of the Russian Federation, by the State Atomic Inspection Authority, and by the State Mining and Technical Inspection Authority, within the scope of issues regulated by those agencies. Nevertheless, a decision has been made to develop a law that would govern legal relationships in the radioactive waste handling area. The development and adoption of the law has met considerable obstacles associated with lawmakers lobbying corporate interests. Nevertheless, work on the law is near completion.
RELEASE OF THE INFORMATION TO THE PUBLIC AND POPULATION

The opinion of the public and population on proposed large-scale projects can play a key part in the adoption or rejection of projects or other forms of waste handling activity. With the democratization of society and the development of the electoral system, politicians at different levels become, to a considerable extent, dependent on public opinion of the actions performed with radioactive waste and are forced to take that opinion into account, regardless of how wrong it may be. Hence, the
work with the public and population during the implementation of radioactive waste handling projects is considered an important and specific form of activity. The information provided to the population and public organizations should objectively reflect the core of an issue - the condition of the environment, the projects or works planned in the waste handling area, the safety assessments made, etc. Along with that, the information should be comprehensible to an outsider and available for examination and verification by an independent researcher or organization that could be recruited upon request by the population or public organizations. The centers for public information within the enterprises of the atomic industry have justified all hopes placed in them. In a widely accessible and convincing form they provide various information about the work of the enterprise, various reference literature, videos, and exhibitions addressing the environmental impact and the health of the population. Implementation of the joint international research projects on the assessment of the impact of the Russian atomic industry enterprises is of great assistance. The authority of foreign agencies and experts in Russia is rather high. As an example, one could refer to the work conducted under the programs of the European Commission and the International Institute for Applied Systems Analysis on the defense waste disposal practices of the Mining and Chemical Combine, the Siberian Chemical Combine, and the Scientific and Research Institute of Nuclear Reactors in Dimitrovgrad City, Ulyanovskaya Oblast [3, 4, 5].
REFERENCES

1. Bradley D.J.: Behind the Nuclear Curtain: Radioactive Waste Management in the Former Soviet Union. Edited by Payson D.R. Battelle Press, Columbus, Ohio, 1997.
2. Rybalchenko A.I. et al.: Deep Injection Disposal of Liquid Radioactive Waste in Russia. Battelle Press, Columbus, Ohio, USA, 1998.
3. Compton K.L. et al.: Deep Well Injection of Liquid Radioactive Waste at Krasnoyarsk-26, v. 1. International Institute for Applied Systems Analysis, Laxenburg, Austria, 2000.
4. Evaluation of the Radiological Impact Resulting from Injection Operations in Tomsk-7 and Krasnoyarsk-26. Final report, European Commission, EUR 18189 EN, 1999.
5. Measurements, Modeling of Migration and Possible Radiological Consequences at Deep-Well Injection Sites for Liquid Radioactive Waste in Russia. Final report, European Commission, EUR 17626 EN, 1997.
INTERNATIONAL COOPERATION TO ADDRESS THE RADIOACTIVE LEGACY IN STATES OF THE FORMER SOVIET UNION
DAVID K. SMITH, RICHARD B. KNAPP, NINA D. ROSENBERG, ANDREW F.B. TOMPSON
Lawrence Livermore National Laboratory
Livermore, USA

The end of the Cold War allows a comprehensive assessment of the nature and extent of the residual contamination deriving from the atomic defense and nuclear power enterprise in the former Soviet Union. The size of the problem is considerable; some 6.3 × 10⁷ TBq (6.4 × 10⁸ m³) of radioactive waste from the Soviet Union weapons and power complex was produced throughout all stages of the nuclear fuel cycle. The resulting contamination occurs at sites throughout the former Soviet Union where nuclear fuels were mined, milled, enriched, fabricated, and used in defense and power reactors. In addition, liquid radioactive wastes from nuclear reprocessing have been discharged to lakes, rivers, reservoirs and other surface impoundments; military and civilian naval reactor effluents were released to sea as well as stabilized on land. Finally, nuclear testing residuals from atmospheric and underground nuclear tests at the Semipalatinsk and Novaya Zemlya test sites, and from peaceful nuclear tests conducted throughout the area of the former Soviet Union, pose risks to human health and the environment (Figure 1).

Through a program of international scientific exchange, cooperative approaches to address these threats provide former Soviet scientists with expertise and technologies developed in the United States, Europe, and elsewhere to design comprehensive and long-term remedial solutions. The role of the international community in addressing these challenges is essential because the emerging states of the former Soviet Union share common nuclear residuals that cross newly established national borders. In addition, the widespread post-Soviet radioactive contamination hampers economic recovery and - in some cases - poses proliferation concerns. Also important is the widespread perception throughout these countries that the Soviet nuclear legacy poses a grave threat to the human population. A new paradigm of "national security" encompasses more than the historical activities of nuclear weapon production, testing, and deterrence and now includes the environment, human and economic health, and the proliferation of weapons of mass destruction [1]. For these reasons the fall of the Soviet Union provides a new imperative and opportunity for systematic, comprehensive and interdisciplinary international efforts to begin to solve these important environmental problems.

The environmental degradation from nuclear contamination affecting states of the former Soviet Union is a large topic, and a full description is outside the scope of this paper. A comprehensive overview of environmental concerns and radioactive waste production, inventories, and impacted sites is provided by others [2, 3, 4]. Portions of the summaries provided here are drawn from these works. Table 1 summarizes the current extent of radioactive contamination and the state of waste management practice in the former Soviet Union [2].
Table 1: Summary of Radioactive Contamination in the former Soviet Union (source in nuclear fuel cycle: radioactive contamination and waste management)

- Uranium mining and milling: Waste storage in tailings piles. Liquid waste stored in impoundments or discharged to the environment. Total activity is 3.7 × 10³ TBq.
- Uranium conversion, enrichment and fuel fabrication: Liquid and solid waste stored at specific site facilities. Total activity is 1.48 × 10² TBq.
- Commercial nuclear power plants: Liquid wastes stored on-site in tanks; solidification of liquid waste. Solid wastes stored on-site. Total activity is 1.5 × 10³ TBq (liquid concentrates).
- Commercial spent fuel: Stored at reactor sites. Total activity is 1.5 × 10⁸ TBq.
- Defense reactors: Cooling water discharged to lakes at the Mayak Site.
- Reprocessing wastes: Liquid waste discharged to ponds, lakes and rivers. Widespread releases at Mayak to the Techa River and Lake Karachai. Other releases at Tomsk-7 and Krasnoyarsk-26. Total activity is 2.1 × 10⁷ TBq (liquid wastes).
- Nuclear submarines: Liquid and solid waste storage facilities; liquid waste discharged to sea. Total activity is 36 TBq.
- Nuclear icebreakers and container ships: Liquid and solid waste storage facilities; liquid waste discharged to sea. Total activity is 2.0 × 10⁴ TBq.
- Medical, research, and industrial sources: Stored at generation sites then shipped to treatment, solidification, or disposal facilities near major cities. Total activity is 7.4 × 10⁴ TBq.
Problems associated with the residual contamination are many and have been exacerbated by the economic and political collapse of the Soviet Union [2]. These include:

- The majority of existing and newly generated radioactive waste is not being treated or stabilized.
- Engineered storage facilities are no longer considered safe.
- Inadequate storage capacity for wastes from nuclear power plants, nuclear icebreakers, and submarines.
- Lack of remedial solutions for liquid radioactive wastes, slurry storage, and liquid tank wastes.
- The absence of an automated system for accounting and control of radioactive wastes and stored materials.
- The lack of systematic and standardized procedures for radioactive waste management.
- The lack of regional repositories for radioactive wastes produced by the nuclear fuel cycle and nuclear power generation; existing repositories are aging or are at, or near, capacity.
The significant quantity of accumulated wastes and the inadequacy of treatment or storage options for these contaminants increase the risk of accidents and human exposures. In order to illustrate the complexity of these outstanding problems and the role of international approaches in their solution, three case studies involving remedial activities at different stages of the nuclear fuel cycle are described. Each of these studies represents a different post-Soviet waste source term and a unique stage of the nuclear fuel cycle. In addition, each relies upon multilateral international cooperation to effect a long-term solution. This paper extends the approach described by Tompson et al. [1] by equipping emerging post-Soviet republics - stressed by Cold War environmental degradation - with tools to promote regional stability as well as improve economic conditions, educational opportunities, and public health.
Figure 1. Nuclear waste and contamination sites in the former Soviet Union including the Semipalatinsk Test Site, Kazakhstan, the Ulba Metallurgical Plant, Kazakhstan, and Mailuu-Suu, Kyrgyzstan [4].
MAILUU-SUU, KYRGYZSTAN

Kyrgyzstan was an important source of uranium for the former Soviet Union from the mid-1940's. Currently there are no active uranium mines. However, 23 tailings deposits and 13 waste rock dumps from Soviet uranium mining operations are located
within the town of Mailuu-Suu in Kyrgyzstan. Nearly 2 × 10⁶ m³ of radioactive waste, equal to the quantity of processed ore, is prone to release through landslides to tributaries of the Syr-Darya River, which is a main source of irrigation water for much of Central Asia. The effect of this debris on the health of neighboring populations is not yet completely understood.

Mailuu-Suu, with a population of 26,000 people, is situated in a narrow valley, prone to landslides, drained by the Mailuu-Suu River. More than 200 landslides have occurred at Mailuu-Suu over the past 30 years. The Mailuu-Suu River is a tributary of the Syr-Darya River, which is the primary source of irrigation for the densely populated Fergana Valley and its agricultural lands, which provide crops for neighboring parts of Kyrgyzstan, Tajikistan, and Uzbekistan. The large-scale release of radioactive tailings from landslides could severely contaminate the river and downstream areas. Naturally occurring radionuclides include 238U and its daughters 226Ra, 222Rn and 230Th, as well as 210Pb and its daughter 210Po [7]. 222Rn is also released as a gas from subaerial tailings piles. In 2002 a tailings landslide 1.2 km up-gradient of the town dammed the Mailuu-Suu River; flooding was avoided when the river incised and breached its blockage.

While the Mailuu-Suu tailings piles are entirely within Kyrgyzstan, the environmental consequences from these spoils potentially affect neighboring countries. Ethnic tensions in the Fergana Valley are likely to be amplified by any compromise of the main source of surface water to the region. The government of Kyrgyzstan has acknowledged the regional environmental threats at Mailuu-Suu. Similar practices were used to mine uranium - and dispose of wastes - throughout the Soviet Union and the United States during the height of the Cold War. Tailings were typically accumulated on the banks of major rivers where they were prone to episodic flooding [5]. In the late 1970's the U.S. Department of Energy established the Uranium Mill Tailings Remedial Action (UMTRA) program with responsibility for reducing levels of contamination in surface waters and groundwater at sites of uranium mining and milling in the United States. Tailings were either stabilized in place or excavated and relocated to remote disposal sites. The management experience and technical information gained from clean-up at the UMTRA sites in the United States will be invaluable in planning a remedial program in Mailuu-Suu.

ULBA METALLURGICAL PLANT, KAZAKHSTAN

The Ulba Metallurgical Plant (UMP) is situated in Ust-Kamenogorsk, in eastern Kazakhstan. In its 50-year history of continual operation, the facility has dominated the industrial base of the city through the production of processed uranium and specialty metals such as beryllium, tantalum, and niobium. The Ulba Plant was founded in 1949 to process zinc-bearing monazite ores and produce thorium oxalate. The production of thorium was soon discontinued and, in January 1951, the facility started to produce hydrofluoric acid and beryllium. By 1956, commercial processing of beryl ores allowed the large-scale production of high-purity beryllium oxide. Since this time, tantalum and niobium have also been refined from local ores and regularly produced as metal powders and ceramics, along with the beryllium products. Uranium production at Ulba started in 1953 when the facility began to process uranium ore concentrates for the production of natural U3O8 and UO2. These processes
evolved to emphasize the production of low-enriched uranium during a period when large-scale applications of nuclear power were being developed by the former Soviet Union. Ulba produced significant quantities of propulsion fuel for the nuclear navy fleet of the Soviet Union and, subsequently, Russia. In 1976 the plant started to produce fuel pellets for nuclear power plants on a commercial scale. The Ulba Metallurgical Plant produced most of the fuel for nuclear reactors constructed in the USSR between 1976 and 1990.

Accompanying the production of these metals is a significant amount of liquid waste residue, which has been, and continues to be, generated and disposed of in several retention basins adjoining the facility. The discharge basins are located 3.2 kilometers from the Ulba River and 5.4 kilometers from the Irtysh River. The engineered containment barrier underlying one of the basins has failed and allowed accumulated liquid wastes in the basin to percolate into groundwater and pose a significant threat to nearby potable groundwater supplies in Ust-Kamenogorsk. Although this basin is no longer used, precipitated and other solid forms of the wastes remain in the basin, are entrained in accumulated rainfall and snowmelt, and continue to be discharged into the local groundwater as a persistent and lasting source of contamination. The three main water supply wells for the city of Ust-Kamenogorsk are situated between 3.7 and 8.2 kilometers from the basins. The water table is between 3 and 9 meters below the bottom of the disposal basins. Contaminants suspected to have originated from the Ulba Metallurgical Plant have already been detected in nearby monitoring wells and private water supply wells near the city, and the potential for contamination of public water supplies and the Irtysh and Ulba Rivers is serious.

Because they were known to be hazardous to human health and the environment, the large volumes of liquid wastes were neutralized to pH 8 and disposed of as liquid slurries into a specially designed disposal-basin facility located approximately 3 kilometers north of the main production yard. The uranium concentrations in the effluent do not exceed 15 milligrams/liter. The long-term efficacy of the disposal facility relies on the delicate balance between a continuous input of the slurry-based wastes from the plant on the one hand and a continuous volume reduction due to the evaporation of water from the lined and impermeable storage basins on the other. In this way, solid phase wastes that settle out in the basin or accumulate as precipitates, and their corresponding dissolved waste forms, remain contained in the disposal facility for long periods of time, unable to percolate into groundwater and unable to be entrained as particulates into the atmosphere. Egorov et al. [4] estimate the Ulba solid waste residuals at 1,135,000 tons with an activity of 38 TBq and the liquid residues at 939,000 m³ with an activity of 2.3 × 10⁻¹ TBq. A significant portion of the effluent is insoluble and precipitates stratigraphically in the basin along the bottom and adjacent to discharge points as exposed particulate "beaches" (Figure 2). Several years ago, the delicate balance between input and evaporation rates was interrupted in the case of basin "1-3" due to a significant decrease in plant production.
As a result, water levels declined to the point where some of the contaminated sediments, particularly along the edge of the basin, were exposed to the atmosphere, leaving uncovered toxic “beaches” vulnerable to wind erosion and dust resuspension. More significantly, the reduced water levels led to the desiccation and partial failure (via cracking) of the clay barrier materials, which was further exacerbated by freezing conditions over one winter. The failed barrier promoted the loss of waste
fluids from the basin, allowing contaminants to percolate into the local water supply aquifer and move toward nearby municipal and private water wells. Although this situation was monitored and waste streams were quickly diverted into another viable basin, rain and snowmelt have continued to accumulate in the basin and percolate downwards, entraining contaminants from the sediment "beach" materials and sustaining a steady, long-term source of groundwater contamination.
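The input-versus-evaporation balance that governs these basins can be made concrete with a toy water-balance model. In the Python sketch below, the inflow, evaporation rate, basin area and starting volume are hypothetical round numbers chosen only to reproduce the mechanism described in the text (a production cut starves the basin and the stored volume falls); they are not UMP data, and the pond area is held constant for simplicity.

```python
# Toy water balance for a lined evaporation basin. The stored volume changes
# by (slurry inflow) minus (evaporation over the pond area); the area is held
# constant for simplicity. All parameter values are hypothetical round
# numbers, not UMP data.
def simulate_basin(inflow_m3_per_day: float, evap_m_per_day: float,
                   area_m2: float, volume_m3: float, days: int) -> float:
    """Return the stored volume (m3) after `days`, clipped at zero."""
    for _ in range(days):
        volume_m3 = max(volume_m3 + inflow_m3_per_day - evap_m_per_day * area_m2, 0.0)
    return volume_m3

# An inflow that balances evaporation (2000 m3/day each) keeps the volume
# steady; cutting the inflow to a quarter drains roughly 1500 m3/day.
print(simulate_basin(2000.0, 4e-3, 5e5, 1e6, 365))   # ~1.0e6 m3, steady
print(simulate_basin(500.0, 4e-3, 5e5, 1e6, 365))    # ~4.5e5 m3, basin drying out
```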
Figure 2. Particulate "beaches" adjoining retention ponds, formed from the precipitation of liquid effluent accompanying uranium and beryllium production at the Ulba Metallurgical Plant.

Remedial efforts call for the development of a conceptual and numerical model of groundwater flow and chemical transport that can be used to analyze the migration of contamination in the water supply aquifers underlying the Ulba disposal basins. The model will ultimately be used as a means to protect local groundwater quality by facilitating the design of an intervention program as well as the stabilization and control of contaminant discharges from liquid waste ponds at the plant. In addition, the model will be used in its initial stages of development to determine the need for, and guide the acquisition of, additional characterization and model calibration data, and later in the design of groundwater monitoring strategies.

SEMIPALATINSK TEST SITE, KAZAKHSTAN

The former Soviet Union conducted atmospheric and underground nuclear weapons tests at the Novaya Zemlya islands in the Russian Arctic and at the Semipalatinsk Test Site in eastern Kazakhstan, as well as peaceful nuclear explosions (PNEs) throughout its territory. The weapons program supported a Cold War program of nuclear weapons development and testing as well as, in the case of PNEs, scientific studies that included
seismic research, the creation of underground storage cavities, and the enhanced recovery of mineral resources. The Soviet Union conducted 715 nuclear explosions from 1949 to 1990. This includes 130 explosions at Novaya Zemlya, 456 at Semipalatinsk, and 129 conducted elsewhere (primarily PNEs) [8, 9]. The total explosive yield of all detonations conducted at Novaya Zemlya and Semipalatinsk is 265 megatons and 17.4 megatons, respectively. PNEs have a total yield of 1.6 megatons. These compare to the 200 megaton total yield from atmospheric and underground tests conducted by the United States [8]. While more information has recently been published on the nuclear testing program of the former Soviet Union [9, 10, 11], little data exists on the absolute amount of radioactivity affecting surface waters or groundwaters adjacent to these nuclear test sites.

The first, and the majority (~65%), of the nuclear tests conducted by the former Soviet Union took place at the Semipalatinsk Test Site (STS). STS was selected as the location of the first Soviet nuclear test (a plutonium device code-named RDS-1 with a 22 kiloton total nuclear yield) in August 1949. The test site was selected in 1948 due to its desert-like setting, a large remote expanse more than 200 kilometers in diameter, and its proximity to an airfield and railhead; the site is 160 km west of the town of Semipalatinsk on tributaries of the Irtysh River. From 1949 until 1962 atmospheric tests were conducted; from 1962 to 1989 STS hosted underground tests. In total, 456 atmospheric and underground nuclear tests were detonated there; of these, 70% were underground tests.

Testing was confined to distinct and spatially separated experimental areas. The northern "Test Field" was used for atmospheric and ground testing of nuclear weapons. Proof-of-principle experiments were conducted there, as were nuclear weapons effects studies on simulated civilian and military targets. Surface explosions conducted in 1949, 1951 and 1953 released some 8.33 × 10² TBq of 90Sr, 1.2 × 10³ TBq of 137Cs and 34.8 TBq of Pu (decay corrected to 1994) to the environment [4]. Neighboring communities, including Dolon to the northeast of the test site, were exposed to large doses of radioactivity; nearly 10,000 people of a total population of 70,000 received radiation during the atmospheric testing periods from 1949 to 1963 [2]. Subsequent international radiological monitoring by the International Atomic Energy Agency in 1993 and 1994 determined that radioactivity from these atmospheric tests is currently confined to areas immediately surrounding ground zeros and no longer poses a health risk to nearby populations [13].

Underground nuclear testing was conducted in tunnels and adits 200 m to 2 km long cut into the Degelen Mountain massif, at the bottom of 200 m to 2 km deep vertical shafts, 1 meter in diameter, drilled in the Lake Balapan test area, and within auxiliary vertical shafts at the Murzhik Site. 209 tests were conducted at Degelen Mountain, 105 tests were conducted at Lake Balapan, and 26 underground tests were conducted at Murzhik. Degelen is a granite intrusion characterized by geologic faults and extensive fracturing; surface water actively recharges this area and results in perched groundwater with flow rates in excess of 3000 L/minute in some areas [10]. The geologic setting of the Balapan area is equally complex, with steeply dipping and faulted sediments and metasediments.
Groundwater is located in areas of tectonic faulting; the depth to groundwater is between 5 and 15 m below the ground surface.
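As a quick consistency check, the per-site test counts quoted at the start of this section sum to the stated 715-explosion total, and the Semipalatinsk share reproduces the approximately 65% figure (no assumptions beyond the numbers already quoted):

```python
# Consistency check on the Soviet test counts quoted above, 1949-1990.
tests = {"Novaya Zemlya": 130, "Semipalatinsk": 456, "elsewhere (primarily PNEs)": 129}
total = sum(tests.values())
assert total == 715   # matches the stated total
print(f"Semipalatinsk share: {tests['Semipalatinsk'] / total:.1%}")   # 63.8%, i.e. roughly 65%
```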
Containment of underground nuclear tests at the STS was inadequate. After the signing of the Limited Test Ban Treaty in 1963, standard practice called for all underground nuclear tests to incorporate measures to prevent the release of radionuclides to the atmosphere. Containment required the nuclear explosion to be conducted in rock with sufficient strength, and with sufficient spacing between tests, that it would not mechanically fail due to the force of the detonation. In addition, the tunnels or boreholes were further sealed with backfill and grouting materials [10]. However, only 50% of Soviet underground tests qualified as 'full camouflet explosions' where radioactivity was fully contained underground. 45% of the explosions were 'partial camouflet explosions' where there was some leakage of radioactive noble gases (e.g., 131mXe, t1/2 = 11.9 days; 133mXe, t1/2 = 2.2 days; 133gXe, t1/2 = 5.2 days; 135gXe, t1/2 = 9.1 hours; 37Ar, t1/2 = 35.0 days) from ground zero to the atmosphere (Figure 3). Thirteen tests at STS were 'partial camouflet explosions' with non-standard radiation releases to the environment. These containment accidents deviated substantially from standard testing practice and resulted in radiological exposures to neighboring human populations in excess of maximum permissible concentrations.
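Because the vented species are short-lived, their radiological significance falls off quickly. The following Python sketch computes the activity fraction remaining after a given delay from the half-lives quoted above; the one-week example delay is arbitrary:

```python
# Activity fraction of a vented radionuclide remaining after a delay, using
# the half-lives quoted in the text. The 7-day delay is an arbitrary example.
import math

HALF_LIFE_DAYS = {
    "Xe-131m": 11.9,
    "Xe-133m": 2.2,
    "Xe-133g": 5.2,
    "Xe-135g": 9.1 / 24.0,   # 9.1 hours expressed in days
    "Ar-37": 35.0,
}

def fraction_remaining(nuclide: str, days: float) -> float:
    """exp(-ln2 * t / t_half): fraction of the initial activity left."""
    return math.exp(-math.log(2.0) * days / HALF_LIFE_DAYS[nuclide])

for nuclide in HALF_LIFE_DAYS:
    print(f"{nuclide}: {fraction_remaining(nuclide, 7.0):.3f} remaining after 7 days")
```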
Figure 3. Schematic of gas release and venting through a geologic fault from a nuclear test conducted in a tunnel at Degelen Mountain. 1 = tunnel; 2 = zero room; 3 = damaged rock radius; 4 = surface spall zone; 5 = geologic faults; 6 and 7 = containment stemming; 8 = radius of gas transport; 9 = gas flow through tectonic fault [11].

The International Atomic Energy Agency has determined that surface contamination from atmospheric nuclear tests (~185 GBq/km²) falls back to ground quickly in a radial direction away from the center of these explosions [13]. However, the radiological effects of nuclear testing are yet to be fully understood. Recent reports indicate the presence of large quantities of unsecured plutonium metal available on the surface of parts of the test site [12]. The health and proliferation threat is only compounded as the local nomadic population of eastern Kazakhstan repopulates the lands of the Semipalatinsk Test Site and reverts to traditional livelihoods of grazing and agriculture. In addition, proven mineral reserves of Cr, Cu, Pb, W, Mo and Au exist in more than 30 mapped ore deposits. Coal deposits within the borders of the STS are also actively being mined.

Due to the poor record of containment, the potential for contamination of groundwater and the ensuing risk to down-gradient receptors remains high. At Degelen Mountain, nuclear testing resulted in severe structural damage to the rock itself. Twenty-
seven tunnels are discharging water and 24 tunnel entrances (out of 127 adits) are contaminated by measurable levels of 90Sr, 137Cs and 239Pu. Deterioration of the aging tunnel workings has only hastened the migration of radionuclides. At the Balapan test area, the release of gaseous radionuclides due to venting was widespread; as at Degelen, the force of the explosions has weakened the structural integrity of the rock surrounding them and has increased the likelihood of radionuclide migration in groundwater. Methane present in some boreholes due to the breakdown of organic-rich shales and coals has also resulted in spontaneous combustion and burning of some shafts [11]. The maximum extent of groundwater contamination requires further study; however, ambient groundwater velocities are enhanced by the permeability afforded both by tectonic and test-induced fracturing. Concentrations of tritium, 90Sr and 137Cs have been measured in groundwaters of the Degelen and Balapan testing areas. At the Balapan test area, groundwaters produced from an unused borehole (no. 1419) have a tritium concentration of 1.4 × 10⁶ Bq/L and a 90Sr concentration of 2.0 × 10³ Bq/L; the nearest nuclear test was 1 kilometer distant. Clearly, groundwater is currently mobilizing radionuclides, but the nature, extent, and velocity of the transport are unknown and require comprehensive investigation.

For these reasons, scientists from the National Nuclear Center of Kazakhstan, the Russian Academy of Sciences and the U.S. defense programs national laboratories (with support from the U.S. Government) have initiated a collaboration to address the problem of the extent of groundwater contamination from the underground nuclear tests conducted at the STS. These efforts incorporate a combined approach that relies on field and laboratory investigations to return data on the extent of radiochemical contamination of groundwater. In turn, these data will be used to construct hydrologic flow and coupled contaminant transport models that can be used to assess and manage the present and future spread of contamination as well as to plan effectively for long-term radiological monitoring to best protect human health and the environment. These methods have proven successful in addressing the migration of radionuclides in groundwater, and the dose to potential down-gradient receptors, at sites of underground nuclear tests conducted by the United States [14].
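To indicate the kind of calculation such flow and transport models perform, the sketch below evaluates the classical one-dimensional Ogata-Banks solution for a continuous source in a uniform flow field, with radioactive decay folded in as a simple multiplicative factor. This is a textbook approximation under hypothetical parameters, not the collaboration's calibrated model; the velocity, dispersion coefficient and source geometry are assumptions made for illustration.

```python
# One-dimensional advection-dispersion sketch of the kind of calculation a
# groundwater transport model performs: a continuous source at x = 0 in a
# uniform flow field (Ogata-Banks solution), with radioactive decay applied
# as a simple exp(-lambda*t) factor. All parameter values are hypothetical.
import math

def conc_ratio(x_m: float, t_days: float, v_m_per_day: float,
               disp_m2_per_day: float, half_life_days: float) -> float:
    """C/C0 at distance x and time t for a continuous source, times decay."""
    s = 2.0 * math.sqrt(disp_m2_per_day * t_days)
    ogata_banks = 0.5 * (math.erfc((x_m - v_m_per_day * t_days) / s)
                         + math.exp(v_m_per_day * x_m / disp_m2_per_day)
                         * math.erfc((x_m + v_m_per_day * t_days) / s))
    decay = math.exp(-math.log(2.0) * t_days / half_life_days)
    return ogata_banks * decay

# Hypothetical example: tritium (half-life 12.32 years) 1 km down-gradient
# after 30 years, with a seepage velocity of 0.1 m/day and a longitudinal
# dispersion coefficient of 1 m2/day.
print(conc_ratio(1000.0, 30 * 365.25, 0.1, 1.0, 12.32 * 365.25))
```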
CONCLUSIONS

In describing several case studies of radioactive contamination in states of the former Soviet Union, the role of the international community in addressing these problems cannot be overstated. The residual contamination described here is daunting, affects large numbers of people, crosses political borders, and is hydrochemically complex, and it requires critical strategies (and technologies) for effective long-term solutions. For these reasons, cooperative approaches using science and technology provide common tools that combine the capabilities of military, academic, ministerial, private organization, and other partners [1]. The long-term viability of emerging post-Soviet governments hinges on their ability to effectively solve legacy environmental problems and best protect their citizens by promoting responsible environmental and economic practices. As such, this is also very much a national security issue. Due to the new access afforded to the territories of the former Soviet Union by the end of the Cold War, as well as organizations and funding to promote international partnerships, the many threats from radionuclide
contamination within the former Soviet Union can now be fully evaluated and potentially mitigated.

ACKNOWLEDGEMENTS

This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under Contract No. W-7405-Eng-48.

REFERENCES

1. Tompson, A.F.B., Richardson, J.H., Ragaini, R.C., Knapp, R.B., Rosenberg, N.D., Smith, D.K., and Ball, D.Y., 2002, Science and technology to advance regional security in the Middle East and Central Asia, Lawrence Livermore National Laboratory, UCRL-JC-150576, 17p.
2. Bradley, D.J., 1997, Behind the nuclear curtain: radioactive waste management in the former Soviet Union, (D.R. Payson, ed.), Battelle Press, 716p.
3. Bradley, D.J., Frank, C.W., Mikerin, Y., 1996, Nuclear contamination from weapons complexes in the former Soviet Union and the United States, Physics Today, v. 49, p. 40-45.
4. Egorov, N.N., Novikov, V.M., Parker, F.L., Popov, V.K. (eds.), 2000, The radiation legacy of the Soviet nuclear complex, London: Earthscan Publications Ltd., 236p.
5. Buckley, P.B., Ranville, J., Honeyman, B.D., Smith, D.K., Rosenberg, N. and Knapp, R.B., 2003, Progress toward remediation of uranium tailings in Mailuu-Suu, Kyrgyzstan. In Proceedings of Tailings and Mine Waste '03, Vail, Colorado, 12-15 October, 2003. Rotterdam: Balkema.
6. Knapp, R.B., Richardson, J.H., Rosenberg, N., Smith, D.K., Tompson, A.F.B., Saranogoev, A., Duisebayev, B., Janecky, D., 2002, Radioactive tailings issues in Kyrgyzstan and Kazakhstan. In Proceedings of Tailings and Mine Waste '02, Fort Collins, Colorado, 27-30 January, 2002. Rotterdam: Balkema.
7. U.S. Army Center for Health Promotion and Preventive Medicine (USACHPPM), 1999, Radiological sources of potential exposure and/or contamination, TG-238, 285p.
8. United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), 1998, Exposures from man-made sources of radiation, 47th Session of UNSCEAR, 130p.
9. Tsaturov, Y.S., Matushchenko, A.M., Dubasov, Y., Krasilov, G.A., Logachev, B.A., Maltsev, A.L., Safronov, V.G., Filippovski, V.I., Smagulov, S.G., 1998, Semipalatinsk and northern test sites in the USSR: integrated program of radiation and ecological studies on environmental consequences of nuclear tests, in Atmospheric Nuclear Tests: Environmental and Human Consequences, (C.S. Shapiro, ed.), Springer-Verlag, p. 199-218.
10. Adushkin, V.V. and Leith, W., 2001, The containment of Soviet underground nuclear explosions, U.S. Geological Survey, Open File Report 01-312, 52p.
11. Shkolnik, V.S. (ed.), 2002, The Semipalatinsk Test Site: creation, operation, and conversion, Sandia National Laboratories, SAND 2002-3612P, 396p.
12. Stone, R., 2003, Plutonium fields forever, Science, v. 300, p. 1220-1224.
13. International Atomic Energy Agency (IAEA), 1998, Radiological conditions at the Semipalatinsk Test Site, Kazakhstan: preliminary assessment and recommendations for further study, International Atomic Energy Agency STI/PUB/1063, 43p.
14. Tompson, A.F.B., Bruton, C.J., Pawloski, G.A. (eds.), 1999, Evaluation of the hydrologic source term from underground nuclear tests in Frenchman Flat at the Nevada Test Site: the CAMBRIC test, Lawrence Livermore National Laboratory, UCRL-ID-132300, 319p.
CONTAMINATION AND VULNERABILITY OF GROUNDWATER RESOURCES IN RUSSIA

PROF. IGOR S. ZEKTSER
Water Problems Institute, Russian Academy of Sciences, Moscow, Russia
In the last few years, large-scale research programmes have been carried out in Russia to allow the regional quantitative assessment of natural groundwater resources. Both the natural groundwater resources and the safe yield are estimated. Natural (dynamic) resources characterize the value of groundwater recharge by infiltrating atmospheric precipitation, river runoff and leakage from other aquifers, expressed in total as a flow rate reaching the groundwater table. Natural resources are therefore the factor indicating the extent of groundwater and its principal property: the ability to renew itself. Natural groundwater resources are the upper limit of the recharge available to wells operated without a time limit on exploitation (excluding wells whose yields are formed by additional resources engaged during exploitation). Under regional estimation, natural resources are mainly expressed by annual average and minimal values of groundwater moduli (litres per second per 1 km²).

Regional estimation of the groundwater safe yield is made by determining the volume of groundwater withdrawal from the aquifer such that the groundwater-level decline by the end of exploitation does not exceed a value specified in advance on the basis of the water-bearing layer parameters, while the water quality continues to satisfy the relevant standards. Under regional estimation, both potential and predicted reserves are usually calculated. What is the difference between them? Potential exploitable reserves characterize the maximum possible groundwater withdrawal from the aquifer, whereas predicted resources indicate the possible groundwater use for a certain location of consumers or a concrete water demand. A regional assessment of predicted reserves is thus made either for a conditional well-field location or, if it is known, taking into account the actual scheme of water consumers' locations and water demands.

Lately, considerable work has been done in Russia on the regional assessment of the groundwater potential reserves of artesian basins. Maps on different scales showing moduli of potential reserves have been compiled; the modulus of potential reserves is the water yield that can be obtained from 1 km² of the aquifer area. In the regional estimation of potential reserves for separate prospective regions, the water demand of concrete consumers and the possible locations of future well fields were taken into consideration. As a result, for most hydrogeological regions of the country, the principal possibility of groundwater use has been established, and a basis has been laid for planning prospecting and exploration work for the water supply of concrete objects. It should be noted, however, that the decision to design and drill a groundwater well field is taken not on the basis of the regional assessment of natural or potential reserves, but only after special investigations, with obligatory approval of the groundwater safe yield by a state or territorial commission for mineral resources.

As a result of the regional estimation of natural groundwater resources, groundwater runoff maps of different scales were compiled. They show the values of groundwater flow per unit area, the share of groundwater in total river runoff, and the total water balance per 1 km² under different natural and economic conditions.
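Since moduli are reported per unit of aquifer area, converting a mapped modulus into a daily exploitable volume is straightforward arithmetic. The following minimal sketch (Python) illustrates the conversion; the modulus value and basin area are hypothetical, chosen only for illustration:

    # Illustrative conversion of a groundwater modulus (L/s per km^2)
    # into a daily exploitable volume. Values below are hypothetical;
    # regional assessments report such moduli on maps.
    SECONDS_PER_DAY = 86_400

    def daily_yield_m3(modulus_l_s_km2: float, area_km2: float) -> float:
        """Volume (m^3/day) obtainable from an aquifer of the given area."""
        litres_per_day = modulus_l_s_km2 * area_km2 * SECONDS_PER_DAY
        return litres_per_day / 1_000  # 1 m^3 = 1,000 L

    # Example: a modulus of 2 L/s/km^2 over a 5,000 km^2 artesian basin.
    print(f"{daily_yield_m3(2.0, 5_000):,.0f} m^3/day")  # 864,000 m^3/day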
Important work has also been done on the regional estimation of all exploitable groundwater resources of the artesian basins and hydrogeological massifs of the country. These research programmes have made it possible to identify the present and prospective natural groundwater supply of the population of the different constituents of the Russian Federation.

During the last few years, the share of groundwater in the overall balance of drinking-water supply sources in Russia has been growing. At the moment, groundwater provides on average 46% of the country's water supply for the population (41% of municipal and 83% of agricultural water supply). However, the drinking-water supplies of many cities, especially large ones, including Moscow and Saint Petersburg, are based on surface water unprotected from pollution, and some of these cities have no explored groundwater sources of water supply.

In Russia, data on the operational reserves of groundwater, their use and their quality have been generalized annually since 1979 (earlier, for the USSR territory) within the framework of the State water cadastre, as part of the State monitoring of the country's underground conditions. The data are published in annual brochures; the figures given below are taken from the brochure published in 2002 by Gostsentr "Geomonitoring".

The total value of the predicted resources of fresh and slightly saline groundwater (mineralization up to 3 g/l) is estimated at 870 mln m³/day. However, the proved exploitable groundwater reserves used for drinking and technical water supply, irrigation and stock-water development amount on country average to not more than 10% of this: about 89.4 mln m³/day as of 1 January 2002. Groundwater withdrawal in 2000, including mine and open-pit pumping, was 33.1 mln m³/day, of which approximately 27.2 mln m³/day (82% of the extracted water) were actually used; the rest, mainly pumped out of mines and pits, was discharged without use. Of the total quantity of groundwater withdrawn, about 76% is used for drinking-water supply, 22% for technological water supply and 2% for the irrigation of crops and pastures. On average, one person uses 188 litres of groundwater per day, including 145 litres per day for drinking-water supply. (A small consistency check of these figures is sketched after Table 1.)

There is a valid Water Law in Russia that regulates the use of the country's water resources, including groundwater. According to this law, natural groundwater has high priority in decisions about the population's water supply and is to be used mainly for drinking-water supply. The use of groundwater for other purposes not connected with drinking-water supply (for example, industry or irrigation) is permitted only where there is a sufficient quantity of groundwater to cover both present and future drinking-water needs, and then only under a special licence issued by the environmental authority.

At the present time, more than 60% of cities and towns in the Russian Federation have groundwater sources of water supply. Groundwater is the main source of water supply in small and medium-sized towns; in some regions, however, its role in the domestic and potable water supply of large cities, including those with populations exceeding 1 million people, is very limited. Let us discuss in more detail the pattern of public water supply in the largest cities of Russia (population over 250,000), as shown in Table 1. The water supply of 34 of the 77 such cities is predominantly (more than 90%) based on surface water, and 24 cities meet their water demands mainly (more than 90%) by groundwater.
Table 1. Water supply of cities and towns of different population size (percentage of cities in each size class).

Water-supply sources                 <50 ths   51-100 ths   101-250 ths   251-500 ths   501 ths-1 mln   Over 1 mln
Mainly groundwater (more than 90%)      74         57            46            37             33              0
Mainly surface water (over 90%)         15         21            24            37             39             82
Ground and surface water                11         22            30            26             28             18
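The national figures quoted above are mutually consistent, as a few lines of arithmetic show. In the sketch below, all quantities except the population come from the text; the population figure (roughly 145 million, close to the 2002 census) is an assumption, and the 76% drinking share yields about 143 L/day against the 145 L/day quoted, a small rounding difference:

    # Cross-check of the groundwater-balance figures quoted in the text.
    extracted = 33.1e6    # m^3/day withdrawn, incl. mine and open-pit pumping
    used = 27.2e6         # m^3/day actually used
    population = 145e6    # assumed population of Russia, ca. 2002

    print(f"share of withdrawal actually used: {used / extracted:.0%}")  # 82%
    per_capita_l = used / population * 1_000       # m^3 -> litres
    print(f"per-capita groundwater use: {per_capita_l:.0f} L/day")       # ~188
    print(f"of which for drinking (76%): {0.76 * per_capita_l:.0f} L/day")  # ~143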
Groundwater pollution, which has occurred in recent decades in many regions, is a serious hazard that essentially limits the possibilities and prospects of groundwater use for potable water supply. Increased concentrations of compounds of nitrogen, iron, manganese, strontium, selenium, arsenic, fluorine, beryllium and organic matter are most often observed in groundwater and make it unfit for drinking purposes without special treatment. In many cases, the contamination that threatens potable and domestic water supplies is caused by leakage of petroleum products from gasoline tanks and pipelines. In the territory of Russia, more than 100 sources of groundwater contamination have been found, involving mainly sulfates, chlorides, nitrogen compounds (nitrates, ammonia, ammonium), petroleum, phenols, iron compounds and heavy metals (copper, zinc, lead, cadmium and mercury). Areas of groundwater pollution extend, in some cases, over tens and even hundreds of square kilometres.

Groundwater pollution at operating well fields is the most hazardous. At present, groundwater pollution is found at about 140 well fields that supply 87 Russian towns with water. Industrial plants are the main source of groundwater contamination, accounting for 42% of all contaminated sites. They are followed by waste accumulators, filtration fields, wastewater irrigation from cattle-breeding farms and filtration from the agricultural use of pesticides, manures and fertilizers (20%). Fourteen percent of the sites are contaminated with wastewater and public-service wastes. Substandard groundwater also serves as a source of contamination when it leaks into well fields whose production regime is disturbed.

In addition to groundwater pollution at separate wells and well fields, regional groundwater pollution occurs. Regional changes in groundwater composition and properties are usually caused by both point and areal pollution sources. The close interaction of groundwater with the other components of the environment has become especially evident in recent decades, as the impact of man-induced factors on the environment has grown. Urban impact on groundwater quality is the most intensive on a regional scale. It results from the increased mineralization of precipitation in urban areas and "acid" rain, from leakage of oil products, and from the impact of industrial and sewage waste; groundwater salinity in urban territories is usually 2-3 times higher than in rural areas. The impact of acid rain is a second example. It is known that the atmospheric emission of chemicals doubles every 10 years, which results in an increase of their concentrations in atmospheric precipitation. With the infiltration of atmospheric precipitation and snow-melt water, many different elements thus reach the groundwater and change its hydrochemical regime, composition and quality.
The most tragic example is associated with the impact of atomic power stations. The Chernobyl disaster's impact on groundwater was observed at a great distance from the site of the catastrophe. Considerable radionuclide accumulations formed in the upper soil layer in the Chernobyl area, and radionuclide migration through the vadose zone resulted in concentrations several tens and even hundreds of times higher than before the accident, even at large depths (up to 100 m). There are many similar cases. All these examples show that regional environmental pollution results in regional groundwater pollution, which makes it clear that the problems of protecting groundwater from contamination are closely related to the general problem of protecting the environment from contamination.
15. AIDS VACCINE STRATEGIES AND ETHICS IN INFECTIOUS DISEASES WORKSHOP
JOINT WORKING GROUP REPORT OF THE AIDS AND INFECTIOUS DISEASES PMP AND THE MOTHER AND CHILD HEALTH PMP*, 2003

ETHICAL ISSUES IN AIDS-HIV EPIDEMICS
GUY DE THÉ
Institut Pasteur, Paris, France

NATHALIE CHARPAK
Instituto Materno Infantil, Bogotá, Colombia

Other Panel Members: R. Anderson, F. Buonaguro, I. França Jr., J. Hinkula, J. Hutton, U. Schuklenk, W.A. Sprigg, R. Thorstensson, E. Vardas, I. Warren, R. Zetterström

INTRODUCTION
The World Federation of Scientists held a workshop on mother-to-child transmission of HIV (reference) in 2001. Control of HIV by both ARV treatment and vaccine development has progressed, but the epidemic has continued to expand, with 800,000 babies infected in 2002. More on the current figures of the epidemics (references). This article presents the common views of a diverse group of scientists, with different training, experience and interests, on the key issues to be addressed concerning HIV vaccine research and antiretroviral therapy in both developed and developing countries. These issues can logically be considered on a "temporal" scale of short-term and long-term issues, each with individual and community aspects. However, there is no dichotomy of individual versus community, vaccine versus treatment, or even short-term versus long-term: all the issues are inter-related.

The earliest legislation on the protection of subjects of medical research is that of Germany in 1900. In 1947 the Nuremberg Code was published; informed, voluntary consent is the first of its ten points, and scientific rigour and careful, continuing evaluation of benefits and risks are also required. The World Medical Association made the Declaration of Helsinki in 1964. Many versions of this extended document have followed; there are few, if any, new ethical principles, the additions addressing implementation and medical responsibilities. By 1981, WHO and CIOMS (the Council for International Organizations of Medical Sciences) had published guidelines for the application of the Declaration of Helsinki in different cultures, with particular concern for developing countries. These medical codes focus on people as patients or subjects. In contrast, the International Statistical Institute's Declaration of Professional Ethics (1985) states obligations to four groups, all of whom must be considered: society, subjects, colleagues, and employers or funders. Statisticians must be familiar with the codes of ethics of those with whom they work. Current work on a revision of this declaration includes making explicit the collective professional responsibility to comment publicly on errors of omission as well as of commission. The Royal Statistical Society Code of Conduct (1993) requires Fellows to address human rights, the consequences of ignoring statistical judgements, and the needs of fellow members.

In many areas of public health, a genuine tension exists between the wishes (the freedom of action) of the individual and the well-being of the community in which the individual lives.
Individual behaviours may affect the well-being of the community as a whole. In the past year we have witnessed a very important example: quarantine of suspected SARS patients, applied voluntarily in many countries but imposed in others. In the context of HIV/AIDS, the issue emerges in many areas, including: contact tracing of infected people to reduce secondary or tertiary transmission events; the need for anonymous HIV screening; the care of people, and of their partners who were not informed of their serostatus; and the need to counsel participants in an HIV vaccine trial to ensure that vaccination (or administration of a placebo) does not encourage unprotected sexual contact that generates secondary infections. The resolution of any such tension must be considered within the prevailing political and social context of the society in question, and solutions will depend on these prevailing conditions.

NEW ETHICAL ISSUES

AIDS vaccine trials
In vaccine research and development, only a small number of vaccine preparations are being tested in Phase I and II clinical trials, and there are very few Phase III clinical trials. As we do not know which preparation will be effective for prevention and therapy, we urge the European Community to finance the testing of several existing vaccine preparations to speed up the emergence of a good candidate. The efforts of the NIH and of the Global Fund must be complemented by a major EC effort. Cooperation between funding agencies and institutions must be achieved to allow clinical vaccine trials. Trials must include education and technology transfer to the clinical trial sites in the developing countries, to avoid neocolonialism.

Work on outcome measures for vaccines and therapies is urgent, as specific endpoints are required to compare different vaccine candidates and regimens tested in different studies. International agreement on which outcomes must be used for evaluation is necessary, because meta-analysis of publications with individually selected results can be seriously misleading or wrong, and hence unethical (Hutton and Williamson, J. Roy. Stat. Soc. 2000). Clinical endpoints could range from prevention of infection to disease prevention or maintenance of a low virus titer during structured ARV interruptions.

An ethical issue that has been highlighted in the phase I/II HIV vaccine trials being planned in developing countries is that of intercurrent HIV infections of participants. Participants will receive compensation for "trial-related injuries", usually for the duration of the trial and, in selected cases of severe side effects, for longer periods of time. For example, in South Africa there is currently (August 2003) no government antiretroviral policy in the public sector. Agreements between sponsors, researchers and regulatory authorities in South Africa have been reached: sponsors will provide funding for a trust fund to be established, and money from the trust fund will provide treatment for individuals who become infected during phase I/II HIV vaccine trials once they become ill with AIDS (using CDC criteria for the initiation of therapy). The debate is therefore focused on when to give treatment, for how long, and how it will be administered (i.e., by the researchers or by physicians outside the research environment).
This policy implies that the ethical responsibility for these intercurrent infections during phase I/II HIV vaccine trials is placed purely on the sponsors and researchers, even though these are purely safety trials of candidate vaccines with no proven efficacy, and this is clearly explained during the informed consent process.

Experimental vaccines and antiretroviral drugs must pass the standard FDA (or equivalent) regulations for medical products, which require testing on small animals. This implies a right of access to experimental animals. Each vaccine candidate under consideration should be justified by its endpoint use: to improve human health. Analyses aimed at improving the quality of therapeutic or preventive agents in animal models such as mice or primates must be performed under the highest ethical standards. Legislation on the care of experimental animals must be evaluated by exploring the restrictions it imposes on vaccine and treatment development and by explicitly assessing alternative opportunities for gaining knowledge.

AIDS/HIV-related stigma exists at all levels of prevention and treatment, and affects the recruitment and follow-up of study participants. It must be vigorously opposed so that vaccines and therapies can be correctly and effectively developed. Experience with Phase I trial participants in the USA and South Africa highlights the diverse responses of different communities. In the USA, many HIV vaccine trial participants found it difficult to reveal their participation to friends and colleagues, and chose to maintain confidentiality. In South Africa, by contrast, trial participants feel that by publicizing their participation in phase I vaccine trials they can convince their communities of the good they are doing in the face of a devastating epidemic that decimates their communities; these participants are seen as heroes doing significant work against the HIV epidemic. Such differences in stigma should be addressed in informed consent procedures, as they vary with cultural diversity and social habits. Various cultural approaches to informed consent are evolving as the number of international collaborative projects increases. Particular consideration must be given to implementing informed consent with people whose literacy is limited. Some vaccine trials are community-based; the associated ethical issues are discussed in WHO-CIOMS (1993) and in the UK Medical Research Council guidelines for cluster randomised studies (2002).

Access to antiretroviral treatment
We must remember that HIV/AIDS treatment is a lifetime commitment for the patient and for society. Lessons learned from the Brazilian experience of six years of treatment show that a developing country can reverse the AIDS crisis. We support access to local production in order to provide full treatment for everybody in each country affected by the HIV/AIDS epidemic. The moral obligation of ensuring access to anti-HIV drugs for deprived populations should be linked to the need to ensure mechanisms to monitor HIV drug resistance. International and regional cooperation is needed to strengthen the capacity to produce drugs. If a country cannot produce drugs or vaccines, non-profit production for regional distribution should be allowed. We recognize the need to control exports of drugs between regions.

As antiviral treatment of pregnant women leads to a significant reduction of mother-to-child HIV transmission, we feel that MTCT of HIV must be prevented by the pursuit of ART in the mother and the follow-up of the children. One issue that has to be evaluated urgently and scientifically is the risk of breastfeeding when the mother is adequately treated.
A further MTCT issue is the fight against the stigma attached to mothers who do not breastfeed; international collaborative projects have helped in this respect.
International cooperation
There is an increasing and urgent need to combine international scientific and financial support with commitment by local governments to combat the HIV/AIDS epidemics efficiently. Building health-sciences capacity at national and regional levels needs new impetus. This implies not only increased funding, but also the personal involvement of the best scientists and clinicians in collaborative projects with colleagues from developing countries. We urge research and university organizations to recognize the value of such collaboration for the careers of the people involved.
CONCLUSIONS
We recognize that the issues facing low-, middle- and high-income countries are somewhat different, and that decisions on implementation are influenced by cultural patterns. Therefore, although there are universal human rights, including that of access to existing HIV drugs, the implementation of ART requires building up the health infrastructure, and the needs differ among low-, middle- and high-income countries. As scientists we urge governments and aid organisations to recognise the implications for the spread of HIV and AIDS of fulfilling commitments to highly indebted nations, of providing universal access to education, and of the World Trade Organisation rules on goods and services. Poverty accelerates epidemics.

AIDS/HIV-related stigma and discrimination is a scientific and ethical issue, and should be addressed in study designs because of its impact on methodology and results. A loss of even 10% of patients due to fear of stigma can substantially affect the conclusions of scientific studies. Statistical methods for assessing the impact of loss to follow-up should be exploited; an illustration is sketched below. Expert statistical involvement in the design of surveys and experiments is a moral requirement: studies that are incorrect cannot be ethically acceptable.

The scientific community needs to be aware of its vulnerability in its relationships with politicians, governments and the media. We recognize each individual's responsibility as a scientist to avoid statements or views that would have major adverse public-health impacts outside the scientific community. Universities, research institutes and professional societies should recognize their obligation to educate the public about scientific issues and the governmental policies derived from them. This obligation is especially acute with respect to controversial issues that can endanger public health and safety.
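To make the loss-to-follow-up point concrete, the following minimal simulation (Python; all rates are hypothetical, chosen only for illustration) shows how dropout concentrated among participants who fear stigma biases a naive outcome estimate, even when overall loss is only about 10%:

    import random

    # Hypothetical illustration: if the ~10% of participants lost to
    # follow-up are disproportionately those with the outcome (e.g.
    # seroconversion), a naive complete-case estimate is biased downward.
    random.seed(1)
    N = 100_000
    TRUE_RATE = 0.20     # assumed true outcome rate
    P_DROP_POS = 0.30    # positives more likely to drop out (stigma)
    P_DROP_NEG = 0.05    # negatives rarely drop out

    observed = []
    for _ in range(N):
        positive = random.random() < TRUE_RATE
        p_drop = P_DROP_POS if positive else P_DROP_NEG
        if random.random() >= p_drop:       # participant retained
            observed.append(positive)

    lost = 1 - len(observed) / N
    naive = sum(observed) / len(observed)
    print(f"lost to follow-up: {lost:.0%}")                        # ~10%
    print(f"naive estimate: {naive:.1%} (true: {TRUE_RATE:.0%})")  # ~15.6% vs 20%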
SEMINAR PARTICIPANTS
Dr. Hussein Al Shahristani
University of Surrey, Guildford, UK
Professor Roy Anderson
Infectious Diseases Epidemiology Imperial College Faculty of Medicine London, UK
Dr. Giuseppe Tito Aronica
Hydrological Works, Università di Messina, Messina, Italy
Dr. Scott Atran
Institut Jean Nicod CNRS Paris, France
Professor William A. Barletta
Accelerator & Fusion Research Division Lawrence Berkeley National Laboratory Berkeley, USA
Dr. Paul Bartel
US Geological Survey Washington, USA
Professor Benfratello
University of Palermo Palermo, Italy
H. E. Dr. Guido Bertolaso
Italian Civil Protection Rome, Italy
Professor J. M. Borthagaray
Instituto Superior de Urbanismo, University of Buenos Aires, Buenos Aires, Argentina
Professor Enzo Boschi
National Institute for Geophysics and Vulcanology Rome, Italy
Dr. Vladimir B. Britkov
Information Systems Laboratory Institute for Systems Analysis Moscow, Russia
Dr. Franco Buonaguro
Fondazione Pascale Istituto Nazionale dei Tumori Naples, Italy
Dr. Diego Buriot
World Health Organisation CSR Office Lyon, France
H. E. Professor Rocco Buttiglione
Ministry of E. U. Affairs Rome, Italy
Dr. Gina M. Calderone
EA Science and Technology New York, USA
Dr. Paolo Capizzi
National Meteorological Service Rome, Italy
Dr. John P. Casciano
Enterprise Security Group Reston, USA
Professor Joseph Chahoud
Physics Department Bologna University Bologna, Italy
Dr. Nathalie Charpak
Instituto Materno Infantil Bogota, Colombia
Professor Robert Clark
Hydrology and Water Resources University of Arizona Tucson, USA
Dr. James H. Clarke
Civil and Environmental Engineering Vanderbilt University Nashville, USA
Dr. Massimo Cocco
National Institute for Geophysics and Vulcanology Rome, Italy
Sir Alan Cook
The Royal Society London, UK
Professor Carmelo Dazzi
Herbaceous Cultivation and Pedology University of Palermo Palermo, Italy
Professor Guy de Thé
Epidemiology of Oncogenic Viruses Institut Pasteur Paris, France
Dr. Carmen Difiglio
Energy Technology Policy Division International Energy Agency Paris, France
Dr. Mbareck Diop
Science & Technology Advisor Dakar, Senegal
Dr. Allan Duncan
NIREX Waste Management Advisory Committee Oxon, UK
Professor Christopher D. Ellis
Landscape Architecture & Urban Planning Texas A&M University College Station, USA
Dr. Lorne Everett
Stone & Webster Management Consultants, Inc. Santa Barbara, USA
Dr. Ivan França Junior
Public Health Faculty, University of São Paulo, São Paulo, Brazil
Professor William Fulkerson
Joint Institute for Energy and Environment University of Tennessee Lenoir City, USA
Professor Andrei Gagarinski
RRC "Kurchatov Institute", Moscow, Russia
Dr. Bertil Galland
Writer and Historian, Buxy, France
Dr. Richard Garwin
Thomas J. Watson Research Center, IBM Research Division, Yorktown Heights, USA
Dr. Gebhard Geiger
Applied Economics, Technische Universität München, Munich, Germany
H. E. Dr. Carlo Giovanardi
Ministry of Parliamentary Affairs Rome, Italy
Professor Alberto González Pozo
Theory and Analysis, Universidad Autónoma Metropolitana, Xochimilco, Mexico
Professor Louis J. Guillette, Jr.
U.F. Research Foundation University of Florida Gainesville, USA
Dr. Balamurugan Gurusamy
The Institute of Engineers Kuala Lumpur, Malaysia
Dr. Munther J. Haddadin
Ministry of Water & Irrigation of the Hashemite Kingdom of Jordan Amman, Jordan
Professor Jorma Hinkula
Karolinska Institute & Swedish Institute for Infectious Disease Control Stockholm, Sweden
Ms. Elizabeth K. Hocking
Argonne National Laboratory Washington, USA
Professor Reiner K. Huber
Faculty of Informatics, Universität der Bundeswehr München, Neubiberg, Germany
Dr. Jane L. Hutton
Department of Statistics The University of Warwick Coventry, UK
Dr. Ahmad Kamal
Ambassador (ret.) U. N. Institute for Training and Research New York, USA
Dr. Ibrahim Karawan
Middle East Center University of Utah Salt Lake City, USA
Professor W. A. Kastenberg
Department of Nuclear Engineering University of California Berkeley, USA
Dr. Tomio Kawata
Office for Policy Planning & Administration, Japan Nuclear Cycle Development Institute, Ibaraki, Japan
Dr. Hisham Khatib
Honorary Vice Chairman World Energy Council Amman, Jordan
Dr. Stephen J. Kowall
National Engineering and Environmental Laboratory Idaho Falls, USA
Professor Victor A. Kremenyuk
Institute of USA Studies Russian Academy of Sciences Moscow, Russia
Dr. Andrei Krutskih
Department of Science and Technology Russian Foreign Ministry Moscow, Russia
Professor Valery Kukhar
Institute for Bio-organic Chemistry Academy of Sciences Kiev, Ukraine
Professor Tsung-Dao Lee
Department of Physics Columbia University New York, USA
Professor Axel Lehmann
Institute for Technical Informatics, Universität der Bundeswehr München, Neubiberg, Germany
Dr. Arthur H. Lerner-Lam
Lamont-Doherty Earth Observatory Columbia University New York, USA
Dr. Genevieve Lester
International Institute for Strategic Studies - U.S., Washington, USA
Dr. Mark D. Levine
Lawrence Berkeley National Laboratory Environmental Energy Technologies Berkeley, USA
Dr. Luca Malagnini
National Institute for Geophysics and Vulcanology Rome, Italy
Professor Michael E. Mann
Department of Environmental Sciences University of Virginia Charlottesville, USA
Professor Sergio Martellucci
Physics and Energy Science & Technology, Università degli Studi di Roma "Tor Vergata", Rome, Italy
H. E. Professor Antonio Marzano
Ministry of Productive Activities Rome, Italy
Professor Farhang Mehr
University of Boston Boston, USA
Dr. Anton Micallef
Euro-Mediterranean Centre on Insular Coastal Dynamics Valletta, Malta
Dr. Akira Miyahara
National Institute for Fusion Science Tokyo, Japan
Dr. Andrea Morelli
National Institute for Geophysics and Vulcanology Rome, Italy
Professor Wael Mualla
Water Engineering University of Damascus Damascus, Syria
Commander W. Müller-Seedorf
Center for Analyses and Studies, German Armed Forces, Waldbröl, Germany
Dr. Amador Muriel
World Laboratory Centre for Fluid Dynamics Makati, The Philippines
Professor John Peterson Myers
Environmental Health Sciences Crozet, USA
Mr. Ken Nash
Nuclear Waste Management Division Ontario Power Generation Toronto, Canada
Dr. Slobodan Nickovic
Euro-Mediterranean Centre on Insular Coastal Dynamics Valletta, Malta
Dr. Jef Ongena
Ecole Royale Militaire Plasma Physics Laboratory Brussels, Belgium
Professor Carlos Ordóñez
Physics Department University of Houston Houston, USA
Professor Guy Ourisson
Neurochemistry Centre, Académie des Sciences, Strasbourg, France
Professor Donato Palumbo
World Laboratory Centre, Fusion Training Programme, Palermo, Italy
Dr. David E. Parker
Meteorological Office Centre for Climate Prediction & Research Berkshire, UK
Professor Stefano Parmigiani
Evolutionary and Functional Biology, University of Parma, Parma, Italy
Professor Margaret Petersen
Hydrology & Water Resources University of Arizona Tucson, USA
Professor A. Townsend Peterson
Ecology and Evolutionary Biology, University of Kansas, Lawrence, USA
Professor Juras Pozela
Lithuanian Academy of Sciences Vilnius, Lithuania
Professor Richard Ragaini
Department of Environmental Protection Lawrence Livermore National Laboratory Livermore, USA
Professor Vittorio Ragaini
Chemical Physics and Electro-Chemistry University of Milano Milan, Italy
Professor Salvatore Raimondi
Herbaceous Cultivation and Pedology University of Palermo Palermo, Italy
Professor Karl Rebane
Department of Physics University of Tallinn Tallinn, Estonia
Professor Norman Rosenberg
Joint Global Change Research Institute, Baltimore, USA
Dr. Arthur H. Rosenfeld
California Energy Commission Sacramento, USA
Professor Giuseppe Rossi
Civil and Environmental Engineering University of Catania Catania, Italy
Dr. Luca Rossi
Hydraulic Emergencies Italian Civil Protection Agency Rome, Italy
Professor Zenonas Rudzikas
Theoretical Physics & Astronomy Institute Lithuanian Academy of Sciences Vilnius, Lithuania
Dr. A. Rybalchenko
FGUP VNIPI Promtechnologij Moscow, Russia
Professor Ilkay Salihoglu
Institute of Marine Sciences Middle East Technical University Icel, Turkey
Dr. Maher Salman
Irrigation Water Management, IPRID/AGL, FAO, Rome, Italy
H. E. Msgr. Marcelo Sánchez Sorondo
Bishop-Chancellor Pontificia Academia Scientiarum Rome, The Vatican
Dr. Benjamin Santer
Climate Model Diagnosis & Intercomparison, Lawrence Livermore National Laboratory, Livermore, USA
Professor Mario Santoro
Hydraulics Engineering and Environmental Applications University of Palermo Palermo, Italy
Dr. Jean B. Savy
International Institute for Strategic Studies Lawrence Livermore National Laboratory Livermore, USA
Professor Hiltmar Schubert
Fraunhofer Institute for Chemical Technology Pfinztal, Germany
Professor Udo Schuklenk
Bioethics Division University of the Witwatersrand Johannesburg, South Africa
Dr. Leonardo Seeber
Lamont-Doherty Earth Observatory Columbia University, New York, USA
Professor Geraldo Gomes Serra
NUTAU, University of São Paulo, São Paulo, Brazil
Professor William R. Shea
History of Science University of Padova Padova, Italy
Dr. Uri Shavit
Civil and Environmental Engineering Technion Israel Institute of Technology Haifa, Israel
Professor Nir J. Shaviv
Racah Institute of Physics Hebrew University of Jerusalem Jerusalem, Israel
Professor K.C. Sivaramakrishnan
Centre for Policy Research, New Delhi, India
Dr. David K. Smith
Science & Technology Dept. Lawrence Livermore National Laboratory Livermore, USA
Dr. Shaul Sorek
Environmental Hydrology and Microbiology Jacob Blaustein Institute for Desert Research Sde Boker Campus, Israel
Professor William A. Sprigg
Institute for the Study of Planet Earth University of Arizona Tucson, USA
Dr. Bruce Stram
WFS Energy Permanent Monitoring Panel World Federation of Scientists Houston, USA
Professor Michael Stürmer
Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany
Dr. Shanna Helen Swan
Family and Community Medicine School of Medicine, University of Missouri Columbia, USA
Professor Kamran Talattof
Department of Near Eastern Studies, University of Arizona, Tucson, USA
Dr. G. Gray Tappan
US Geological Survey, International Programs, EROS Data Center, Sioux Falls, USA
Dr. Terence Taylor
International Institute for Strategic Studies - U.S. Washington, USA
Professor Rigmor Thorstensson
Immunology and Vaccinology Dept. Swedish Institute for Infectious Diseases Control Solna, Sweden
Dr. Larry Tieszen
US Geological Survey, International Programs, EROS Data Center, Sioux Falls, USA
Dr. Bob van der Zwaan
Energy Research Centre of the Netherlands ECN - Policy Studies Amsterdam, The Netherlands
Dr. Eftyhia Vardas
Perinatal HIV Research Unit University of Witwatersrand Johannesburg, South Africa
Dr. Eileen Vergino
Center for Global Security Research Lawrence Livermore National Laboratory Livermore, USA
Dr. Frederick vom Saal
Division of Biological Sciences University of Missouri Columbia, USA
Professor François Waelbroeck
World Laboratory Centre Fusion Training Programme St. Amandsberg, Belgium
Professor Andrew W. Warren
Department of Geography University of London London, UK
Dr. Henning Wegener
Ambassador of Germany (ret.) Information Security PMP Madrid, Spain
Dr. Jody Westby
The Work-It Group Denver, USA
Professor Richard Wilson
Department of Physics Harvard University Cambridge, USA
Professor Aaron Yair
Department of Geography The Hebrew University Jerusalem, Israel
Dr. Igor S. Zektser
Hydrology Department Water Problems Institute Moscow, Russia
Professor Antonino Zichichi
CERN, Geneva, Switzerland, and University of Bologna, Bologna, Italy