SCALING IN INTEGRATED ASSESSMENT
INTEGRATED ASSESSMENT STUDIES

Series Editors: P. Martens, J. Rotmans, International Centre for Integrative Studies, Maastricht University, Maastricht, The Netherlands
Advisory Board:
M.B. Beck, Environmental Informatics and Control Program, Warnell School of Forest Resources, University of Georgia, Athens, Georgia, USA
J. Robinson, Sustainable Development Research Institute and Department of Geography, University of British Columbia, Vancouver, Canada
H.J. Schellnhuber, Potsdam Institute for Climate Impact Research, Potsdam, Germany; Tyndall Centre for Climate Change Research, Norwich, UK
SCALING IN INTEGRATED ASSESSMENT
Editors Jan Rotmans & Dale S. Rothman
SWETS & ZEITLINGER PUBLISHERS
LISSE
ABINGDON
EXTON (PA)
TOKYO
Library of Congress Cataloging-in-Publication Data Applied for
This edition published in the Taylor & Francis e-Library, 2005.

“To purchase your own copy of this or any of Taylor & Francis or Routledge’s collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk.”

Cover design: ZWAARWATER, Esther Mosselman, Amsterdam, The Netherlands

Copyright © 2003 Swets & Zeitlinger B.V., Lisse, The Netherlands

All rights reserved. No part of this publication or the information contained herein may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, by photocopying, recording or otherwise, without written prior permission from the publishers.

Although all care is taken to ensure the integrity and quality of this publication and the information herein, no responsibility is assumed by the publishers nor the author for any damage to property or persons as a result of operation or use of this publication and/or the information contained herein.

Published by: Swets & Zeitlinger Publishers
www.szp.swets.nl
www.balkema.nl
ISBN 0-203-97100-0 (Master e-book ISBN)
ISBN 90 265 1947 8 (Print Edition)
ISSN 1569-299X
CONTENTS

ACKNOWLEDGEMENTS
THE EDITORS
THE CONTRIBUTORS
FOREWORD
LIST OF FIGURES
LIST OF TABLES

1 INTRODUCTION
   INTRODUCTION
   SETTING THE STAGE
   ABOUT THIS BOOK

2 GEOGRAPHIC SCALING ISSUES IN INTEGRATED ASSESSMENTS OF CLIMATE CHANGE
   HOW SCALE MATTERS
      BASIC CONCEPTS
      FINDINGS TO DATE
      ELEMENTS OF A COHERENT STORY LINE ABOUT HOW SCALE MATTERS
   OPERATIONAL ISSUES
      THE PRINCIPAL ALTERNATIVES
      THE PRINCIPAL CHALLENGES
   DIRECTIONS FOR IMPROVING OUR CAPABILITIES
   REFERENCES

3 MICRO/MACRO AND SOFT/HARD: DIVERGING AND CONVERGING ISSUES IN THE PHYSICAL AND SOCIAL SCIENCES
   ABSTRACT
   ACKNOWLEDGEMENTS
   INTRODUCTION AND OVERVIEW
   SCALES IN THE SOCIAL SCIENCES: MIXING LEVELS OR WHAT IS THE DIFFERENCE?
   SCALES IN THE PHYSICAL SCIENCES: THE CLIMATE SYSTEM
   THERE IS NOTHING AS PRACTICAL AS A GOOD THEORY
   THE DIFFERENCES THAT MAKE A DIFFERENCE: SCALES IN CLIMATE CHANGE AND CLIMATE IMPACT RESEARCH
   CONCLUSIONS
   REFERENCES

4 SCALE AND SCOPE IN INTEGRATED ASSESSMENT: LESSONS FROM TEN YEARS WITH ICAM
   ABSTRACT
   ACKNOWLEDGEMENTS
   INTRODUCTION
   LESSONS FROM DEVELOPING ICAM
      QUESTIONS, SCOPE AND SCALE
      PERCEPTIONS, IMPACTS AND ADAPTATION
      SEA LEVEL RISE REVISITED
      ENERGY MARKETS AND TECHNOLOGICAL PROGRESS
      SCALE, THE STUDY OF CLIMATE POLICY AND ITS EVOLUTION
   CONCLUSIONS ON SCOPE AND SCALE
   REFERENCES

5 SCALING ISSUES IN THE SOCIAL SCIENCES
   ABSTRACT
   ACKNOWLEDGEMENTS
   INTRODUCTION
   SCALING TERMINOLOGY
   CONTEXT OF SCALE ISSUES IN SOCIAL RESEARCH
   SCALE ISSUES IN SOCIAL SCIENCE DISCIPLINES
      SCALE ISSUES IN GEOGRAPHY
      SCALE ISSUES IN ECONOMICS
      SCALE ISSUES IN ECOLOGICAL ECONOMICS
      SCALE ISSUES IN URBAN STUDIES
      SCALE ISSUES IN SOCIOLOGY
      SCALE ISSUES IN POLITICAL SCIENCE AND POLITICAL ECONOMY
   SCALE, SOCIAL SCIENCE DATA COLLECTION, REPRESENTATION AND ANALYSIS
   SOCIAL SCIENCES METHODS ADDRESSING SCALE
      CONTEXTUAL ANALYSIS
      MULTI-LEVEL MODELING
      HIERARCHY THEORY
      MODIFIABLE AREAL UNIT PROBLEM
   SCALE, SOCIAL SCIENCE AND INTEGRATED ASSESSMENT MODELING
   CONCLUSION
   REFERENCES

6 SUSTAINABILITY AND ECONOMICS: A MATTER OF SCALE?
   ABSTRACT
   ACKNOWLEDGEMENTS
   INTRODUCTION
   RETURNS TO SCALE
   INDUSTRIAL DISTRICTS IN THE WORLD ECONOMY
   BACK TO THE FUTURE
   CONCLUSION
   REFERENCES

7 SCALES IN ECONOMIC THEORY
   INTRODUCTION
   SCALES AND AGGREGATION
   SPACE AND AGGREGATION IN ECOLOGY
   SPACE AND AGGREGATION IN ECONOMICS
   SPATIAL RESOLUTION AND EMERGING PATTERNS OF LOCATION BEHAVIOUR
      DOES SPATIAL RESOLUTION MATTER IN SCIENTIFIC DISCIPLINES THAT DEAL WITH SPACE?
      GEOGRAPHY
      REGIONAL ECONOMICS
   CONCLUSION
   REFERENCES

8 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS: FROM POINTS UPWARD AND FROM GLOBAL MODELS DOWNWARDS
   ACKNOWLEDGEMENTS
   INTRODUCTION
   SCALING UP IMPACTS
      SITE-DRIVEN APPROACHES
      UNIFORM GRID APPROACH
      SPATIALLY COMBINED: UNIFORM GRIDS WITH RELATIONAL SOILS
      SPATIAL INTERPOLATION APPROACH
      STOCHASTIC SPACE APPROACH
   DOWNSCALING
      OVERVIEW OF METHODS
   ISSUES IN SCALING METHODS
      INPUT DATA
      TECHNICAL EXPERTISE
      VALIDATION
      UNCERTAINTY AND RISK
      STAKEHOLDER PARTICIPATION
      SCALING AGENTS
   CONCLUSIONS
   REFERENCES

9 STRATEGIC CYCLICAL SCALING: BRIDGING FIVE ORDERS OF MAGNITUDE SCALE GAPS IN CLIMATIC AND ECOLOGICAL STUDIES
   SCALING PARADIGMS IN MODELING COUPLED SYSTEMS
   ECOLOGICAL RESPONSES TO CLIMATE CHANGES AS SCALING EXAMPLES
   SCALING ANALYSIS OF ECOLOGICAL RESPONSES
   INTEGRATED ASSESSMENT VIA COUPLED SOCIO-NATURAL SYSTEMS MODELS
   CONCLUSIONS
   REFERENCES

10 THE SYNDROMES APPROACH TO SCALING
   ABSTRACT
   DEALING WITH GLOBAL CHANGE – THE SCALING PROBLEM AS ONE CRUCIAL ASPECT OF COMPLEXITY AND UNCERTAINTY
   IDEALISTIC DEDUCTION VERSUS REALISTIC INDUCTION
   SYNDROMES II: MODELLING, SPATIAL AND FUNCTIONAL SCALE OF VALIDITY
      QUALITATIVE DIFFERENTIAL EQUATIONS – FORMALIZING COARSE FUNCTIONAL SCALES
      GENERAL HAZARDOUS FUNCTIONAL PATTERNS AND DETAILED LOCAL CASE STUDIES
      THE EXTENDED SAHEL HFP AND SPATIAL DISTRIBUTION OF TIME BEHAVIOURS
   CONCLUDING REMARKS
   APPENDIX A: IMPORTANT TERMS OF THE SYNDROME CONCEPT
   APPENDIX B: SYMBOLS USED FOR THE GRAPHICAL REPRESENTATION OF QUALITATIVE MODELS
   REFERENCES

11 POLYCENTRIC INTEGRATED ASSESSMENT
   ABSTRACT
   INTRODUCTION
   THE DECISION PERSPECTIVE
      NOVEL APPROACHES TO DECISION MAKING
   THE IMPORTANCE OF SCALES
      ENVIRONMENT HUMAN INTERACTIONS ACROSS SCALES
      CLIMATE CHANGE AND WATER RESOURCE MANAGEMENT FROM A SCALING PERSPECTIVE IN TIME AND SPACE
   CLIMATE CHANGE AND BEYOND
   SUSTAINABLE MANAGEMENT OF WATER RESOURCES
      SOCIETAL TRANSITIONS AND LOCK-IN EFFECTS
      MARKET BASED INSTITUTIONS FROM LOCAL TO GLOBAL SCALES
   SUMMARY AND CHALLENGES FOR INTEGRATED ASSESSMENT
   REFERENCES

12 EMERGENT PROPERTIES OF SCALE IN GLOBAL ENVIRONMENTAL MODELING – ARE THERE ANY?
   ABSTRACT
   SCALES IN CLIMATE CHANGE IMPACT ASSESSMENT
   HUMAN-ENVIRONMENT SYSTEMS AS COMPLEX AND HIERARCHICAL
   EMERGENCE: REAL OR IMAGINED?
      HIERARCHICAL EMERGENCE
   SELF-ORGANIZATION AS A DYNAMICAL THEORETICAL BASIS FOR SCALE-RELATED EMERGENCE
      AN APPLICATION TO THE PROBLEM OF THE VULNERABILITY OF THE USA AGRICULTURAL PRODUCTION SYSTEM TO CLIMATE CHANGE
      EMERGENT PROPERTIES, VULNERABILITY AND RESILIENCE OF LAND USE SYSTEMS WITH ENVIRONMENTAL FORCING: THE CASE OF HURRICANE MITCH AND HONDURAN AGRICULTURE
   ARE ISSUES OF SCALE AND SURPRISE CONNECTED?
   CONCLUSION
   REFERENCES

13 COMPLEXITY AND SCALES: THE CHALLENGE FOR INTEGRATED ASSESSMENT
   ABSTRACT
   ACKNOWLEDGMENTS
   INTRODUCTION – THE EPISTEMOLOGICAL DIMENSION OF COMPLEXITY
   PART 1 – HOLARCHIES, NON-EQUIVALENT DESCRIPTIVE DOMAINS, AND NON-REDUCIBLE ASSESSMENTS
      SELF-ORGANIZING SYSTEMS ARE MADE OF NESTED HIERARCHIES AND THEREFORE ENTAIL NON-EQUIVALENT DESCRIPTIVE DOMAINS
      HOLONS, HOLARCHIES AND NEAR-DECOMPOSABILITY OF HIERARCHICAL SYSTEMS
      THE EPISTEMOLOGICAL PREDICAMENTS IMPLIED BY THE AMBIGUOUS IDENTITY OF HOLARCHIES
      BIFURCATION, EMERGENCE AND SCIENTIFIC IGNORANCE
      NON-REDUCIBILITY (MULTIPLE CAUSALITY) AND INCOMMENSURABILITY
   PART 2 – IMPLICATIONS OF COMPLEXITY AND SCALES ON INTEGRATED ASSESSMENT
      THE EPISTEMOLOGICAL PREDICAMENT OF SUSTAINABILITY ANALYSIS
      A NEW CONCEPTUALIZATION OF “SUSTAINABLE DEVELOPMENT”: MOVING FROM “SUBSTANTIAL” TO “PROCEDURAL” RATIONALITY
   CONCLUSION
   REFERENCES

14 SCALING IN INTEGRATED ASSESSMENT: PROBLEM OR CHALLENGE?
   INTRODUCTION
   WHAT IS THE PROBLEM?
   SCALING IN IA-MODELS
      GRID-CELL BASED MODELLING
      CELLULAR AUTOMATA MODELLING
      MULTIPLE-SCALE REGRESSION MODELLING
   SCALING IN IA-SCENARIOS
   AGENTS AND SCALE
   UNCERTAINTY AND SCALE
   IS THERE A SOLUTION?
   CONCLUSIONS AND RECOMMENDATIONS
   REFERENCES
ACKNOWLEDGEMENTS Many people contributed directly or indirectly to this book. First of all, we would like to thank the authors for their efforts and patience and for the fruitful discussions we had during the original workshop. Furthermore, we would like to thank all of our colleagues at the International Centre for Integrative Studies (ICIS) at Maastricht University for their help in finalising this book; special thanks to Frank Nelissen for his tireless effort in formatting the manuscript and Caroline van Bers for her incomparable organisation of the original workshop. Finally, we thank all of our colleagues from institutes and universities world-wide who have given us feedback on earlier versions of the chapters. Jan Rotmans & Dale S. Rothman
THE EDITORS Jan Rotmans and Dale S. Rothman are at the International Centre for Integrative Studies (ICIS), Maastricht University.
THE CONTRIBUTORS

M. Bindi, DISAT, University of Florence, Florence, Italy
R.J. Brooks, IACR Long Ashton Research Station, University of Bristol, Bristol, UK
R.E. Butterfield, Environmental Change Institute, University of Oxford, Oxford, UK
T.R. Carter, Finnish Environment Institute, Helsinki, Finland
R. Delécolle, INRA – Unité de Bioclimatologie, Avignon, France
T.E. Downing, Environmental Change Institute, University of Oxford, Oxford, UK
Hadi Dowlatabadi, Sustainable Development Research Institute, University of British Columbia, Vancouver, British Columbia, Canada
William E. Easterling, Department of Geography and Center for Integrated Regional Assessment, The Pennsylvania State University, United States
Tom P. Evans, Department of Geography, Center for the Study of Institutions, Population and Environmental Change, Indiana University, Bloomington, United States
Mario Giampietro, National Institute of Research on Food and Nutrition (INRAN), Unit of Technological Assessment, Rome, Italy
Clark Gibson, Department of Political Science, University of California, San Diego, United States
Z.S. Harnos, Department of Mathematics and Informatics, University of Horticulture and Food Industry, Budapest, Hungary
P.A. Harrison, Environmental Change Institute, University of Oxford, Oxford, UK
A. Iglesias, Escuela Tecnica Superior de Ingenieros Agronomos, Ciudad Universitaria, Madrid, Spain
Carlo C. Jaeger, Department of Global Change and Social Systems, Potsdam Institute of Climate Impact Research, Potsdam, Germany
Kasper Kok, International Centre for Integrative Studies, University of Maastricht, The Netherlands
M.K.B. Lüdeke, Potsdam Institute for Climate Impact Research, Potsdam, Germany
S. Moss, Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK
M. New, School of Geography and the Environment, University of Oxford, Oxford, UK
J.E. Olesen, Department of Crop Physiology & Soil Science, DIAS, Research Centre Foulum, Tjele, Denmark
J.L. Orr, Scottish Natural Heritage, Edinburgh, Scotland
Elinor Ostrom, Department of Political Science, Center for the Study of Institutions, Population and Environmental Change, Indiana University, Bloomington, United States
Henriëtte Otter, University of Twente, The Netherlands
Claudia Pahl-Wostl, Interdisciplinary Institute for Environmental Systems Science, University of Osnabrück, Germany
G. Petschel-Held, Potsdam Institute for Climate Impact Research, Potsdam, Germany
J. Porter, Department of Agricultural Sciences, Royal Veterinary and Agricultural University, Taastrup, Denmark
Terry L. Root, Center for Environmental Science and Policy, Institute for International Studies, Stanford University, United States
Stephen H. Schneider, Department of Biological Sciences and the Institute for International Studies, Stanford University, United States
H.-J. Schellnhuber, Potsdam Institute for Climate Impact Research, Potsdam, Germany
M.A. Semenov, IACR Long Ashton Research Station, University of Bristol, Bristol, UK
Nico Stehr, Kulturwissenschaftliches Institut, Essen, Germany
Richard Tol, Centre for Marine and Climate Research, Hamburg University, Hamburg, Germany; Institute for Environmental Studies, Vrije Universiteit, Amsterdam, The Netherlands; Centre for Integrated Study of the Human Dimensions of Global Change, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Anne van der Veen, University of Twente, The Netherlands
Hans von Storch, Institute for Coast Research, GKSS Forschungszentrum, Geesthacht, Germany
Thomas J. Wilbanks, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
J. Wolf, Department of Theoretical Production Ecology, Wageningen Agricultural University, The Netherlands
FOREWORD

The past decades have seen the rise of a new academic field, Integrated Assessment, in which different strands of knowledge are combined in order to better represent and analyze real-world problems of interest to decision-makers. For many, if not most, of these problems this means dealing with complex issues that operate at multiple scale levels. In particular, the choices made in an assessment regarding the geographic and time scales of the issues considered, and the limitations inherent in those choices, require more attention than they have received so far. Just as addressing problems from the perspective of a single discipline or sector can result in an incomplete and often problematic picture of society and societal concerns, so can focusing on a single scale and ignoring the interactions between and across scales. The ultimate effect is that decisions will be made and actions taken that may be inappropriate and, at times, counterproductive.

For this reason, the European Forum on Integrated Environmental Assessment (EFIEA) was pleased to host a Policy Workshop on Scaling Issues in Integrated Assessment. This workshop, held from 12-19 July 2000 in the town of Mechelen, close to Maastricht in the Netherlands, brought together European and international experts who have been addressing these issues from many different perspectives. The papers in this volume were either prepared for or grew out of this workshop. As such, they reflect the state of the art in addressing the issue of scaling in Integrated Assessment. They show that, while increasing effort has been devoted to this direction, much remains to be accomplished. For example, the workshop illustrated that it is not just the geographic and time scales that influence the set-up and the outcome of an assessment; the “scale” and range of institutions considered is also an issue that requires the explicit attention of integrated assessment researchers and practitioners.
This volume presents the broad range of scaling issues relevant to Integrated Environmental Assessment. As such, it lays the foundation for further work in this important area of Integrated Assessment.

Pier Vellinga
Chair of EFIEA
LIST OF FIGURES

Figure 2.1: Scale-dependent distribution of impacts of climate change
Figure 2.2: Scale domains of climate change and consequences
Figure 2.3: Trajectories from global-scale expert analysis to local action
Figure 2.4: Macroscale/microscale interactions in global change
Figure 2.5: The GCLP concept
Figure 2.6: Oak Ridge Climate Impact Response (CLIR) Model
Figure 2.7: Oak Ridge approach for scale integration
Figure 3.1: Scales in the atmospheric dynamics
Figure 3.2: a) EBM without noise, b) with noise
Figure 4.1: A global influence diagram of the climate change problem
Figure 4.2: Local trends in relative sea level
Figure 6.1: Sum of angles in a Euclidean triangle
Figure 7.1: Processes, interactions, analytical scale and traditional hierarchy in economics
Figure 8.1: Approaches for representing spatial variability in models
Figure 8.2: Comparison of the mean duration of …
Figure 8.3: Change in mean water-limited wheat yield
Figure 8.4: Schema showing the approaches for the spatial application of crop models
Figure 8.5: Comparison between simulated model output variables (Sim) and the observed statistical data (Obs)
Figure 8.6: Flow chart showing the three types of calibration undertaken for STICS-Wheat and their application (Source: Delécolle [37])
Figure 8.7: Simulated distributions of yield for the observed climate
Figure 9.1: “Cliff diagram” of equilibrium THC overturning varying …
Figure 10.1: Hypothesised process of the deduction of Hazardous Functional Patterns
Figure 10.2: Approaches to identification of Hazardous Functional Patterns
Figure 10.3: Network of interrelations for the Sahel-Syndrome-generating functional pattern
Figure 10.4: Structure of the algorithm for calculating the disposition towards the SAHEL SYNDROME
Figure 10.5: Disposition towards the SAHEL SYNDROME under the present climate
Figure 10.6: Climate sensitivity of the disposition towards the SAHEL SYNDROME
Figure 10.7: Basic relation for the didactic model to explain the qualitative modelling approach
Figure 10.8: Qualitative behaviours of the simple didactic model for a general logistic growth
Figure 10.9: Core mechanism of the original version of the SAHEL SYNDROME
Figure 10.10: Scheme of generalisation used to formulate a class of civilisation-nature interactions
Figure 10.11: General scheme of case study integration into a common class of causes and effects
Figure 10.12: “Behaviour tree” for the original Sahel HFP of causes and effects
Figure 10.13: Enhancement of the simple Sahel HFP
Figure 10.14: Resulting behaviour tree for the enhanced Sahel HFP
Figure 11.1: Representation of the pathways relevant for decision making
Figure 11.2: A generic approach to represent decision making
Figure 11.3: Different levels in a hierarchical system
Figure 11.4: Comparison of the PSI sequence
Figure 11.5: Overview of the measures catalogue included in OPTIONS
Figure 11.6: Lock-in effect preventing the spread of an innovation
Figure 11.7: Determinants for lock-in effects in urban water management
Figure 11.8: Some urban-rural couplings in urban water management
Figure 11.9: Different areas of research in agent based modeling
Figure 12.1: Hypothetical aggregation error by up-scaling non-linear relations between crop yield and precipitation
Figure 12.2: The Human-Environment System
Figure 12.3: Levels of a hierarchy
Figure 12.4: Cluster (square) aggregation method
Figure 12.5: Linear aggregation method
Figure 12.6: Complexity versus randomness
Figure 12.7: Substitution of energy for labor in American agriculture in the 20th century
Figure 12.8: Four land-use system functions and the flow of events between them
Figure 12.9: Short-term and long-term effects of Hurricane Mitch on cover percentage of maize in Honduras
Figure 13.1: Non-equivalent descriptive domains needed to obtain non-equivalent pattern recognition in nested hierarchical systems
Figure 13.2: Multi-objective integrated representation of the performance of a car
Figure 14.1: The IPCC SRES scenarios as branches of a two-dimensional tree
Figure 14.2: Different scale levels of agent representation
Figure 14.3: Multiple-scale representation of an institutional agent
Figure 14.4: Typology of sources of uncertainty
Figure 14.5: Four phases of the transition curve
LIST OF TABLES
Table 4.1: Four successive generations of ICAM .... 55
Table 4.2: Climate questions and climate change policy .... 58
Table 5.1: The relationships of analytical levels of human choice and geographic domains .... 92
Table 8.1: Modelled mean, maximum and minimum yields calculated for MAFF’s Government Office Region .... 153
Table 8.2: Approaches to downscaling .... 167
Table 10.1: Syndromes of global change .... 212
Table 10.2: Comparison of important features of conventional modelling with ordinary differential equations (left) and qualitative modelling (right) using QDEs .... 220
Table 12.1: Southeastern USA simulated corn yield response to 1960–1995 observed climate and CSIRO climate change (2×CO2) at different levels and shapes of units of aggregation: yields averaged over time and aggregation units .... 274
Table 13.1: Multiple scientific explanations for a given event .... 315
Table 13.2: Example of an impact matrix .... 318
1 Introduction
DALE S. ROTHMAN
International Centre for Integrative Studies, University of Maastricht, The Netherlands
Introduction
From 12–19 July 2000, the European Forum on Integrated Environmental Assessment (EFIEA) hosted a Policy Workshop on Scaling Issues in Integrated Assessment in the town of Mechelen, close to Maastricht in the Netherlands. This was the second of the so-called “Matrix” workshops, which addressed specific methodological topics relevant to Integrated Environmental Assessment in the context of policy-relevant issues. In each workshop, a group of approximately 50–60 participants, including EFIEA members and other scholars, was brought together to explore theoretical and methodological issues and to share practical experiences. The first Matrix workshop was held in July 1999 in Baden, close to Vienna, where the focus was on uncertainty. In the second workshop, the focus of this volume, the emphasis was on the issue of scaling in Integrated Environmental Assessment. Thus, it is commonly referred to as the EFIEA Scaling Workshop.
Setting the Stage
The issue of scale, in time, space, and quantity, is of fundamental importance in the field of Integrated Assessment. By definition, Integrated Assessment deals with complex issues that operate at multiple scale levels. Within the natural sciences the scale problem has played an important role for some time. For many social scientists the scale issue is a relatively new area of concern, although its importance is increasingly recognized. Insights from both the social and natural sciences are of crucial importance in understanding the complex relationships between humans and the natural environment. There is a growing need for interdisciplinary approaches to scaling issues: approaches that combine insights from both the natural and social sciences. These interdisciplinary approaches can pave the way for a more common understanding of the role of scale in many current societal problems.
To date, no grand ‘scale theories’ or standard procedures have been developed that allow integrated assessors to deal with different and multiple spatial scales, and with the short and long term, in an appropriate and qualified manner in their assessment endeavors. The aim of the workshop therefore was to address this observed need and to take a significant step towards the development of heuristics, procedures and tools to address spatial and temporal scale issues. The workshop focused on various aspects of scales in Integrated Assessment with respect to data/indicators, models and scenarios. For each of these topics, the available theories and practical methods were screened for their contribution to Integrated Assessment. The workshop format combined lectures on topical issues around scale with more applied work sessions. Ten speakers, well known in the field of integrated assessment and modeling, prepared and presented papers during morning sessions. Building on these presentations, afternoon work sessions focused on topical issues related to several key themes: cross-scale interactions; up- and downscaling (including aggregation and disaggregation); scaling and modeling; scaling and scenarios; and scaling and indicators/data. Based on these sessions, scaling concepts were clarified and further refined, and a research agenda was developed for subsequent explorations in scale management. This volume represents the key tangible output of the Scaling Workshop. In addition to the ten papers prepared and later refined by the key speakers, three other papers prepared by participants at the workshop are also included here. These papers reflect the wide range of topics that were addressed during this meeting. They point to a number of unresolved issues in the field of IA and point towards important areas for further research.
Unfortunately, as the attendees will certainly attest, the papers cannot fully capture the breadth and depth of the workshop discussions, nor the enthusiasm of the participants.
About this Book
Wilbanks sets the stage by indicating why scale matters in pursuing integrated assessments (IAs). Driving forces of environmental change come from, and interact across, different scales. He also raises the issue of agency and structure, i.e. the ability of individuals and groups to take action, but always under constraints. He further discusses operational issues of incorporating micro- and macro-scale information and perspectives in Integrated Assessment Models (IAMs). He lays out several key challenges related to data availability, upscaling, downscaling, integration, and cross-scale dynamics, all of which come back in later chapters. Stehr and von Storch point out the different approaches the social and physical sciences have taken to addressing scale issues. Within the social sciences, debates have focused on macro-micro and agent-structure issues. The physical sciences provide a clearer hierarchical structure in the form of a cascade of spatial
(and temporal) scales. The key element they address, though, is the often missing link between the analytical and practical capacity of knowledge: “The distinction between analytical and practical is particularly relevant to actors who have to deal with and convert scientific knowledge claims into practical action.” Thus, choices of scale not only affect what can or will be analyzed but also what can or will be done. Dowlatabadi offers us a personal journey. Most significantly, he draws out the importance of scale with respect to human cognitive processes (perception and awareness) and human social organization and the associated ability to act. As for Stehr and von Storch, these are seen to differ fundamentally from the usual physical dimensions. He draws upon insights into the scale of participation required versus that actually seen for climate policies – mitigation, adaptation, and geo-engineering. He also talks about meeting energy needs. Evans, et al. go further in exploring the role of scale in various social sciences. Unfortunately, but perhaps to be expected, they do not find a grand solution or even a consensus on approaches or definitions. They do point to common areas of concern, however. They also nicely summarize what could be the holy grail of IAMs – “spatially explicit models that elegantly handle dynamic relationships and human decision making”. Jaeger and Tol point to the need for economic analyses to more deeply address laws and patterns that govern economic processes at different spatial, temporal, and institutional scales. This goes beyond what has been done before in the areas of micro- vs. macro- and short-run vs. long-run analyses. They point, for example, to the key role of increasing returns to scale. Van der Veen and Otter also focus on the issue of scale in economics, but approach it from a different angle.
Focusing more specifically on regional economics, they emphasize the difficulties in understanding spatial resolution and human behavior in a uniform construct. Like Jaeger and Tol, they also note the somewhat arbitrary divisions between micro-, meso-, and macroeconomics. Downing, et al. take us into the practical issues of scale in terms of upscaling and downscaling in studies of the impacts of climate change and variability on agriculture in Europe. They show us that there is still much to learn in this area and how current practices can introduce additional uncertainties; these difficulties may only grow as agents are added to such studies. Schneider and Root propose a more general strategy for bridging gaps related to scale, in particular geographical scale. In their approach, Strategic Cyclical Scaling, “large-scale associations are used to focus small-scale investigation in order to develop valid causal mechanisms generating the large-scale relationships.” This is somewhat different from traditional upscaling or downscaling, which attempt to bring either the higher or lower scale directly into the model. Most significant is the requirement of “the development and fostering of interdisciplinary teams, and eventually, interdisciplinary communities, capable of unbiased peer reviewing of cross-scale, cross-disciplinary analyses in which
the bulk of the originality is in the integrative aspects, rather than advances in the sub-disciplines that are coupled.” Schellnhuber, et al. introduce us to the notion of Hazardous Functional Patterns (HFPs) generating non-sustainable trajectories, or Syndromes, of the human-nature system. They propose the use of Qualitative Differential Equations (QDEs) to analyze these. In relation to scale, these can help to bridge variability at the local scale and changes at the global or regional scale by identifying common patterns of behavior (or potential behavior) at a more intermediate, or functional, scale. They also look at the issue of non-local interactions. Pahl-Wostl brings us more directly to the question of scales other than the traditional ones of time and space. Focusing on the importance of individuals and organizations, she highlights the need to pay attention to levels of social organization and points toward agent-based and participatory methodologies. Easterling and Kok take up the challenge of scale in the context of the theory of hierarchical systems. They start from the premise that the systems of interest for Integrated Assessment are inherently nested hierarchical systems, which are “too complex for analytical solution and too structured and organized for pure statistical treatment.” In doing so, they emphasize that this calls for going beyond the preoccupation with bottom-up aggregation and top-down disaggregation. Many relevant properties of a system are emergent, i.e. they are difficult if not impossible to construct or predict from the constituent parts. Similarly, behavior in system components may be constrained by processes operating at a higher scale, in ways that might not be apparent from top-down disaggregation. Thus, it is important for IAMs to try to embed hierarchical structures explicitly. Giampietro also draws from hierarchy theory in his chapter. He emphasizes the importance of perspectives, i.e. how we interact with the system.
Specifically, different perspectives, which include different choices of scale, reflect different reasons for analyzing, and can provide equally valid but non-equivalent descriptions of the same system. In many cases of IA, it will be necessary to adopt more than a single perspective to reflect both the general complexity of the issue and the different perspectives of different stakeholders. Finally, Rotmans rounds off this volume with a general reflection on the issue of scale in Integrated Assessment. He provides an overview of the challenges, both theoretical and practical, that scale issues pose for the field. He also provides recommendations for moving forward, even as a wide range of practitioners make initial tentative steps into what is in many cases unknown territory.
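Several of the chapters summarized above (Downing et al. on up- and downscaling in particular) turn on a simple mathematical point: applying a non-linear relationship to an averaged input is not the same as averaging the relationship's local outputs. The sketch below illustrates this with a hypothetical concave yield-precipitation response; the function and all numbers are invented for illustration and are not taken from any chapter.

```python
# Toy illustration (invented numbers) of aggregation error when a
# non-linear response, e.g. crop yield as a function of precipitation,
# is evaluated at the regional mean instead of averaged over local values.

def yield_response(precip_mm: float) -> float:
    """Hypothetical concave yield response (t/ha): diminishing returns,
    with damage beyond an optimum. Purely illustrative."""
    u = precip_mm / 600.0
    return max(0.0, 8.0 * u - 4.0 * u ** 2)

# Precipitation in four local districts of one "region" (mm/season).
local_precip = [300.0, 450.0, 600.0, 900.0]

# Up-scaled (biased) order of operations: average the driver, then apply the response.
mean_precip = sum(local_precip) / len(local_precip)
yield_at_mean = yield_response(mean_precip)

# Correct order: apply the response locally, then average the outcomes.
mean_of_yields = sum(yield_response(p) for p in local_precip) / len(local_precip)

print(f"yield at mean precipitation: {yield_at_mean:.2f}")   # 3.98
print(f"mean of local yields       : {mean_of_yields:.2f}")  # 3.44
```

Because the response is concave, evaluating it at the regional mean precipitation overstates the average of the local yields (Jensen's inequality); with a convex response the bias would run the other way.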
2 Geographic Scaling Issues in Integrated Assessments of Climate Change
THOMAS J. WILBANKS
Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
If top-down, large-scale integrated assessments of global climate change issues were all we need – sufficient to answer most of the important intellectual and practical questions about climate change impacts and responses – then scaling would not be an important enough topic to justify organizing this workshop. We are learning, however, that answering such questions often requires attention to local scales as well as global, despite very serious operational complications in figuring out how to integrate local-scale analysis into comprehensive integrated assessment modeling. Recognizing that we are in some respects nosing into territory that, if not entirely new, is still early in a serious exploration process, this paper first reviews how geographic scale matters in integrated assessments of global climate change issues. Next, it looks at operational issues in incorporating a variety of scales and cross-scale dynamics in integrated assessment modeling (overlapping several of the other papers prepared for the workshop). It concludes with some suggestions for research to improve our capabilities in dealing with macro-microscale interactions in global change processes. The intent here is not to dig into a few particular scaling issues in depth but to sketch the landscape of scale-related issues as a contribution to the general workshop discussion, reporting a not entirely integrated assemblage of recent experience that may have some bearing on scaling issues in integrated assessment.
How Scale Matters Our understanding about how scale matters is grounded in a number of basic concepts; it is increasingly informed by ongoing integrated assessment activities; and it can be illustrated by several of these activities.
Basic concepts
Understanding relationships between macroscale and microscale processes and phenomena is one of the “grand queries” of science [1], and this great intellectual challenge extends beyond geographic scale alone. Clearly, temporal scale raises equally important issues – i.e., between the short term and the long term – and geographic scale and temporal scale are often related in processes of interest; and organizational scale can also be significant in ways not entirely captured by spatial or temporal scale [2, 3]. Considering geographic or spatial scale in this paper, our thinking is generally shaped by several basic concepts that are not always recognized explicitly. For example, we tend to take the following notions as underlying premises:
■ When arrayed along a scale continuum from very small to very large, most processes of interest establish a number of dominant frequencies; they show a kind of lumpiness, organizing themselves more characteristically at some scales than others (see, for instance, Klemes [4] and Holling [5]).
■ Recognizing this lumpiness, we can concentrate on the scales that are related to particular levels of system activity – e.g., family, neighborhood, city, region, and country – and at any particular level subdivide space into a mosaic of “regions” in order to simplify the search for understanding. In many (perhaps most) cases, smaller-scale mosaics are nested within larger-scale mosaics; therefore we can often think in terms of spatial hierarchies [5].
■ As we look across mosaics at different levels of scale and spatial detail, the importance of cross-border linkages increases as the scale shrinks. This generalization clearly applies to external linkages at the particular scale of interest (e.g., multipliers in regional economics). It is not so clear that the generalization applies to the importance of cross-scale linkages: more important at small scales than large? Perhaps not: see below.
■ Place is more than an intellectual and social construct; it is a real context for communication, exchange, and decision-making. More than a decade of research by “post-modernist” scholars has established that place has meaning for local empowerment, directly related to equity, and indeed for personal happiness in the face of space-time compression (e.g., Harvey [6], Smith [7], NAS [8]). Scale is not just an operational abstraction. It has meaning for people and processes, related to forms of social organization.
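The nested "mosaics" in the premises above can be pictured as a simple tree: indicator values observed at the smallest units aggregate upward through the spatial hierarchy. The following minimal sketch is purely illustrative; the `Region` class and all names and values are invented here, not part of the chapter.

```python
# A minimal, purely illustrative sketch of nested spatial "mosaics":
# smaller-scale units sit inside larger-scale ones, and an indicator
# observed at the finest units can be aggregated up the hierarchy.
from dataclasses import dataclass, field

@dataclass
class Region:
    name: str
    value: float = 0.0                      # indicator observed at leaf units
    parts: list["Region"] = field(default_factory=list)

    def total(self) -> float:
        """Cumulative (bottom-up) aggregation over nested sub-units."""
        if not self.parts:
            return self.value
        return sum(part.total() for part in self.parts)

country = Region("country", parts=[
    Region("region A", parts=[Region("city A1", 3.0), Region("city A2", 1.5)]),
    Region("region B", parts=[Region("city B1", 2.5)]),
])

print(country.total())  # 7.0
```

The `total` method captures only the simple cumulative case; many properties of interest are emergent and cannot be recovered by such bottom-up summation.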
It is tempting, of course, to speculate about how many generalizations about macro-microscale relationships pertaining to geographic scale might apply to other kinds of scale as well. Consider, for instance, the four concepts above as they might apply to functional scale. Based partly on such concepts, it has been suggested that geographic scale matters in seeking an integrated understanding of global change processes and that understanding linkages between scales is an important part of the search for knowledge [9, 10]. Several of the reasons have to do with how the world
works. First of all, the forces that drive environmental systems arise from different domains of nature and society. For example, Clark has shown that distinctive systems imbedded in global change processes operate at different geographic and temporal scales [11]. Within this universe of different domains, local and regional domains relate to global ones in two general ways: systemic and cumulative [12]. Systemic changes involve fundamental changes in the functioning of a global system, such as effects of emissions of ozone-depleting gases on the stratosphere, which may be triggered by local actions (and certainly may affect them) but which transcend simple additive relationships at a global scale. Cumulative changes result from an accumulation of localized changes, such as groundwater depletion or species extinction; the resulting systemic changes are not global, although their effects may have global significance. A second reason that scale can matter is that the scale of agency – the direct causation of actions – is often intrinsically localized, while at the same time such agency takes place in the context of structure: a set of institutions and other regularized, often formal relationships whose scale is regional, national, or global. Land use decisions are a familiar example. This kind of local-global linkage is especially important where environmental impact mitigation and adaptation actions are concerned, analogous to hazards behavior. A third reason that scale can matter is that the driving forces behind environmental change involve interactions of processes at different locations and areal extents and different time scales, with varying effects related to geographical and temporal proximity and structure. Looking only at a local scale can miss some of these interactions, as can looking only at a global scale. 
For instance, geographers have shown that processes of change involve patterns of spatial diffusion that can be generalized, and ecological modelers such as Holling have found that managed biomes are characterized by landscapes with lumpy geometries and lumpy temporal frequencies related to the size and speed of process interactions, shaped by the fact that processes operating at different scales tend to show faster or slower dynamics [13]. Several additional reasons have to do with how we learn about the world. One of the strongest is the argument that the relations among the environmental, economic, and social processes that underlie environmental systems are too complex to unravel at any scale beyond the local. A second reason is that a portfolio of observations at a detailed scale is almost certain to contain more variance than observations at a very general scale, and the greater variety of observed processes and relationships at a more local scale can be an opportunity for greater learning about the substantive questions being asked (e.g., Fig. 2.1). In other words, variance often contains information rather than “noise.” A third reason is that research experience in a variety of fields tells us that researchers looking at a particular issue top-down can come to dramatically different conclusions from researchers looking at that very same issue bottom-up. The scale embodied in the perspective can frame the investigation and shape the results, which suggests that full learning requires attention at a variety
Figure 2.1: Scale-dependent distribution of impacts of climate change (adapted from Environment Canada [20])
of scales. As one example, Openshaw and Taylor [14] have demonstrated that simply changing the scale at which data are gathered can change the correlation between variables virtually from +1 to −1. These reasons, of course, do not mean that global-local linkages are salient for every question being asked about global change. What they suggest is more modest: that examinations of such changes should normally take time to consider linkages between different scales, geographical and temporal, and whether or not those linkages might be important to the questions at hand.
Findings to date
Quite a number of recent assessments and studies have offered learning experiences about how geographic scale matters in trying to understand global change and its impacts. Examples from a U.S. perspective include the “Global Change in Local Places” (GCLP) project funded by NASA through the Association of American Geographers, 1996–2000 [1, 9]; the first U.S. National Assessment of Possible Consequences of Climate Variability and Change (NACC), 1997–2000 [15]; the recent U.S. National Academy of Sciences/National Research Council report on pathways for a “sustainability
transition” [16]; and a variety of other activities, including the ongoing work of the Global Environmental Assessment Project (GEA) at Harvard University (e.g., Clark and Dickson [17]), the Long-Term Ecological Research (LTER) network sponsored by the National Science Foundation in the U.S. (e.g., Redman et al. [18]), and the Land-Use/Land Cover Change project jointly sponsored by the IGBP and the International Human Dimensions Programme. Learning from these and other recent research experiences – often related to and drawn from a variety of disciplinary literatures – one can offer some tentative findings about how geographic scale matters, stated as propositions as a basis for discussion.
Even (especially?) in an era of globalization, attention to the local end of the spectrum is critically important.
■ Integrative research on complex sustainability issues is best carried out in a place-based context. According to recent reviews of the development of earth system science and global change research in the U.S. [19], the most fundamental change in the past decade has been a recognition that integrative research must be down-scaled and place-focused. This conclusion is reported as an empirically-based finding by both the NAS/NRC sustainability transition report and the U.S. national climate change assessment.
■ Many important global change issues are inherently regional/local rather than global or national in scale. The most salient example is vulnerabilities to impacts of global or national-scale processes. Clearly, the interest in bottom-up perspectives, or at least in down-scaling top-down perspectives, has grown as the emphasis in global change research has shifted from better understanding atmospheric dynamics toward better understanding impacts of climate change. Figure 2.1, for example, summarizes a key finding from the Canadian climate change assessment [20], that variations in net benefits from climate change appear much more clearly at more detailed scales.
■ Local-scale attention is essential for implementing sustainability actions. It bounds the realistic and the possible in sustainability actions, identifies a wider range of opportunities for action, and assists in establishing effective larger-scale structures [1]. In other words, it helps to make sustainability more achievable. In fact, GCLP has noted a number of undesirable unintended local consequences in the U.S. of one-size-fits-all policy actions at a national scale.
■ Local-scale investigation facilitates assessment as a social process. It encourages and facilitates exchanges of information and understanding between investigators and stakeholders, not just disembodied organizational representatives of stakeholders, which connects the issues with local empowerment, constituency-building, and other aspects of democratic decision-making at a variety of scales.
Sustainability science needs to be sensitive to multiple scales rather than focused on a single scale.
■ Selection of a single scale can frame an investigation too narrowly. Whether the scale is global or local, a single scale of attention tends to focus on issues, processes, data, and theories associated with that scale, when a full, integrated understanding calls for attention to perspectives associated with other scales as well [21] (also see Gallagher and Appenzeller [22]). Moreover, research in a wide variety of fields has shown that the results of analysis can be scale-dependent (e.g., Rosswell et al. [23] and Joao [24]) and that, indeed, the concept of “equilibrium” is inherently scale-dependent in complex systems [25]. Schneider has suggested that different scales may be amenable to different research questions related to a common line of inquiry: e.g., larger scales to seek larger associations, smaller scales to ask “why” (see chapter 9).
■ Phenomena, processes, structures, technologies, and stresses operate at different scales. This means that observations of processes at larger scales may not reveal causal mechanisms needed either to forecast system behavior reliably or to determine appropriate actions (e.g., Jarvis [26]). Conversely, observations at smaller scales may not reveal processes responsible for larger-scale patterns – nor the possibility of “emergent properties” (see chapter 12). It seems especially likely that scale is related to uncertainty and surprises, a central issue in considering climate change. A familiar case in integrated assessment is waste emission and disposal, which often involves processes at multiple scales: from local point-source emission streams to regional emission plumes to national regulatory structures. Moreover, the scale of such factors may be subject to change through time, as in the case of the scale of agricultural production in the U.S. Phillips [27] suggests that for any divergent landscape in earth surface systems, there are at least three scale ranges where fundamental system behavior differs.
■ A particular scale may be more or less important at different points on a single cause-consequence continuum. Figure 2.2 illustrates such a continuum schematically, suggesting that for global climate change processes most emissions and many responses are relatively local, while radiative forcing is clearly global in scale.
Figure 2.2: Scale domains of climate change and consequences (Source: Kates et al. [1]).
■ No single scale is ideal for broad-based investigation. The GCLP project found that arbitrary use of a one-degree scale has no intrinsic value (see below), and the U.S. national climate change assessment found that there was no ideal scale for investigating regional impact issues (e.g., more detailed scales were better for stakeholder interaction but demanding in terms of funding, local expertise, and management requirements). In nearly every case, valid arguments can be made for either larger or smaller scales, or for boundary modifications to include or exclude activities of interest that have particular weight and might therefore have a significant impact on general findings. A particular problem with using a latitude/longitude-oriented scale for local studies – whether one degree, half a degree, or some other grid size – is that the scale is unlikely to approximate the scale and boundaries of any significant decision-making unit, although “gridded” approaches are common in ecology and certain other fields. As a general rule, the GCLP experience indicates that, if the intent of a study is to inform decision-making, there is merit in relating the scale of the study to the scale of decision-making units appropriate to the issues of greatest interest (also see Cash and Moser [28]).
Improving the understanding of scale dimensions of sustainability calls for certain kinds of research strategies.
■
■
Monitoring and data-gathering are needed at multiple scales, including careful attention to appropriate indicators. NACC, the sustainability transition study, GCLP, LTER, and other recent studies have concluded that our existing monitoring systems are inadequate for understanding multiple stresses at multiple scales. Building an effective knowledge base for comprehensive integrated assessment modeling requires fully-integrated observational systems, monitoring multiple variables at multiple scales. In the meantime, the sustainability transition study found no consensus on the appropriateness of existing indicators as a basis for such monitoring approaches [16]. “Protocols” for local-scale studies would improve prospects for aggre-gating their results. One of the most common reservations about bottom-up approaches to local-scale studies is that they usually take the form of case studies that can be exceedingly difficult to aggregate. GCLP suggests that the prospects that local area studies of global change, conducted by different people at different sites, can produce comparable results would be improved by encouraging individual studies to ask similar questions, generate data in similar categories based on similar techniques for measurement or estimation, and make data available in similar formats. Guidelines for such a shared approach can be termed a “protocol.” Unfortunately, at least in the U.S., existing protocols created for analyses at a regional or national scale, such as the U.S. Environmental Protection Agency’s state workbook, are not readily transferable to a local scale. For instance, they often call for data not available at the scales of smaller area units. What is needed, GCLP indicates, is a “process protocol” which describes a process for conducting local area studies that can be followed by study teams with varying resources and other constraints. 
■ Using local experts as “gate-keepers” helps in eliciting local knowledge and communicating with local stakeholders. A relatively robust finding in studies of environmental assessment experiences worldwide is that the results of assessments are much more likely to be put to use in local areas if they are channeled through local experts (i.e., the right-hand curve in Fig. 2.3). GCLP found the same thing to be true in the opposite direction as well. Local experts are uniquely suited to assist in accessing local knowledge, because they are repositories of so much of that knowledge and because their local contact networks – often strengthened by the presence of former students in local institutions – usually embrace the most important of the local information infrastructures.

SCALING IN INTEGRATED ASSESSMENT 13

Figure 2.3: Trajectories from global-scale expert analysis to local action.

■ Effective approaches are needed for integrating top-down and bottom-up perspectives. GCLP, NACC, and other studies indicate that tools for integrating perspectives across spatial scales are still limited, although this is an area of research which is showing considerable creative activity (see “Operational Issues” below). From a modeling point of view, of course, a central issue is handling such integration in scientifically-valid ways that permit replication, along with evaluations of conditionalities and uncertainties.
Elements of a coherent story line about how scale matters If “all science is storytelling,” as we hear from our social theorist colleagues, we should try to turn these individual findings into a coherent story of how the macroscale and the microscale are connected in global change processes. Unsurprisingly, in trying to cover a broad continuum of geographic scales such a story is necessarily immersed in “on the one hand…; on the other hand…” perspectives. For example, we are coming to understand that on the one hand sustainability can only be operationalized for particular places, but on the other hand every place is affected by others. We know that many key actions are local, but most key actions are shaped by broader structures. We know that many of the strongest driving forces are translocal, but we also know that (a) many of the impacts are relatively local and that (b) in a democratic society many of the responses are shaped by a cumulation of local concerns.
14 GEOGRAPHICAL SCALING ISSUES IN INTEGRATED ASSESSMENTS OF CLIMATE CHANGE
Figure 2.4: Macroscale/microscale interactions in global change (Source: Kates et al. [1]).
The beginning of a coherent story, capturing these kinds of complications, is depicted schematically in Figure 2.4. Shaped themselves by external driving forces, local actions have systemic or cumulative impacts on processes that operate at global, national, or large-regional scale. If those impacts are judged to be undesirable or risky, there may be institutional responses at those larger scales, leading to structures designed to assure sustainability. That process, in turn, is shaped – at least in democratic societies – by support and/or opposition at local scales. The structures then provide enablement, constraints, and/or incentives to stimulate adaptive behavior at a local scale, leading to changes in local processes and actions; and the cycle continues. This picture is only offered as a basis for discussion, but it is evocative enough to suggest certain implications. For example, it suggests that actions aimed at driving forces need a larger-scale context, while actions aimed at impact reduction/adaptation need a smaller-scale context. It suggests that sustainability is grounded in linkages between different scales of concern. Taking this logic one more step, one might suggest that an over-emphasis on top-down forces can threaten sustainability by provoking backlash from disenfranchised local stakeholders, by being insensitive to local context, and by failing to empower local creativity. At the same time, an over-emphasis on bottom-up forces can also threaten sustainability by missing the importance of larger-scale driving forces, by being insensitive to larger-scale issues (temporal as well as spatial), and by being uninformed about linkages between places and scales. This
indicates a need for balance and harmony in a multiscale, interrelated system for assessment and action, when in so many cases the philosophies, processes, structures, and knowledge bases needed to assure such a balance are lacking.

An illustrative example: I
One example of an effort to explore such interactions is the Global Change in Local Places research project of the Association of American Geographers, funded by what was at the outset NASA’s Mission to Planet Earth program. This project was focused on the challenge of linking scales in understanding global change. Conceived and designed in 1994 and 1995, it was concerned with three aspects of global change research at that time (Fig. 2.5):

■ Changes in human activities that alter GHG emissions and uptakes and surface albedo
■ Driving forces for these changes
■ Capacities of localities to mitigate and adapt to changes
Figure 2.5: The GCLP concept (Source: Wilbanks and Kates [9]).
If the project had been designed a few years later, of course, it would have included a fourth aspect as well: local impacts of global change. At the time,
however, the capacity to forecast climate change impacts at a regional scale was still quite limited, roughly five degrees latitude-longitude; and a climate change impact dimension of a much more localized study appeared infeasible. Initially, GCLP included three local study areas defined at a scale of approximately one degree (equatorial) latitude-longitude: the Blue Ridge – Piedmont area of Western North Carolina; a portion of the Central Great Plains in Southwestern Kansas, underlain by the Ogallala aquifer; and a portion of the traditional U.S. manufacturing belt in Northwestern Ohio. A fourth study area was added later – a six-county area in the vicinity of Pennsylvania State University in Central Pennsylvania – taking advantage of a strong overlap between the aims and approaches of GCLP and research activities already underway in Penn State’s Center for Integrated Regional Assessment. As GCLP proceeded, it was linked with a number of other activities also concerned with macroscale-microscale interactions in global change processes, such as NACC, the NAS/NRC sustainability transition study, GEA, and the evolution of the LTER concept in the U.S. In particular, it joined forces with the Cities for Climate Protection (CCP) program of the International Council for Local Environmental Initiatives (ICLEI), which had developed an Internet-based approach for assessing potentials for GHG emission reductions by cities and metropolitan areas that is now in use in more than 300 cities worldwide [29]. Findings from GCLP are still emerging, such as its analysis of potentials for the local study areas to meet hypothetical emission reduction targets in 2020; but some of the tentative findings may be of interest. Simply stated, the project found that local knowledge is important, albeit not for everything.
The familiar slogan “Think globally and act locally” is inadequate because global or even national knowledge averages together too many distinctive local trajectories of action and change, missing potential response opportunities and making local actions more difficult. Local knowledge, however, is also inadequate, since for the most part the locus of decisions related to climate change responses is not locally-based. In general, GCLP found that local greenhouse gas (GHG) emissions are not greatly different from national patterns; the importance of a local scale of attention lies not in the big picture of emissions but in the details, and these details are especially important both in understanding trends through time and in identifying opportunities for local action. In the four GCLP sites, GHG emission details mainly reflected five factors: the location and fuel use of electricity generation, the degree to which the local economy has a natural resource orientation, the dynamics of local economic development, changes in technology through time, and growth rates in the number of households. Driving these factors are such underlying processes as consumer demand, regulation, energy supply and price, economic organization, and social organization. Within these contexts, the potential for local action to reduce GHG emissions is considerable, if: there is a conviction based on the local context
that such action is a good idea, there is some local control over significant emission decisions, and the locality has access to technological and institutional means to make a difference. On the other hand, the GCLP local area studies found that the current institutional framework in the U.S. does not motivate and facilitate local action, and the portfolio of technology opportunities is often a poor fit with local emission abatement potentials. An illustrative example: II A very different kind of example is a current research project at the Oak Ridge National Laboratory (ORNL), supported by internal discretionary research funds. Initiated in October 1999, this three-year project – labeled “Improving the Science Base for Evaluating Energy and Environmental Alternatives” – is intended to improve the tools available for comparing benefits and costs of global climate change impact avoidance (i.e., GHG emission abatement) with benefits and costs of global climate change impact adaptation. Essentially, this project includes three main components: (1) developing and characterizing a taxonomy of adaptation pathways as a basis for comparison with available characterizations of mitigation pathways (most notably US DOE [30]); (2) improving the science base for pathway analysis, emphasizing macroscale/microscale integration and portfolio optimization (rather than optimization in terms of individual pathways based on a conventional supply curve); and (3) tool development for comparative analysis (Fig. 2.6 is a preliminary indication of the general structure of the tool). Even though the project is still in its early stages, scaling issues have already emerged as central to the activity. For instance (as a broad generalization), the benefits of GHG emission abatement are spread globally through their contributions to reducing the rate of increase in carbon concentrations in the earth’s atmosphere. 
Benefits of most adaptation pathways, on the other hand, tend to be associated with regionally or locally specific impact vulnerabilities, and therefore to accrue at a relatively local scale. This suggests that the results of a comparison of avoidance and adaptation pathways will depend on the scale of the analysis: a macroscale analysis favoring avoidance actions and a microscale analysis favoring adaptation actions. To the degree that climate change policy depends on intra-country political processes and thus net benefits at a regional or local scale, this may hint that adaptation will be favored by some of the key national players in global change policymaking in years ahead. Such a possibility, of course, is one reason for seeking an analytical approach which will produce an optimal course of action that is portfolio-oriented, including a combination of both adaptation and avoidance pathways. Incidentally, it also appears that the temporal scale of avoidance benefits and costs is longer, perhaps considerably longer, than the temporal scale of adaptation benefits and costs. Here again, there is a need to integrate both macroscale and microscale perspectives.
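The scale dependence just described can be made concrete with a toy accounting exercise. Every number below is invented for illustration, as is the stylized split of benefits between scales: mitigation benefits are spread evenly across all regions, while adaptation benefits stay with the spending region.

```python
# Toy illustration (all numbers hypothetical) of why a mitigation-vs-adaptation
# comparison can flip with the scale of accounting. One region spends 10 units
# either on mitigation (benefits spread over all regions) or on adaptation
# (benefits stay local).
N_REGIONS = 100
COST = 10.0

MITIGATION_TOTAL_BENEFIT = 300.0   # global total, spread evenly across regions
ADAPTATION_LOCAL_BENEFIT = 25.0    # accrues only to the spending region

def net_benefit(action: str, scale: str) -> float:
    """Net benefit of one region's spending, accounted at 'local' or 'global' scale."""
    if action == "mitigation":
        benefit = (MITIGATION_TOTAL_BENEFIT / N_REGIONS if scale == "local"
                   else MITIGATION_TOTAL_BENEFIT)
    else:  # adaptation: local benefit is also the global total
        benefit = ADAPTATION_LOCAL_BENEFIT
    return benefit - COST

for scale in ("local", "global"):
    best = max(("mitigation", "adaptation"), key=lambda a: net_benefit(a, scale))
    print(scale, best)  # local accounting picks adaptation; global picks mitigation
```

Under these assumed numbers the region's own ledger favors adaptation (15 vs. -7), while the global ledger favors mitigation (290 vs. 15), which is exactly the kind of scale-contingent ranking that motivates a portfolio-oriented analysis.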
Figure 2.6: Oak Ridge Climate Impact Response (CLIR) Model (Source: Wilbanks et al. [31]).
Operational Issues As an operational question, determining how to incorporate macroscale and microscale information and perspectives into integrated assessment models depends on the conceptual approach that is adopted, but it (ideally) involves two fundamental dimensions: incorporating information from multiple scales and incorporating information about interactions between scales. The principal alternatives Although the conceptual approaches that can be considered are probably as numerous as the investigators using them, at a very general level they can be categorized in one of two ways: (a) convergence at a single “meso” or regional scale or (b) seeking a multi-scale or meta-scale synthesis of insights from a number of scales. A third alternative, of course, is to continue to model at a global scale but to make an effort to aggregate data and process understandings from smaller scales [32]. Perhaps convergence approaches imply a perspective that process representations can be considered seamless across scales, while multi-scale or meta-scale perspectives imply a rejection of that point of view [33].
Convergence at a single “meso” or regional scale The most common approach is to integrate scale-related information at an intermediate scale as a way to provide a transition among various scales, either by converting data to a common geographic metric, by solving separately at different scales and then iterating to convergence, or by relying on empirical information about the scale of particular regional processes of interest: Conversion to a common metric One common approach, possibly the most often used analytical strategy at present in climate change impact/response studies, is to down-scale information about global processes (such as global climate change) and up-scale information about local processes (such as agricultural production) to meet at an intermediate scale [34]. The current frontier appears to be a scale of one-half degree latitude-longitude, or a cell of about 50 km on a side, with the principal driving factor being limits on the down-scaling of climate change forecasts. Generally, the strategy is either to focus on the smallest scale that is feasible with available data sets or to determine the appropriate scale based on statistical analyses. In trying to define the most appropriate scale, one approach has been to try to find the scale at which data related to a particular question show maximum inter-zonal variability and minimum intra-zonal variability. Another identifies the scale that minimizes statistical error between observed and modeled phenomena [35]. Still another seeks to balance the increased information from finer spatial resolution against the increased difficulty of gathering the information and modeling the processes [36]. Examples of such work include a variety of efforts by Linda Mearns of the U.S. National Center for Atmospheric Research (NCAR) and others to explore effects of climate variability and change on agriculture in the U.S., especially in the Southeast (e.g., Mearns et al. [37]).
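The variance-based criterion mentioned above (maximum inter-zonal variability, minimum intra-zonal variability) can be sketched on entirely synthetic data. The transect, its 40-point “true” scale, and the simple scoring ratio are all assumptions for illustration, not a method from the studies cited.

```python
import numpy as np

# Sketch of a variance-based scale-selection criterion (synthetic data): among
# candidate zone widths, prefer the zoning that maximizes variability between
# zones while minimizing variability within them, summarized here as a simple
# between/within variance ratio.

# A 480-sample transect whose underlying signal changes every 40 points, plus
# a small deterministic "local noise" term.
zone_means = np.array([0.0, 8.0, 2.0, 10.0, 1.0, 9.0,
                       0.0, 8.0, 2.0, 10.0, 1.0, 9.0])
values = np.repeat(zone_means, 40) + np.sin(np.arange(480))

def scale_score(values: np.ndarray, width: int) -> float:
    """Between-zone variance divided by mean within-zone variance."""
    zones = values.reshape(-1, width)
    between = zones.mean(axis=1).var()
    within = zones.var(axis=1).mean()
    return between / within

for width in (10, 20, 40, 80, 160):
    print(f"zone width {width:3d}: score = {scale_score(values, width):7.2f}")
# Scores stay high up to the signal's 40-point scale and collapse once zones
# begin to straddle genuinely different areas.
```

Real applications replace this toy ratio with formal analysis-of-variance or error-minimization criteria of the kind cited in the text.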
Among the challenges noted from the Mearns experience are mismatches in the scale resolution of different data sources [38]. Iterating to convergence An alternative is to use different analytical models to derive solutions at different scales and then to iterate back and forth until the results converge. This approach has been widely practiced, at least informally and qualitatively, for much of the past quarter-century. A recent example is incorporated in IIASA’s integrated assessment model for examining energy–economy–environment interactions [39]. One component of this modeling structure couples a top-down macroeconomic model, 11R, modified from the Global 2100 model developed by Manne and Richels, with a bottom-up dynamic LP model, MESSAGE III, that selects cost-minimizing technology combinations. The model-linking approach, as described by Wene [40], is based on iterative adjustments of aspects of the two different models until harmonization of their results is achieved.
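The iterate-to-convergence scheme can be sketched with two deliberately simple stand-in models. The linear forms and coefficients below are invented for illustration; real linked models such as 11R and MESSAGE III are of course far richer.

```python
# A minimal sketch (hypothetical functional forms) of iterating two models to
# convergence: a "top-down" model maps aggregate demand to an energy price, a
# "bottom-up" model maps price to technology-driven demand, and the two are
# cycled until their results harmonize.

def top_down_price(demand: float) -> float:
    # Hypothetical macro relation: price rises with aggregate demand.
    return 20.0 + 0.5 * demand

def bottom_up_demand(price: float) -> float:
    # Hypothetical technology-choice relation: demand falls as price rises.
    return 100.0 - 0.8 * price

def iterate_to_convergence(demand0: float = 50.0, tol: float = 1e-9) -> tuple[float, float]:
    """Cycle the two models until successive demand estimates agree."""
    demand = demand0
    for _ in range(1000):
        price = top_down_price(demand)
        new_demand = bottom_up_demand(price)
        if abs(new_demand - demand) < tol:
            return price, new_demand
        demand = new_demand
    raise RuntimeError("did not converge")

price, demand = iterate_to_convergence()
print(price, demand)  # settles at the joint fixed point of the two models
```

Because the composed mapping here is a contraction, the cycle settles quickly; linked models in practice need care (damping, harmonized accounting conventions) to behave this well.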
Empirical evidence of the scale of regional processes Still another approach is to select an intermediate or mesoscale for integrative analysis on the basis of case-by-case empirical evidence and qualitative understandings of the process involved, rather than based on formal modeling conventions or statistical analyses per se (e.g., Hirschboeck [41]). Although this approach can be difficult to capture in formal modeling logic and difficult to replicate precisely, it is both intuitively and intellectually attractive and also relatively easy to explain to external audiences. Many of the most thoughtful and evocative examples of the art of regional integrated assessment in recent years can be included in this category, including the Kasperson, Kasperson, and Turner book on Regions at Risk [42] and the focus of the German Advisory Council on Global Change [43] on characteristic “syndromes” that represent the greatest threats to sustainability [44], where the relevant scale is defined by looking in detail at the functions of processes and mechanisms (see chapter 10). Seeking a multi-scale or meta-scale synthesis Alternatively, one can try to get results from asking the same question at each of several scales and, rather than iterating to convergence on a single answer, preserve the different answers and seek a higher level of understanding that derives insights from all the answers: e.g., to what degree are the answers different across scales? One example of this approach is the Susquehanna River Valley study which has been carried out over a number of years by a multidisciplinary research team at Pennsylvania State University. This team has analyzed such issues as the net cost of a national carbon tax at four different scales, finding strikingly different answers depending on the scale of attention [45]. 
Another example is a recent study of prospects for adaptation to global climate change in Australian agriculture, including attention to both farm-level decision-making patterns and national scale trends and structures [46]. Yet another example is work in progress at Oxford University which mixes bottom-up and top-down approaches in constructing vulnerability indicators [47]. In essence, this research accepts the results of analysis at each of several scales as all being aspects of a larger truth and looks for broader understandings that embrace and aid in understanding the variety of single-scale answers, usually seeking these understandings through qualitative judgments by assessment experts and, in some cases, stakeholders. Two examples of methodological conceptions that illustrate this perspective are “strategic cyclical scaling” and “hierarchical patch dynamics” (strategic cyclical scaling [48] will be discussed in greater detail in chapter 9 in this volume). Very briefly, strategic cyclical scaling proposes a continuing cycling between upscaling and downscaling approaches, with each affecting the design of the other; and the early tests of this paradigm have been encouraging.
Hierarchical patch dynamics emerged from several decades of attention to pattern-process relationships in ecology, stimulated by Watt [49]. This approach proceeds from a conception of large-scale ecologies as nested hierarchies of patch mosaics, with overall ecosystem dynamics related to patch changes in time and space but moderated by metastability at larger scales, not necessarily destabilized by the transient dynamics often characterizing local phenomena [50]. Patch dynamics are normally modeled by analyzing pattern-process relationships at several (or all) levels in the hierarchy, then examining how the findings at the different levels relate to each other (e.g., a trend from less stability at more local scales to more stability at larger scales or a relationship between scale and the speed at which component subsystems operate). In some cases, dynamic simulation modeling is employed to explore such issues as the “incorporation” of instability among hierarchical levels. Comparing the two alternatives Obviously, neither approach is clearly preferable for every conceivable purpose. Incorporating multiple scales is intellectually satisfying and may in some cases pick up inter-scale differences and interactions missed by regional synthesis, and this approach seems more promising for systems in which some scenarios converge on steady state A while others converge on B or C (Schneider, personal communication). A focus on a single region, however, when that region has some intrinsic meaning in terms of the empirical scale of a key concern or the ability to make decisions and take actions, can enable a firm grounding in reality. The main determinants are likely to be utility, operational feasibility, and the purposes of the assessment. In several important respects, the two alternatives are in fact similar. Both require some upscaling of more localized data and some downscaling of data and/or forecasts from global and other very large scales.
Both are shaped by understandings derived from the general scientific literature (a kind of downscaling in which most upscaling is embedded: Root and Schneider [48]); and both are cognizant of relatively high-visibility findings from localized experience, including the investigators’ own life experiences. In addition, neither alternative necessarily addresses cross-scale dynamics, although neither excludes them. In terms of philosophical orientation, the practice of meta-scale synthesis appears to be more directly concerned with this dimension of integrated assessment. The principal challenges The challenges faced in operationalizing these approaches as ways to incorporate scaling into integrated assessment modeling range from conceptual to technical and data-based. As a very broad generalization, it can be suggested that the most fundamental challenges to regional synthesis are data-based, while the most fundamental challenges to meta-scale synthesis are conceptual.
Regional synthesis Obviously, operating at a single mesoscale requires some combination of upscaling, downscaling, and integration. One of the richest bodies of research experience in meeting this challenge is Geographic Information System (GIS) research (e.g., Quattrochi and Goodchild [51], NCGIA [52], Turner et al. [53], Turner [54] and, for relationships with simulation modeling: Wilson and Burrough [55]), although the preoccupation of GIS research with integrating spatial patterns is an imperfect fit with the needs of integrated assessment. For instance, the very substantial GIS problem of upscaling line patterns such as rivers and highways for display at a much more general scale is not the sort of thing that worries an integrated assessment modeler. On the other hand, the challenges associated with converting data for areal units into different spatial metrics are similar, and the abler GIS practitioners share a strong interest in process understanding in order to assure that the tool is both substantively valid and socially useful. Another substantial body of experience, of course, is landscape ecology (e.g., Rastetter et al. [56], Turner et al. [57]). Upscaling Many kinds of data pertinent to macroscale issues are gathered at specific points or in small areas, ranging from meteorological observations to crop production to soil samples. In addition, if future research support follows recent conclusions that integrated assessments of complex issues should be place-oriented, often implying a small-regional scale, then building larger-scale understandings from a growing portfolio of more or less localized case studies is an upscaling challenge that will be growing. Essentially, upscaling is an aggregation challenge, and a very serious technical challenge indeed [58, 59] (see also Curran et al. [60], Butterfield et al. [61]; Smith et al. [62]).
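One reason the aggregation challenge is technically serious can be previewed with a toy calculation; the concave crop-response function and the temperature values below are entirely hypothetical.

```python
# Toy numeric illustration (hypothetical response function and data) of one
# way aggregation can fail: with a nonlinear response, evaluating the response
# at the regional-average environment is a biased predictor of the
# regional-average response, since mean(f(x)) differs from f(mean(x)).

def yield_response(temperature: float) -> float:
    # Hypothetical concave response: yield peaks at 22 degrees C.
    return 10.0 - 0.2 * (temperature - 22.0) ** 2

local_temps = [16.0, 20.0, 24.0, 28.0]           # four local observations
mean_temp = sum(local_temps) / len(local_temps)  # the "average" environment

response_at_mean = yield_response(mean_temp)
mean_of_responses = sum(yield_response(t) for t in local_temps) / len(local_temps)

print(response_at_mean, mean_of_responses)  # 10.0 vs. a smaller true aggregate (about 6.0)
```

Here the response to the average environment (10.0) overstates the average of the local responses (about 6.0): collapsing local variability before applying a nonlinear process, rather than after, is exactly the shortcut that valid upscaling must avoid.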
In many cases, data cannot simply be aggregated to estimate larger-scale values, such as regional agricultural production or climate processes. For instance, the data may fail to meet standards for valid sampling, or they may fail to represent stochastic and geographic variability in representing how processes work. As one example, it has been shown that an estimated response to an “average” environment can be a biased predictor of a “true” aggregate response [63]. Or aggregate totals may lose information about variability that is instructive, or the value of the aggregate may be undermined by the fact that processes operate differently at different scales (i.e., “local” is not necessarily micro-global). The challenge is especially complicated when larger-scale characterizations are being constructed from incomplete local evidence: e.g., from a small number of at least somewhat idiosyncratic case studies (regardless of how sound they may be). One such problematic situation is an effort to aggregate estimates of the net economic cost of climate change impacts on small areas in order to arrive at a total global or continental net cost, which has been a
subject of discussion in producing the Third Assessment Report of IPCC Working Group II (Impacts, Adaptation, and Vulnerability) [64]. A number of technical alternatives for dealing with statistical problems in upscaling have been outlined by Harvey [58], including distributed point process modeling, parameterization of patch interactions, linking mechanistic models between scales, changing model resolution, and creating new models. Another approach is regional calibration: comparing aggregates of individual records with regional records. Rastetter et al. [56] identify four methods: partial transformations using a statistical expectations operator, moment expansions, partitioning based on spatial autocorrelation, and regional calibration (regarding the use of interpolation to fill gaps in upscaling, see chapter 5). Downscaling Downscaling is equally essential as an aspect of integrated assessment, because so many critical driving forces – e.g., global climate dynamics, global population growth, global economic restructuring, and global technology portfolios – operate at very large scales but shape local realities and choices. In this connection, it goes without saying that the integrated research community recognizes the limitations of top-down paradigms based on global or near-global scale modeling alone [43]. Modelers are moving toward more detailed geographic scales and topical richness, using both numerical (i.e., model-based) and empirical (i.e., data-based) approaches (for one review, see Bass and Brook [65]; for an example of the current state of the art, see Easterling et al. [66]). 
Challenges include limited data availability at detailed scales (at least without expensive new data-gathering), the increasing complexity of causal relationships as models become more like the real world, challenges of capturing contextual detail to approximate local reality more closely (e.g., incorporating terrain in climate change modeling), and in some cases computational capacity (although advances in computing have reduced this constraint considerably). An example of the breadth of the current downscaling research enterprise is the range of approaches being applied experimentally in Pennsylvania State University’s Susquehanna River basin study (Yarnal, personal communication). Four approaches have been used to date. In one approach, Jenkins and Barron [67] embedded a regional climate model in a global circulation model, showing significant improvements over precipitation projections from the GCM alone. A second approach linked a nested version of a mesoscale meteorological model to a hydrological model system, which simulates basin hydrology at a relatively detailed scale [68]. A third approach employed artificial neural network analysis for empirical climate downscaling to investigate cross-scale relationships between large-scale circulation and humidity fields and local precipitation [69]. A fourth approach used more traditional synoptic climatological analysis to relate atmospheric dynamics to various scales of basin runoff, showing different characteristic responses for different basin scales [70].
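The empirical (data-based) side of downscaling can be illustrated with a minimal regression sketch. The data, the “grid index” predictor, and the coefficients below are all invented for illustration; the neural-network and synoptic approaches described above are far more elaborate versions of the same transfer-function idea.

```python
import numpy as np

# Minimal empirical-downscaling sketch (synthetic data): fit a statistical
# transfer function from a large-scale predictor (say, a grid-cell humidity
# index) to a local observable (station precipitation), then apply it to new
# large-scale values.
rng = np.random.default_rng(1)

grid_index = rng.uniform(0.0, 10.0, size=200)                       # large-scale predictor
station_precip = 2.0 + 1.5 * grid_index + rng.normal(0, 0.5, 200)   # local observations

# Least-squares fit of the linear transfer function.
slope, intercept = np.polyfit(grid_index, station_precip, deg=1)

def downscale(index_value: float) -> float:
    """Estimate the local variable from the large-scale index."""
    return slope * index_value + intercept

print(downscale(5.0))  # close to the synthetic "truth" 2.0 + 1.5 * 5.0 = 9.5
```

A linear fit is the simplest possible transfer function; replacing it with a neural network or a synoptic classification changes the estimator, not the underlying logic of relating large-scale fields to local observations.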
One of the forces encouraging downscaling is an interest in fostering public participation in discussions of issues being addressed by integrated assessment [71]. A frequent discovery in such a process is that, as presently constructed, models do not always produce answers to the questions being asked. This has led to consideration of “inverse” approaches to assessment modeling, beginning with relevant bottom-up questions and working back toward appropriate modeling structures. In this sense, listening to local concerns can help to catalyze a rethinking of how integrated assessment modeling is done. Integration Once all relevant data are converted to a common metric, or an algorithm for iterative convergence is specified, the integration challenge has been greatly simplified – but in some cases perhaps misleadingly so. If the aim is to attain an integrated understanding of processes, simply converting numbers to a common spatial scale does not necessarily assure conceptual integration, as contrasted with computational integration. In most cases, full integration also involves attention to interactions between processes that operate at different scales, because processes, the controls that shape them, and appropriate representations of them may not be scale-invariant. It often involves bridging between analytical styles, e.g., top-down models and bottom-up case studies, in order to understand the meaning behind the different perspectives embedded in the source data. In addition, it is often a matter of reconciling differences in process assumptions, theoretical foundations, and perceived standards as to what constitutes the best science, sometimes rooted in different disciplinary traditions.
Further, because different interacting processes may operate at different scales (e.g., between the scale of ecosystems and the scale of governmental units making decisions about them), efforts to incorporate a variety of linkages in a single analysis or action often must confront problems of “scale mismatch” [72, 28]. Among the avenues being investigated are “adaptive” approaches to analysis and assessment, which permit modifications of the scale as more is learned about the relevant processes and their interactions. What we know is that how integration is done can affect its outcomes. It is at least arguable that in many cases integration incorporates certain values of the modeler – often implicitly, sometimes without the modeler’s full awareness – and that this undermines the supposed objectivity of the process [73]. We also know that the scale at which integration is performed and results are reported can affect uses of the work (see chapter 3). Addressing cross-scale dynamics A profound general problem, because so much data collection and analysis occurs at a particular scale, large or small, is that data (and understandings)
are often scarce concerning cross-scale relationships and interactions. The challenge is to simultaneously capture driving and constraining forces at multiple scales and how they relate to each other [74]. Aside from system dynamics types of approaches, the most common strategy is to call upon hierarchy theory [75, 76, 77], which assumes that interactions between the dynamics of processes and structures at different scales shape systems at any one scale and that, therefore, hierarchies of scale-related processes define “constraint envelopes” within which systems can operate. Hierarchical perspectives can, however, be applied without necessarily relying on hierarchy theory. An example of a formal statistical approach related to this perspective, concerned with multiscale statistical inference, begins with a set of hierarchically defined partitions and then combines “data likelihoods” at each scale with a Bayesian prior probability structure [78]. One problem, of course, is that cross-scale dynamics may not always fit neatly into hierarchical structures. Still another possible source of ideas – at least for cross-scale pattern dynamics – is the literature on fractal structures [79], which suggests a predictable relationship between the scale of measurement and the measured phenomenon. Whether such relationships might also hold for non-pattern aspects of chaotic system dynamics is not so clear. A positive recent step has been efforts (e.g., by the U.S. National Science Foundation) to establish richer information infrastructures, especially regarding longitudinal data sets, although – as we all know – assuring continued financial support for really long-term data collection structures is a continuing challenge. Meta-scale synthesis On one level, it is not too difficult to outline a general approach for utilizing perspectives from multiple scales.
Elements of a general strategy would include: definition of a question to be answered (e.g., how much may climate change impacts be reduced by autonomous adaptation, or how much is biodiversity likely to be reduced by global climate change); conversion of the question into operational definitions amenable to quantitative measurement and analysis, consistent with available data; selection of two or more scales – ideally perhaps three or four, related to relevant conceptions of hierarchical levels in the processes of interest [80], a compromise between an intellectual interest in all levels and resource limitations; calculation of an answer to the question at each of the scales; display of the set of answers in a format that provides insights about patterns and/or relationships; and derivation of findings from that display, perhaps using more or less standardized conventions. This strategy, however, fills only part of the need. It has the clear potential to illuminate differences among scales in functional relationships important to understanding global change, avoiding the tunnel vision that comes from looking at such relationships at only one scale. It may not, however, necessarily
26 GEOGRAPHICAL SCALING ISSUES IN INTEGRATED ASSESSMENTS OF CLIMATE CHANGE
illuminate interactions between scales. Addressing cross-scale dynamics in integrated assessment seems to call for one of four approaches (or a combination of them):
(a) the kind of cyclical analysis proposed by strategic cyclical scaling, with each scale iteration including specific attention to interactions with other scales;
(b) use of a methodology that delivers simultaneous solutions of equations representing within-scale and cross-scale relationships, such as system dynamics approaches;
(c) finding and adopting a conceptual/theoretical construct that relates macroscale and microscale processes, such as the patch-dynamics hypothesis and/or hierarchy theory; or
(d) stepping out beyond the formal integrated assessment activity to access macroscale-microscale interaction understandings from relevant literatures and/or expert judgments, inserting the resulting understandings as additional model specifications, parameters, variables, or uncertainties.
One example of a possible integrative approach is outlined conceptually in Figure 2.7; note the challenge of integrating both multiscale and cross-scale understandings in a single computational system.
Figure 2.7: Oak Ridge approach for scale integration.
For these and less ambitious meta-scale integration goals, three central challenges are worth considering:
Data availability
GCLP found that many questions being addressed by research protocols at global, national, or large regional scales cannot be pursued readily at more
local scales because of a lack of data at those detailed scales. For climate change studies, the most familiar example is climate change forecasts at local scales, e.g., for a major city in a developing country. But the data gap is even more critical regarding impacts of climate change at a local scale, and it is still more problematic regarding local capacities to cope, adapt, and otherwise respond to risks or realities of impacts. Meanwhile, dealing with relatively localized scales by generalizing from a few detailed case studies is also problematic. This suggests that, for many purposes, balanced multi-scale or meta-scale synthesis is fundamentally undermined by data limitations at local scales.
Creating formal quantitative structures that synthesize
Incorporating meta-scale synthesis into integrated assessment modeling would ideally include a model component that provides an artificial-intelligence equivalent of human integrative judgment: not only combining numbers and applying mechanical algorithms but applying some form of synthesizing “reasoning.” This appears to be a laudable goal for the long term, but the fact remains that handling synthesis formally within the structure of an integrated assessment model requires quantitative structures appropriate to the questions being asked. A pertinent question at this stage is the degree to which this process can be generalized in integrated assessment models versus being tailored for each question, represented in the overall modeling formulation as an exogenous input to be specified on a case-by-case basis. The current state of the art seems to require identifying a manageable number of key relationships for each case.
Formalizing processes for combining quantitative and qualitative analysis
One of the frontiers of integrated assessment, many practitioners believe, is transcending the boundary between quantitative analysis and the non-quantitative components of an assessment process.
Particular challenges range from incorporating expert judgment to incorporating narrative “stories,” scenarios, and analogs along with stakeholder knowledge bases [81]. One intriguing possibility is formal qualitative modeling, where broad insights do not depend on the precise shape of curves [43]. Another direction of interest is incorporating fuzzy logic in simulation modeling [55].
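The fuzzy-logic direction can be illustrated with a minimal sketch (the membership functions, category names, and threshold values below are hypothetical illustrations, not drawn from the chapter or from reference [55]): a crisp quantity, such as a projected warming, is mapped onto overlapping qualitative categories rather than forced into a single crisp class, which is one way qualitative judgments can enter a simulation.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical severity categories for a warming signal (degrees Celsius).
categories = {
    "mild":     (0.0, 1.0, 2.0),
    "moderate": (1.0, 2.5, 4.0),
    "severe":   (3.0, 5.0, 7.0),
}

def fuzzify(delta_t):
    """Map a crisp value onto degrees of membership in each category."""
    return {name: round(triangular(delta_t, *abc), 2)
            for name, abc in categories.items()}

# A value can belong partly to several categories at once, which is the
# feature that lets broad insights survive imprecise curve shapes.
print(fuzzify(2.0))
```

The design point is that downstream model logic can then weight rules by membership degree instead of branching on hard thresholds.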
Directions for Improving our Capabilities
If we want to improve our ability to meet these challenges, the path probably lies not in selecting a single operational approach for all purposes but in enhancing a variety of approaches as we learn further lessons from experience with integrated assessment modeling. What, then, are the most important cross-cutting directions for research, and for research funding? In general, it appears that the easiest pieces of the puzzle to create are the computational ones, although
such issues as the representation of uncertainty and nonlinearities continue to be challenges that transcend scale-related questions alone. Rather than being limited by computational capabilities or by the ability to model processes and relationships once we know them and have data about them, the state of the art is first of all profoundly data-limited [82, 58]. It is also still conceptually limited, not so much in representing more than one scale as in representing interscale processes and interactions. Given these realities, I would suggest the following directions as a skeleton for a multidisciplinary, multi-institutional, multinational research agenda to improve our capabilities for addressing scale and scaling issues in integrated assessment:
■ Increase the availability of local or small-regional scale data, related to key issues and indicators. While this is obviously a resource-intensive process, it is an essential building block. One possible direction for exploration may be an expanded use of instrumentation for routine data-gathering; but it is important first to determine key indicators in order to assure that our modeling will be indicator-driven rather than merely data-driven.
■ Improve longitudinal databases related to complex nature-society interactions and multiple stresses. A related need is to increase our knowledge base about interconnected phenomena and processes that cross disciplinary boundaries, especially between nature and society, and to embed our expanding understandings in comprehensive databases maintained over long periods of time.
■ Identify key macroscale-microscale interaction issues and improve understanding of those key interactions. Along lines barely hinted at by Figure 2.4, we need to strengthen both theoretical and empirical understandings of the major components of cross-scale dynamics in global change processes, in order to determine how best to build this dimension into integrated assessment models.
■ Explore tools for dynamic modeling of complex systems that are not now widely used in integrated assessment modeling. Most of our current modeling structures were built to understand process interactions at a very large scale. It is possible that structures intended to illuminate complex multi-scale system dynamics will need different tools, from system dynamics (or other approaches for deriving simultaneous solutions) to such alternatives as fuzzy logic, dynamic spatial simulation modeling, and applications of the science of complexity.
■ Improve understandings of how to link analysis, assessment, deliberation, and stakeholder interaction.
Finally, it seems clear that some aspects of the integrated assessment effort – especially related to upscaling from small-regional case study experiences, incorporating uncertainties in scenario construction, and involving stakeholders as experts in their own domains – will call for new paradigms for relating quantitative and non-quantitative contributions to our enterprise.
References
1. Kates, R. W., T. J. Wilbanks, and R. Abler (eds.): Global Change in Local Places: Estimating, Understanding, and Reducing Greenhouse Gases. Cambridge: Cambridge University Press.
2. Gibson, C., E. Ostrom, and T-K. Ahn, 1998. “Scaling Issues in the Social Sciences.” IHDP Working Paper no. 1. Bonn: IHDP.
3. Gibson, C. C., E. Ostrom, and T-K. Ahn, 2000. “The Concept of Scale and the Human Dimensions of Global Change: A Survey.” Ecological Economics 32: 217–39.
4. Klemes, V., 1983. “Conceptualization and Scale in Hydrology.” Journal of Hydrology 65: 1–23.
5. Holling, C. S., 1992. “Cross-scale Morphology, Geometry, and Dynamics of Ecosystems.” Ecological Monographs 62: 447–502.
6. Harvey, D., 1989. The Growth of Postmodernity. Baltimore: Johns Hopkins.
7. Smith, N.: Homeless/Global: Scaling Places. In: J. Bird et al. (eds.). Mapping the Futures: Local Cultures, Global Change.
8. Towards a Comprehensive Geographical Perspective on Urban Sustainability, 1999. Final Report of the 1998 National Academy of Science Workshop on Urban Sustainability. New Brunswick: Rutgers – The State University of New Jersey, Center for Urban Policy Research.
9. Wilbanks, T. J., and R. W. Kates, 1999. “Global Change in Local Places.” Climatic Change 43: 601–628.
10. Alexander, J. C., and B. Giesen, 1987. “From Reduction to Linkage: The Long View of the Micro-Macro Debate.” In: J. C. Alexander et al. (eds.). The Micro-Macro Link. Berkeley: University of California: 1–42.
11. Clark, W. C., 1985. “Scales of Climate Impacts.” Climatic Change 7: 5–27.
12. Turner, B. L. II, R. E. Kasperson, W. B. Meyer, K. M. Dow, D. Golding, J. X. Kasperson, R. C. Mitchell, and S. J. Ratick, 1990. “Two Types of Global Environmental Change: Definitional and Spatial Scale Issues in Their Human Dimensions.” Global Environmental Change 1: 14–22.
13. Holling, C. S., 1995b. “What Barriers? What Bridges?” In: L. H. Gunderson, C. S. Holling, and S. S. Light (eds.). Barriers and Bridges to the Renewal of Ecosystems and Institutions. New York: Columbia University: 3–34.
14. Openshaw, S., and P. J. Taylor, 1979. “A Million or So Correlation Coefficients: Three Experiments on the Modifiable Areal Unit Problem.” In: N. Wrigley (ed.). Statistical Applications in Spatial Science. London: Pion: 127–144.
15. USGCRP, 2000. Climate Change and America: The Potential Consequences of Climate Variability and Change for the United States. U.S. Global Change Research Program, Washington, DC.
16. NAS/NRC, 1999. Our Common Journey: A Transition Toward Sustainability. Board on Sustainable Development, National Research Council. Washington: National Academy Press.
17. Clark, W. C., and N. M. Dickson, 1999. “The Global Environmental Assessment Project: Learning from Efforts to Link Science and Policy in an Interdependent World.” Acclimations 8: 6–7.
18. Redman, C., J. M. Grove, and L. Kuby, 2000. “Toward a Unified Understanding of Human Ecosystems: Integrating Social Science into Long-Term Ecological Research.” Working Paper, Long Term Ecological Research (LTER) Network.
19. Corell, R., 2000. Prepared remarks at a symposium in his honor: “A Decade of Global Change Research: Accomplishments and Future Directions of Earth System Sciences.” Fairfax, VA, May 15.
20. Environment Canada, 1997. The Canada Country Study: Climate Impacts and Adaptation. Adaptation and Impacts Research Group, Downsview, Ontario.
21. Cash, D. W., and S. C. Moser, 1998. “Cross-scale Interactions in Assessments, Information Systems, and Decision-making.” In: A Critical Evaluation of Global Environmental Assessments. Global Environmental Assessment Project. Cambridge: Harvard University.
22. Gallagher, R., and T. Appenzeller, 1999. “Beyond Reductionism.” Science 284: 79.
23. Rosswall, T., R. G. Woodmansee, and P. G. Risser, 1988. Scales and Global Change. New York: Wiley.
24. Joao, E., 2000. “The Importance of Scale Issues in Environmental Impact Assessment and the Need for Scale Guidelines.” Research Papers in Environmental and Spatial Analysis, No. 62. Department of Geography and Environment, London School of Economics.
25. Malanson, G. P., 1999. “Considering Complexity.” Annals of the Association of American Geographers 89: 746–753.
26. Jarvis, P. G., 1993. “Prospects for Bottom-Up Models.” In: J. R. Ehleringer and C. B. Field (eds.). Scaling Physiological Processes: Leaf to Globe. New York: Academic Press: 117–126.
27. Phillips, J. D., 1999. Earth Surface Systems: Order, Complexity, and Scale. Oxford: Blackwell.
28. Cash, D. W., and S. C. Moser, 2000. “Linking Global and Local Scales: Designing Dynamic Assessment and Management Processes.” Global Environmental Change 2: 109–120.
29. Kates, R. W., M. Mayfield, R. Torrie, and B. Witcher, 1998. “Methods for Estimating Greenhouse Gases from Local Places.” Local Environment 3: 279–298.
30. U.S. Department of Energy, 1997. Technology Opportunities to Reduce U.S. Greenhouse Gas Emissions, Appendix B: Technology Pathways. Washington, DC: DOE.
31. Wilbanks, T. J., P. Leiby, R. Perlack, T. Ensminger, S. Hadley, and S. Wright, 2002. Tools for an Integrated Analysis of Mitigation and
Adaptation as Responses to Concerns about Impacts of Global Climate Change. ORNL Report, Oak Ridge National Laboratory.
32. Rotmans, J., 1998. “Global Change and Sustainable Development: Towards an Integrated Conceptual Model.” In: Schellnhuber, H-J., and V. Wenzel (eds.). Earth System Science: Integrating Science for Sustainability. Berlin: Springer-Verlag: 421–453.
33. Bauer, B. O., J. A. Winkler, and T. T. Veblen, 1999. “Methodological Diversity, Normative Fashions, and Metaphysical Unity in Physical Geography.” Annals of the Association of American Geographers 89: 771–778.
34. Easterling, W. E., 1997. “Why Regional Studies Are Needed in the Development of Full-scale Integrated Assessment Modelling of Global Change Processes.” Global Environmental Change 7: 337–356.
35. Easterling, W. E., A. Weiss, C. Hays, and L. O. Mearns, 1998. “Spatial Scales of Climate Information for Simulating Wheat and Maize Productivity.” Agricultural and Forest Meteorology 90: 51–63.
36. Costanza, R., and T. Maxwell, 1994. “Resolution and Predictability: An Approach to the Scaling Problem.” Landscape Ecology 9: 47–57.
37. Mearns, L. O., 2000. “The Issue of Spatial Scale in Integrated Assessments: An Example of Agriculture in the Southeastern U.S.” Paper presented at the annual meeting of the Association of American Geographers, Pittsburgh, April.
38. Mearns, L. O., T. Mavromatis, E. Tsvetsinskaya, C. Hays, and W. Easterling, 1999. “Comparative Response of EPIC and CERES Crop Models to High and Low Spatial Resolution Climate Change Scenarios.” Journal of Geophysical Research 104: 6623–6646.
39. Grubler, A., N. Nakicenovic, and A. MacDonald, 1998. Global Energy Perspectives. Cambridge: Cambridge University Press.
40. Wene, C. O., 1996. “Energy-economy Analysis: Linking the Macroeconomic and Systems Engineering Approaches.” Energy 21: 809–824.
41. Hirschboeck, K. K., 1999. “A Room with a View: Some Geographic Perspectives.” Annals of the Association of American Geographers 89: 696–706.
42. Kasperson, R., J. Kasperson, and B. L. Turner, 1995. Regions at Risk. Tokyo: United Nations University.
43. Schellnhuber, H-J., and V. Wenzel (eds.), 1998. Earth System Science: Integrating Science for Sustainability. Berlin: Springer-Verlag.
44. WBGU (German Advisory Council on Global Change), 1997. World in Transition: The Research Challenge. Berlin: Springer-Verlag.
45. Abler, D. G., J. Shortle, A. Rose, and G. Oladosu, 2000. “Characterizing Regional Economic Impacts and Responses to Climate Change.” Global and Planetary Change 25: 67–81.
46. Risbey, J., M. Kandlikar, H. Dowlatabadi, and D. Graetz, 1999. “Scale, Context, and Decision Making in Agricultural Adaptation to Climate Variability and Change.” Mitigation and Adaptation Strategies for Global Change 4: 137–165.
47. Downing, T. E., R. Butterfield, S. Cohen, S. Huq, R. Moss, A. Rahman, Y. Sokona, and L. Stephen, 2001. “Climate Change Vulnerability: Toward a Framework for Understanding Adaptability to Climate Change Impacts.” Environmental Change Institute, Oxford University.
48. Root, T. L., and S. H. Schneider, 1995. “Ecology and Climate: Research Strategies and Implications.” Science 269: 334–341.
49. Watt, A. S., 1947. “Pattern and Process in the Plant Community.” Journal of Ecology 35: 1–22.
50. Wu, J., and O. L. Loucks, 1995. “From Balance of Nature to Hierarchical Patch Dynamics: A Paradigm Shift in Ecology.” Quarterly Review of Biology 70: 439–466.
51. Quattrochi, D. A., and M. F. Goodchild (eds.), 1997. Scale in Remote Sensing and GIS. Boca Raton: Lewis Publishers.
52. National Center for Geographic Information and Analysis, 1997. “Scale.” White Paper No. 6. Santa Barbara, CA: University of California.
53. Turner, M. G., R. V. O’Neill, and R. H. Gardner, 1989. “Effects of Changing Spatial Scale on the Analysis of Landscape Pattern.” Landscape Ecology 3: 153–162.
54. Turner, M. G., 1990. “Spatial and Temporal Analysis of Landscape Patterns.” Landscape Ecology 4: 21–30.
55. Wilson, J. P., and P. A. Burrough, 1999. “Dynamic Modeling, Geostatistics, and Fuzzy Classification.” Annals of the Association of American Geographers 89: 736–746.
56. Rastetter, E. B., A. W. King, B. J. Cosby, G. M. Hornberger, R. V. O’Neill, and J. E. Hobbie, 1992. “Aggregating Fine-scale Ecological Knowledge to Model Coarser-scale Attributes of Ecosystems.” Ecological Applications 2: 55–70.
57. Turner, M. G., R. Costanza, and F. H. Sklar, 1989. “Methods to Compare Spatial Patterns for Landscape Modeling and Analysis.” Ecological Modelling 48: 1–18.
58. Harvey, L. D., 1997. “Upscaling in Global Change Research.” Elements of Change 1997, Aspen Global Change Institute: 14–33.
59. Harvey, L. D., 2000. “Upscaling in Global Change Research.” Climatic Change 44: 225–263.
60. van Gardingen, P. R., G. M. Foody, and P. J. Curran (eds.). Scaling Up: From Cell to Landscape. Cambridge: Cambridge University Press.
61. Butterfield, R. E., M. Bindi, R. F. Brooks, T. R. Carter, R. Delecolle, T. E. Downing, Z. Harnos, P. A. Harrison, A. Iglesias, J. E. Olesen, J. L. Orr, M. A. Semenov, and J. Wolf, 2000. “Review and Comparison of Scaling-up Methods.” In: T. E. Downing et al. (eds.), 2000. Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Oxford: University of Oxford, Environmental Change Institute: 393–414.
62. Smith, T. M., H. H. Shugart, G. B. Bonan, and J. B. Smith, 1992. “Modeling the Potential Response of Vegetation to Global Climate Change.” In: F. I. Woodward (ed.). Advances in Ecological Research: The Ecological Consequences of Global Climate Change. New York: Academic Press: 93–116.
63. Templeton, A. R., and L. R. Lawlor, 1981. “The Fallacy of the Averages in Ecological Optimization Theory.” American Naturalist 117: 390–391.
64. IPCC, 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Cambridge, UK: Cambridge University Press.
65. Bass, B., and J. R. Brook, 1997. “Downscaling Procedures as a Tool for Integration of Multiple Air Issues.” Environmental Monitoring and Assessment 46: 152–174.
66. Easterling, W. E., L. O. Mearns, C. Hays, and D. Marx, 2001. “Comparison of Agricultural Impacts of Climate Change Calculated from High and Low Resolution Climate Change Scenarios. Part II: Accounting for Adaptation and CO2 Direct Effects.” Climatic Change 51: 173–197.
67. Jenkins, G. S., and E. J. Barron, 1997. “Global Climate Model and Coupled Regional Climate Model Simulations over the Eastern United States.” Global and Planetary Change 15: 3–32.
68. Yarnal, B., M. N. Lakhtakia, Z. Lu, R. A. White, D. Pollard, D. A. Miller, and W. M. Lapenta, 2000. “A Linked Meteorological and Hydrological Model System: The Susquehanna River Basin Experiment.” Global and Planetary Change 25: 149–161.
69. Crane, R. G., and B. C. Hewitson, 1998. “Doubled CO2 Precipitation Changes for the Susquehanna Basin: Down-scaling from the GENESIS General Circulation Model.” International Journal of Climatology 18: 65–76.
70. Yarnal, B., and B. Frakes, 1997. “Using Synoptic Climatology to Define Representative Discharge Events.” International Journal of Climatology 17: 323–341.
71. Kasemir, B., D. Schibli, S. Stoll, and C. Jaeger, 2000. “Involving the Public in Climate and Energy Decisions.” Environment 42: 32–42.
72. Wilbanks, T. J., 1994. “Sustainable Development in Geographic Context.” Annals of the Association of American Geographers 84: 541–57.
73. Schneider, S. H., 1997. “Integrated Assessment Modeling of Global Climate Change: Transparent Rational Tool for Policy Making or Opaque Screen Hiding Value-Laden Assumptions?” Environmental Modeling and Assessment 2: 229–249.
74. Holling, C. S., 1995a. “Sustainability: The Cross-scale Dimension.” In: M. Munasinghe and W. Shearer (eds.). Defining and Measuring Sustainability: The Biogeophysical Foundations. Washington: United Nations University/World Bank: 65–75.
75. Pattee, H. H. (ed.), 1973. Hierarchy Theory: The Challenge of Complex Systems. New York: G. Braziller.
76. O’Neill, R. V., 1988. “Hierarchy Theory and Global Change.” In: T. Rosswall, R. G. Woodmansee, and P. G. Risser (eds.). Scales and Global Change. New York: John Wiley: 29–45.
77. O’Neill, R. V., A. R. Johnson, and A. W. King, 1989. “A Hierarchical Framework for the Analysis of Scale.” Landscape Ecology 3: 193–205.
78. Kolaczyk, E. D., and H. Huang, 2001. “Multiscale Statistical Models for Hierarchical Spatial Aggregation.” Geographical Analysis 33: 95–118.
79. Mandelbrot, B. B., 1977. Fractals: Form, Chance, and Dimension. San Francisco: W. H. Freeman.
80. Ahmad, A. U., M. Alam, and A. A. Rahman, 1999. “Adaptation to Climate Change in Bangladesh: Future Outlook.” In: S. Huq, Z. Karim, M. Asaduzzaman, and F. Mahtab (eds.). Vulnerability and Adaptation to Climate Change for Bangladesh. Dordrecht: Kluwer Academic.
81. Wilbanks, T. J., and R. Wilkinson, forthcoming. “Integrating Human and Natural Systems to Understand Climate Change Impacts on Cities.” Elements of Change 1999, Aspen Global Change Institute.
82. Hulme, M., E. M. Barrow, N. W. Arnell, P. A. Harrison, T. C. Johns, and T. E. Downing, 1999. “Relative Impacts of Human-Induced Climate Change and Natural Climate Variability.” Nature 397: 688–691.
3 Micro/Macro and Soft/Hard: Diverging and Converging Issues in the Physical and Social Sciences
NICO STEHR 1 AND HANS VON STORCH 2
1 Kulturwissenschaftliches Institut, Essen, Germany
2 Director, Institute for Coast Research, GKSS Forschungszentrum, Geesthacht, Germany
Abstract
The concept of scales is widely used in the social, ecological, and physical sciences, and is embedded in various ongoing philosophical debates about the nature of nature and the nature of society. The question is whether the difference between scales makes a difference and, if so, what difference. Multilevel approaches compete with reductionist approaches. We trace the highlights of the disputes as well as some of the resolutions that have been offered. Most importantly, debates about differences in scale entangle what ought to be kept distinct: analytical knowledge-guiding interests and what might be called practical knowledge-guiding interests. It is unlikely that purely analytical debates can be resolved. However, progress on the impact and relevance of scale can be achieved at the practical-political discursive level of knowledge claims. More specifically, scales are a crucial concept in determining the capacity for action from knowledge about the dynamics and structures of processes. For instance, in the context of climate change, knowledge claims about global and continental processes are relevant for the international political process aimed at abatement measures, whereas knowledge about regional and local effects controls decisions concerning adaptation measures.
Acknowledgements
We appreciate the comments and suggestions received on an earlier version of the paper from Carlo C. Jaeger, Jan Rotmans, and Dale Rothman, although it has not been possible to absorb all of them in the present version.
36 MICRO/MACRO AND SOFT/HARD
Introduction and Overview
Climate scientists share a greater common understanding of the scientific usefulness of scales1 than do social scientists2. This greater agreement among climate scientists does not necessarily enhance the practicality of the knowledge claims about the dynamics of the climate system. Social scientists have debated the relevance of different scales for a long time, and though the arguments have been rehashed and repeated many times, they have rarely led to new insights. Conflicts gave way to a search for linkages between micro and macro levels of analysis, and the failure to agree on linkages reanimated conflicts (cf. Alexander and Giesen [3]). The disputes remain unresolved. We will try to reframe the issue rather than repeat claims that are invariably contested. For the purpose of further reflection, the main point we want to develop in the process of reframing the debate on scaling is that scales – or the difference between micro and macro, as many social scientists would say – are relevant not just as an analytical problem (that is, as a problem of scientific description or explanation) but as a practical problem. The disputes about scale have rarely been treated as a topic that ought to distinguish between knowledge-guiding interests that are concerned, on the one hand, with the practicality of the knowledge generated by science and, on the other hand, with optimizing certain theoretical and methodological conceptions in the process of generating knowledge claims (see Gibson et al. [2: p14]). The practicality of knowledge generated by science refers to the usefulness knowledge may have as a “capacity for action” in practical circumstances and for particular actors. Analytical attributes of knowledge refer to methodological and theoretical attributes of knowledge claims, for example, the extent to which
1 For instance, in climate science, a reference to a continental scale means that only quantities averaged over a continent are considered, whereas a scale of 1 km means that variations taking place over distances much shorter or much longer than 1 km are disregarded. Similarly, a time scale of 100 or more years means that time variations extending over intervals of less than 100 years are disregarded. The concept of scales, and the art of “filtering” dynamical equations so that they become simpler and valid for a limited range of spatial and temporal scales, is worked out formally in textbooks on geophysical fluid (atmosphere, ocean) dynamics (see, for instance, Pedlosky [1]).
2 We will refrain from extensive discussions of the terminology used in the social sciences; instead, we adhere to the difference between macro and micro in the social sciences. This difference does not only (or even mainly) refer to allegedly “precise” operations and conceptions along readily quantifiable (flat or hierarchical) dimensions such as time and location. It would be a mistake to conflate these two approaches, as is the case in mundane reasoning. Such a conflation occurs in a report by Gibson et al. [2: p11], where small scale “refers to phenomena that are small in regard to scales of space, time, or quantity” and large scale “refers to big items, quantities, or space.”
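The “filtering” idea in footnote 1 can be sketched numerically. The following is a toy illustration, not the formal derivation found in geophysical fluid dynamics texts such as Pedlosky [1]; the signal, window size, and period values are invented for the example. Averaging a series over a window of a given scale removes variations much shorter than that scale while letting longer variations pass.

```python
import math

def moving_average(series, window):
    """Average over a sliding window: variations much shorter than
    `window` samples are smoothed away, longer ones survive."""
    half = window // 2
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - half), min(len(series), i + half + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

# A slow (period-100) oscillation plus a fast (period-5) perturbation:
# a crude stand-in for climate signal plus sub-scale "weather" noise.
signal = [math.sin(2 * math.pi * t / 100) + 0.5 * math.sin(2 * math.pi * t / 5)
          for t in range(200)]

# The window spans two full fast cycles, so the fast component largely
# cancels while the slow component passes through almost unchanged.
filtered = moving_average(signal, window=10)
```

Formal scale analysis filters the governing equations themselves rather than the data, but the selection effect, retaining only one range of scales, is the same.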
propositions developed for one level can be generalized to another level, or the extent to which they can be formalized. The practicality of knowledge claims, in contrast, aims to assist actors, confronted with specific conditions of action, to set something into motion and to do so, of course, with the aid of knowledge. We maintain that there is no linear relation or obvious congruence between enhancing the analytical and the practical capacity of knowledge. Two examples may illustrate the point. (1) The determination that the “growing division of labor in society explains the rising divorce rates in advanced society” constitutes a prominent and eminent social science explanation. However, a nation, a region, a city, a village, or a neighborhood will hardly be able to “manipulate” the division of labor and thereby “arrest” (in the sense of effect) divorce rates within its boundaries. (2) The insight that the equilibrium global temperature of Earth would rise by, say, 2 degrees Celsius if carbon dioxide concentrations in the atmosphere doubled does not provide people at the regional and local level with the capacity to react skillfully, as this insight on the global scale provides no assessment of ongoing environmental change on a regional or local scale within the foreseeable future. Knowledge-guiding interests that aim to enhance the practicality of knowledge claims and knowledge claims that live up to specific analytical attributes (such as logic, truthfulness, reality-congruence, etc.) are not mutually exclusive; however, they do not necessarily lead to identical knowledge claims. The distinction between analytical and practical is particularly relevant to actors who have to deal with and convert scientific knowledge claims into practical action. Thus, choices of scale affect not only what can or will be analyzed but also what can or will be done.
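The CO2-doubling example in (2) rests on the commonly used logarithmic approximation relating concentration to equilibrium warming, ΔT = S · log2(C/C0), where S is the warming per doubling. As a hedged sketch (the 2-degree default simply echoes the chapter's illustrative “say, 2 degrees”; it is not a definitive sensitivity estimate):

```python
import math

def equilibrium_warming(c_ratio, sensitivity=2.0):
    """Equilibrium temperature change (deg C) for a CO2 concentration
    ratio c_ratio = C/C0, given the warming per doubling (`sensitivity`).
    The 2.0 default echoes the chapter's illustrative figure."""
    return sensitivity * math.log2(c_ratio)

print(equilibrium_warming(2.0))            # doubling -> 2.0, by construction
print(round(equilibrium_warming(1.5), 2))  # a 50% increase -> 1.17
```

The sketch reproduces only the global-scale relation; as the text stresses, such a number by itself says nothing about what will happen, or what can be done, at any regional or local scale.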
But first, we need to restate and summarize the social and the physical science debate about the role of scales in the analysis and the differences that are claimed on behalf of a differentiation with the help of scales. In the case of physical science, our description will focus on climate science.
Scales in the Social Sciences: Mixing Levels or What is the Difference?
In every living thing what we call the parts is so inseparable from the whole that the parts can only be understood in the whole, and we can neither make the parts the measure of the whole nor the whole the measure of the parts; and this is why living creatures, even the most restricted, have something about them that we cannot quite grasp and have to describe as infinite or partaking of infinity.
Johann Wolfgang von Goethe (1785)
Goethe maintains that the understanding of parts or wholes requires the elimination of their difference. It appears that the social sciences have generally followed his advice, since a liberal mixing of levels3 or multilevel analysis is common in social science accounts. Even in approaches that are self-consciously micro or macro, linkages between levels are evident. If this is the case, then the difference between levels is unnecessary. The assertion that a differentiation is or is not helpful rests on a certain comprehension of the constitution of the processes examined and therefore on specific knowledge-guiding interests internal to the scientific community. For example, the common theoretical link that sociologists forge between the conduct of individual actors (micro level) and situational factors or the social structure (macro level) is typically a particular social psychological theory. When Robert K. Merton [4] explains deviant behavior, he does so not as the outcome of individual differences but as the consequence of the situation within which the actor is located. Merton argues that unattainable goals produce deviant behavior. Whether the actor in fact faces unattainable goals is determined by the situation or social structure. Situations vary, but the social psychology that links actor and situation (namely, trying to pursue legitimate goals) is the same for each individual. Hence the differences in location explain deviance. Without the social psychological premises, the account would be incomplete [5: p102–103].4 Put another way, the problem is that neither solitary perspective “pays adequate attention to the constructed nature of both individuals and groups” [9: p59]. Part and system form a whole. The mixture of different scales is argued to be constitutive for social phenomena. Paraphrasing Wittgenstein [10: p20], understanding parts of an ordinary language game requires the comprehension of a form of life or a cultural system.5
3 We use the term “level” mostly as synonymous with “scale”. However, when two different types of scales are considered, for instance space and time, they are considered to be of the same “level” if they are found to co-exist. 4 The volatility of shifting positions, courting methodological individualism but not to the exclusion of holism (or vice versa), is also one of the characteristics of classical social theory, for example in the work of Marx, Weber and Durkheim, but also in the writings of classical contemporary social theory such as Parsons [6: p89–102, p177–196, p298–322], and in the assumptions that informed neo-classical economic discourse. By the same token, in advocating an institutionalist view, Meyer et al. [7: p13–14] do not postulate a society without people. However, they maintain that the individual is a social construction and that the linkage between institutional scripts enacted by individuals is the social psychology advocated by C. Wright Mills [8]. The institutionalist perspective corrects for the excessive emphasis on the preeminent status of (individual) actors in modern economic, psychological and social theory, characterized by individual socialization and internalized values. 5 Using more conventional sociological terminology, both “microscopic processes that constitute the web of interactions in society and the macroscopic frameworks that
SCALING IN INTEGRATED ASSESSMENT 39
As the label already indicates, the institutionalist perspective assigns explanatory priority to the macro scale: “Social processes and social change … result at least in part, from the actions and interactions among large-scale actors … Welfare systems, job markets, and cultural structures become products of organizations or sets of organizations” [7: p17]. Network analysis, rational choice theory, interaction ritual chain analysis [12] and Homans’ [13] behaviorism typically favor the micro scale. These strategies are linked to the theoretical premise that the realities of social structure reveal patterns of “repetitive micro-interaction” [12: p985]. What is relevant, and what constitutes the immediate environment for the analysis, depends on this prioritizing of scales. Macro models – whose own internal divisions of levels are problematic – prefer resource or ecological dependency perspectives, while micro models that acknowledge the presence of levels emphasize cultural practices and conceptions as their most relevant environment. Approaches that readily acknowledge and freely mix different scales in their analysis place different emphasis on which scales are relevant, on how one progresses down or up the conceptual scale (aggregation, cumulation, interaction), and on how robust or recalcitrant different units of analysis happen to be. The strict limitation to certain scales, that is, the conviction that levels cannot be mixed, is based on considerations of methods or of access to levels. As Scheff [14: p27–28] states in an exemplary fashion: The macroworld, “so vast and so slow moving, requires special techniques to make its regularities visible – the statistics and mathematical models now taken for granted.
The study of the microworld also requires special techniques, but for the opposite reason: the movements are too small and quick to be readily observable to the unaided eye.”6 In our interpretation, the elevation of one level over another is necessitated by perspective: the perspective of the observer as compared with the level of the observed. The debate about levels of analysis in the social sciences is not constrained or disciplined by commonly accepted definitions of the boundaries of disciplines and subdisciplines. By contrast, the choice to work within the accepted confines of sub-atomic physics or cellular biology a priori limits the resolution of patterns that can legitimately be studied. Social scientists have not reconstructed the world of social phenomena in the same hierarchical fashion that is generally taken for granted in the physical sciences.
result from and condition those processes are essential levels for understanding and explaining social life” [3: p13, 11: p185]. 6 More specifically, as Scheff [14: p28] notes, “observing the microworld requires not a telescope, such as a sample survey, but a microscope – video- and audiotapes, or at least verbatim texts, which provide the data for discourse analysis.”
Scales in the Physical Sciences: the Climate System

A characteristic of the physical climate system is the presence of processes on all spatial scales. The “scale” of a process is the extension of the area within which the direct impact of the process is felt. Thus the spatial scale of the tropical trade wind system is several thousand kilometers; that of a mid-latitude cyclone is about one thousand kilometers; a front, a few hundred kilometers; a thunderstorm, a few kilometers; and individual turbulent eddies in the atmospheric boundary layer exert an influence on scales of several meters and less (Fig. 3.1). A typical feature of this cascade of spatial scales is that it is associated with a similar cascade of temporal scales. Smaller scales exhibit shorter-term variations, whereas larger scales vary on longer time scales. For instance, a cyclone with a diameter of a thousand kilometers exists for several days, whereas a thunderstorm of several kilometers diameter is dissipated after a few hours (Fig. 3.1). A similar analysis can be made for oceanic processes.
Figure 3.1: Scales in atmospheric dynamics.
All of these processes interact. The trade wind system, as part of the Hadley Cell, helps to maintain a meridional temperature gradient at mid latitudes, so that the air flow becomes unstable and eddies form (namely, extratropical cyclones); these storms form fronts, and the strong winds blowing above the Earth’s surface create a turbulent boundary layer of several hundred meters height. In this argument, large-scale features create the environmental conditions under which smaller-scale features emerge. This view is supported by an experiment with a complex climate model simulating atmospheric motion on an “aqua planet”, i.e., a globe without topography [15]. Initiated with a motionless state, and driven by equator-to-pole gradients in the global ocean’s surface temperature and by solar radiation, the general circulation of the atmosphere just described emerges within a few weeks, with trade winds, extratropical storms, and turbulent boundary layers. Climate at a smaller scale thus appears as conditioned by the state at a larger scale [16]. However, the smaller scale is not determined by the larger scale, as demonstrated by weather details, which may differ greatly in two very similar synoptic situations [17, 18]. But information about the conditioning large-scale state is incorporated in the statistics of small-scale features. This fact is used in paleoclimatic reconstructions [19, 20], which are based entirely on “upscaling” of local information such as tree ring widths or densities. Do the smaller scales affect the larger scales? They do: without the small-scale eddies in the turbulent boundary layer, a cyclone would not lose its kinetic energy; without the extratropical storms, a much stronger equator-to-pole temperature gradient would appear, and the Hadley Cell, with its trade wind system, would possibly extend to the polar regions. While the large scales condition the smaller scales, the smaller scales make the large scales more fuzzy. There is a simple intuitive argument for this asymmetry: there are many realizations of the smaller-scale process encompassed in the area of influence of one larger-scale process. The smaller-scale processes represent a random sample of possible realizations, and their feedback on the large-scale process depends on the statistics of the smaller-scale processes.
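The asymmetry just described, namely that one large-scale process feels its many small-scale realizations only through their statistics, can be illustrated with a short Monte Carlo sketch. The numbers here are entirely hypothetical and not taken from the chapter; the point is simply that the fluctuation of the aggregate feedback shrinks like one over the square root of the number of small-scale realizations.

```python
import numpy as np

# Hypothetical illustration (not a model from the chapter): a large-scale
# process "feels" n_small small-scale processes only through their
# aggregate feedback. Each realization is drawn at random; the aggregate
# converges to the ensemble mean, its spread shrinking like 1/sqrt(n_small).
rng = np.random.default_rng(0)

def aggregate_feedback_spread(n_small, n_trials=2000):
    """Standard deviation of the mean feedback of n_small random 'eddies'."""
    feedbacks = rng.standard_normal((n_trials, n_small)).mean(axis=1)
    return feedbacks.std()

spread_few = aggregate_feedback_spread(10)     # few eddies: noisy feedback
spread_many = aggregate_feedback_spread(1000)  # many eddies: nearly fixed
```

With only ten realizations the large scale sees a noticeably random feedback (spread near 1/√10 ≈ 0.32); with a thousand, the feedback is close to deterministic, which is why only the statistics of the small scales matter.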
Figure 3.2: a) Energy balance model (EBM) without noise; b) with noise.

The details of a single storm are not relevant, but the preferred area of formation, the track of the storms, and the mean intensity do influence the formation of the general atmospheric circulation. Aside from making the large scales more fuzzy, smaller-scale short-term variations also cause the large-scale components to exhibit slow variations. This phenomenon, comparable to the Brownian motion of macroscopic particles under the bombardment of infinitely many microscopic molecules, is demonstrated in the “stochastic climate model” of Hasselmann [21]. The short-term variations are considered random, and the large-scale components integrate this random behavior. Whether the many small-scale features really vary randomly is irrelevant; as long as these processes are strongly nonlinear, often a valid assumption, their joint effect cannot be distinguished from randomly generated numbers. This effect is illustrated in Figure 3.2, showing the time evolution of a one-dimensional world characterized by a large-scale (global) temperature: solar (short-wave) radiation is intercepted by this world; part of this radiation is reflected back to space; the intercepted radiation is re-emitted as thermal (long-wave) radiation proportional to the fourth power of temperature. When the proportion of reflected solar radiation (“albedo”) is such that a higher temperature is associated with lower reflectivity (less snow and ice) and a lower temperature with higher reflectivity (more snow and ice), then the Earth can have two different equilibrium temperatures. Which of these temperatures is attained depends on where one starts (Fig. 3.2a). However, a different behavior emerges
when the reflectivity exhibits additional random variations, representing the variable small-scale cloud cover of the Earth (Fig. 3.2b). The system exhibits slow variations and intermittent jumps between the two preferred regimes. Obviously, in this thought experiment, the small-scale, short-term variations (“noise”) are a constitutive element, causing the emergence of slow variations of the large-scale temperature [22]. Time series of observed large-scale quantities, like the global mean near-surface temperature, show similar frequency behavior, even if the interesting regime shifts of Figure 3.2b are not obvious [23, 24].
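The thought experiment of Figure 3.2 can be sketched as a minimal numerical energy balance model. All parameter values and the particular form of the ice-albedo function below are illustrative assumptions of ours, not taken from the chapter or from Hasselmann’s paper.

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 342.0     # incident solar radiation per unit area, W m^-2 (illustrative)
HEAT_CAP = 1e8    # effective heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0      # time step: one day, in seconds

def albedo(temp):
    """Ice-albedo feedback: a cold, snowy world reflects more sunlight."""
    return 0.3 + 0.3 / (1.0 + np.exp((temp - 240.0) / 5.0))

def integrate(temp0, n_steps, noise_amp=0.0, seed=1):
    """Euler integration of C dT/dt = S*(1 - albedo + noise) - sigma*T^4."""
    rng = np.random.default_rng(seed)
    temp = temp0
    for _ in range(n_steps):
        eps = noise_amp * rng.standard_normal()  # random "cloud cover"
        flux = SOLAR * (1.0 - albedo(temp) + eps) - SIGMA * temp ** 4
        temp += DT * flux / HEAT_CAP
    return temp

# Without noise (as in Fig. 3.2a), the equilibrium reached depends on the
# starting point: the same forcing supports a warm and a cold state.
warm = integrate(280.0, 10_000)  # settles near ~253 K
cold = integrate(200.0, 10_000)  # settles near ~223 K
```

With a nonzero noise_amp, short-term fluctuations of the reflectivity are integrated by the slowly responding temperature, producing slow variations as in Figure 3.2b; with sufficiently strong noise the trajectory can occasionally jump between the two regimes.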
There is Nothing as Practical as a Good Theory

Our discussion of the macro/micro controversy in the social sciences and of the accomplishments of scaling in climate science has shown that, despite their divergence, the focus in both cultures is on analytical accomplishments. That is, scaling issues tend to be deliberated and judged in the sciences on the basis of internal knowledge-guiding interests. But this also implies that the scaling problem is discussed in a one-sided manner. Improvements in the analytical capacities of knowledge (or the scientificity of knowledge claims) do not always improve the practical efficacy of knowledge. The thesis that analytical improvements enhance the usefulness of knowledge is best captured in the maxim “there is nothing as practical as a good theory”. The emphasis clearly is on good theory, and what constitutes good theory is disputed more in the social than in the physical sciences. An improvement of theory surely constitutes intellectual progress within science. But good theory does not invariably point to “elements” in a concrete situation that can be acted upon in order to accomplish a certain purpose, for example in the sense of affecting the development of a specific process – even though that process is better understood because of the good theory (and the scaling choices made in order to generate it). That good theory – whatever good theory may mean in concrete terms – does not automatically yield practical knowledge can best be shown by defining knowledge as a capacity to act, or as a model for reality (see Stehr [25]). Our choice of terms is inspired by Francis Bacon’s famous observation “scientia est potentia”, or, as it has often been somewhat misleadingly translated, “knowledge is power”. Bacon suggests that knowledge derives its utility from its capacity to set something in motion. The term “potentia”, or capacity, describes the power of knowing.
Human knowledge represents the capacity to act, to set a process in motion, or to produce something.7 The
7 Knowledge, as a generalized capacity for action, acquires an “active” role in the course of social action only under circumstances where such action does not follow purely stereotypical patterns (Max Weber), or is not strictly regulated in some other
fashion. Knowledge assumes significance under conditions where social action is, for whatever reason, based on a certain degree of freedom in the courses of action that can be chosen. Certain circumstances of the situation have to be actionable. Space does not allow us to examine all the implications of our thesis. However, this much needs to be added: the notion that constraints may be apprehended as open to action, or as more or less unalterable, should not be interpreted to mean that the apprehension of the pertinent constraints of action is merely a subjective matter and an idiosyncratic component of social action. Evidently, it is not only the social definition of the nature of the situation that decides whether certain features of the context in question are fixed or not. Such a conception of situational components open to social action would of course ignore what are often called “objective” constraints of human conduct, which facilitate social action or impose certain limits on it. Nonetheless, extraneous or structural constraints that may issue from given social contexts may be interpreted in terms of “sets of feasible options” open to individuals and groups ([26: p107]; emphasis added), because such structural constraints are ultimately the product of decisions of specific actors, though the ability of many to reproduce and affect such constraints is often severely restricted. In the final analysis, the point is that, whatever the objective constraints, they are not beyond the control of all actors. These considerations imply that the treatment of features of specific social contexts as either relatively open or closed to social action should not be driven solely by a subjective definition of situational constraints, but should recognize, for example, that actors at times may be largely unaware of constraints that are “actionable” (cf. Merton [27: p173–176]).
Individuals and groups may therefore need, and be prepared to accept, some form of enlightenment. This “critical” function could well be served by a practical social science that provides a cogent account of human agency as it is mediated by the specifics of certain social contexts. In this sense, the function of social science is to open up possibilities for social action that common sense, for example, strives to conceal or manages to close down (cf. Bauman [28: p16]). For a more detailed discussion of the various implications of our thesis, see Stehr [29]. Karl Mannheim [30] defines, in much the same sense, the range of social conduct generally, and therefore the contexts in which knowledge plays a role, as restricted to spheres of social life that have not been completely routinized and regulated. For, as he observes, “conduct, in the sense in which I use it, does not begin until we reach the area where rationalization has not yet penetrated, and where we are forced to make decisions in situations which have as yet not been subjected to regulation” [30: p102]. Concretely, “The action of a petty official who disposes of a file of documents in the prescribed manner or of a judge who finds that a case falls under the provisions of a certain paragraph in the law and disposes of it accordingly, or finally of a factory worker who produces a screw by following the prescribed technique, would not fall under our definition of ‘conduct.’ Nor for that matter would the action of a technician who, in achieving a given end, combined certain general laws of nature. All these modes of behaviour would be considered as merely ‘reproductive’ because they are executed in a rational framework, according to a definite prescription entailing no personal decision whatsoever” [30: p102].
success of human action can be gauged from changes that have taken place in reality or are perceived by society. The notion of knowledge as a capacity for social action has the advantage that it enables one to stress not just one dimension, but the rich, multifaceted consequences of knowledge for action. The realization of knowledge in political, everyday, economic, or business contexts is embedded in a web of social, legal, economic and political circumstances. That is, the definition of knowledge as a capacity for action strongly indicates that the realization of knowledge is dependent on specific social and intellectual contexts. Knowledge use and its practical efficacy are a function of “local” conditions and contexts. Scaling decisions can therefore be made with respect to actionable circumstances, and not merely with respect to attributes that suggest themselves because they happen to be desirable from an analytical perspective.
The Differences that Make a Difference: Scales in Climate Change and Climate Impact Research

The scale problem outlined above relates to both a success and a major limitation of modern climate research in constructing plausible climate change scenarios. The computing technology available now and in the foreseeable future does not allow the resolution of small-scale features in climate models. Instead, the small-scale features are not described in any detail but are parameterized, i.e., their effect on the resolved scales is described as a function of the resolved scales. In this way, the equations are closed, and the large-scale features are described realistically. The overall general circulation of the atmosphere is simulated as in the real world; extratropical storms are formed with the right life cycles and locations. Obviously, this success is not perfect, and the next years will see significant improvements. Independently of the degree of success on scales of, say, 2000 km and more, global climate models fail to provide skillful assessments on scales of, say, 100 km and less. Therefore the contemporary discussion concentrates on anthropogenic climate change detectable now on the global scale, and not on the regional and local scale. For political purposes, namely for emphasizing the need for abatement action by the world’s governments, these results, valid for large scales, are sufficient, as the details of expected change are less important than the perception of global risk. When we consider the alternative, though not contradictory, political strategy to abatement measures, namely adaptation, we need regional and local assessments of anthropogenic climate change, since climate impacts people mainly on regional scales. Regional scales as social constructs are highly variable. Storm surges happen regionally; the storm track may be shifted by a few hundred kilometers; when rain replaces snowfall, or snow melts early, a
catchment is affected, and so on. Such information may be derived by postprocessing the output of global climate models, exploiting the links between the scales sketched above. For this purpose, climate scientists have designed dynamically or empirically constructed models describing the possible regional states consistent with the large-scale states generated in global models. This approach is named “downscaling”, as information from larger scales is transferred to smaller scales. “Dynamical downscaling” uses detailed dynamical models, or regional climate models; “empirical downscaling” operates with statistical models fitted to the observational evidence available from recent history. While a large variety of “downscaling” techniques have been developed in the past decade, they have not yet provided climate impact research with the required robust estimates of plausible regional and local climate change scenarios, mainly because global climate models have not yet provided sufficiently converged, consistent large-scale information to be processed through “downscaling” [31]. However, one might expect that this gap could be filled within a few years, so that detailed regional and local impact studies may provide robust scenarios of changes in climatic variables like temperature, storminess and sea level. This information then has to be postprocessed further with dynamical and empirical models of climate-sensitive systems, like the water balance in a catchment, the ecology of a forest, the statistics of waves on marginal seas, or the economy of agriculture. Of course, in many cases this postprocessing is futile if other factors are considered in parallel to changing climatic conditions, such as changing social preferences, technological progress and the like. These models again suffer from scale problems.
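A toy version of “empirical downscaling” can make the idea concrete: fit a statistical transfer function between a large-scale model variable and a local observable over a historical period, then apply it to a scenario. The data below are synthetic and the linear model purely illustrative; real applications use far richer predictors and careful validation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "historical" record: a large-scale predictor (e.g. a regional
# mean temperature from a global model grid cell) and a local observable
# (e.g. a station temperature in a catchment). Both series are invented.
large_scale_hist = 8.0 + 2.0 * rng.standard_normal(500)
local_hist = 1.5 + 0.8 * large_scale_hist + 0.3 * rng.standard_normal(500)

# "Empirical downscaling": fit the statistical transfer function on the
# historical period (np.polyfit returns [slope, intercept] for degree 1)...
slope, intercept = np.polyfit(large_scale_hist, local_hist, 1)

# ...then translate a large-scale scenario (here: a uniform +3 K warming
# of the predictor) into a local scenario.
large_scale_future = large_scale_hist + 3.0
local_future = intercept + slope * large_scale_future

local_change = local_future.mean() - local_hist.mean()  # approx. slope * 3
```

Because ordinary least squares passes through the sample means, the projected local change is just the fitted slope times the large-scale change, which is exactly the sense in which the local response is conditioned on, but not identical to, the large-scale state.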
Almost all environmental modeling efforts assume that the system may be separated into two subsystems: one that is explicitly described, and another that is considered noise and influences the explicitly described part statistically. The explicitly described “dynamical” part is considered to carry the essential dynamics. In climate and other physical systems, the dynamical subsystem comprises all large-scale processes, while the noise subsystem comprises the small-scale processes. Thus, the former contains relatively few processes and the latter, infinitely many. This convenient separation according to scale can no longer be adopted in other systems, such as ecosystems or economies.
Conclusions

In the physical sciences, discussions of scale revolve around time and place. In the social sciences, discussions of micro/macro tend to concentrate on functional relationships. The concepts of macro vs. micro and of scales in the social and the physical sciences are widely used, but not without problems (see Connolly [32: p10–44]). The question is whether the difference between
scales makes a difference, and, if the scales matter, what difference they make. Not surprisingly, the intensity of the dispute varies by discursive field. In the physical sciences, in this case climate science, the debate is less intense and manifests itself in more definitive knowledge claims about the impact of differences in scale. Well-intentioned scientists focus on the analytical qualities of the knowledge claims they generate, largely because they see these as the solution to the question of “what is to be done”, without asking how effective and practical these accounts are going to be. This can be judged a form of escape from scientific labor. Effectiveness and practicality are governed by prevailing social conditions. The ability to transform prevailing contexts requires, first, an examination and identification of those contextual elements that can be altered. The mutable conditions then drive decisions about scaling.
References

1. Pedlosky, J., 1987. Geophysical Fluid Dynamics. New York: Springer.
2. Gibson, C., E. Ostrom, and T-K. Ahn, 1998. Scaling Issues in the Social Sciences. A Report for the International Human Dimensions Programme on Global Environmental Change. IHDP Working Paper 1. Bonn: IHDP.
3. Alexander, J. C., and B. Giesen, 1987. “From reduction to linkage: the long view of the micro-macro debate.” In: J. C. Alexander, B. Giesen, R. Münch, and N. J. Smelser (eds.). The Micro-Macro Link. Berkeley: University of California Press: 1–42.
4. Merton, R. K., 1938. “Social structure and anomie.” American Sociological Review 3: 672–682.
5. Zelditch, M. Jr., 1991. “Levels in the logic of macro-historical explanations.” In: J. Huber (ed.). Macro-Micro Linkages in Sociology. Newbury Park, California: Sage: 101–106.
6. Parsons, T., 1954. Essays in Sociological Theory. New York: Free Press.
7. Meyer, J. W., F. O. Ramirez, and J. Boli, 1987. “Ontology and rationalization in the Western cultural account.” In: G. M. Thomas, J. W. Meyer, F. O. Ramirez, and J. Boli (eds.). Institutional Structure. Constituting State, Society and the Individual. Newbury Park, California: Sage: 12–37.
8. Mills, C. W., 1940. “Situated actions and vocabularies of motive.” American Sociological Review 5: 316–330.
9. Calhoun, C., 1991. “The problem of identity in collective action.” In: J. Huber (ed.). Macro-Micro Linkages in Sociology. Newbury Park, California: Sage: 51–75.
10. Wittgenstein, L., 1953. Philosophical Investigations. New York: Macmillan.
11. Münch, R., and N. J. Smelser, 1987. “A theory of social movements, social classes, and castes.” In: J. C. Alexander, B. Giesen, R. Münch, and N. J. Smelser (eds.). The Micro-Macro Link. Berkeley: University of California Press: 371–386 and 403–404.
12. Collins, R., 1981. “The microfoundations of macrosociology.” American Journal of Sociology 86: 984–1014.
13. Homans, G. C., 1961. Social Behavior: Its Elementary Forms. New York: Harcourt, Brace, and World.
14. Scheff, T. J., 1990. Microsociology. Discourse, Emotion and Social Structure. Chicago: University of Chicago Press.
15. Fischer, G., E. Kirk, and R. Podzun, 1991. “Physikalische Diagnose eines numerischen Experiments zur Entwicklung der grossräumigen atmosphärischen Zirkulation auf einem Aquaplaneten.” Meteorologische Rundschau 43: 33–42.
16. Von Storch, H., 1999. “The global and regional climate system.” In: H. von Storch and G. Flöser (eds.). Anthropogenic Climate Change. New York: Springer Verlag.
17. Starr, V. P., 1942. Basic Principles of Weather Forecasting. New York: Harper.
18. Roebber, P. J., and L. F. Bosart, 1998. “The sensitivity of precipitation to circulation details. Part I: An analysis of regional analogs.” Monthly Weather Review 126: 437–455.
19. Appenzeller, C., T. F. Stocker, and M. Anklin, 1998. “North Atlantic Oscillation dynamics recorded in Greenland ice cores.” Science 282: 446–449.
20. Mann, M., R. S. Bradley, and M. K. Hughes, 1998. “Global-scale temperature patterns and climate forcing over the past six centuries.” Nature 392: 779–787.
21. Hasselmann, K., 1976. “Stochastic climate models. Part I. Theory.” Tellus 28: 473–485.
22. Von Storch, H., J.-S. von Storch, and P. Müller, 2001. “Noise in the climate system – ubiquitous, constitutive and concealing.” In: B. Engquist and W. Schmid (eds.). Mathematics Unlimited – 2001 and Beyond. New York: Springer-Verlag: 1179–1194.
23. Hansen, A. R., and A. Sutera, 1986. “On the probability density function of planetary scale atmospheric wave amplitude.” Journal of the Atmospheric Sciences 43: 3250–3265.
24. Nitsche, G., J. M. Wallace, and C. Kooperberg, 1994. “Is there evidence of multiple equilibria in planetary-wave amplitude?” Journal of the Atmospheric Sciences 51: 314–322.
25. Stehr, N., 2002. Knowledge and Economic Conduct: The Social Foundations of the Modern Economy. Toronto: University of Toronto Press.
26. Giddens, A., 1990. “R. K. Merton on structural analysis.” In: J. Clark, C. Modgil, and S. Modgil (eds.). Robert K. Merton: Consensus and Controversy. London: Falmer Press: 97–110.
27. Merton, R. K., 1975. “Social knowledge and public policy. Sociological perspectives on four presidential commissions.” In: M. Komarovsky (ed.). Sociology and Public Policy. The Case of Presidential Commissions. New York: Elsevier: 153–177.
28. Bauman, Z., 1990. Thinking Sociologically. Oxford: Blackwell.
29. Stehr, N., 1992. Practical Knowledge. London: Sage.
30. Mannheim, K., 1929. Ideology and Utopia: An Introduction to the Sociology of Knowledge. London: Routledge and Kegan Paul.
31. Houghton, J. T., Y. Ding, D. J. Griggs, M. Noguer, P. J. van der Linden, and D. Xiaosu (eds.), 2001. Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Cambridge: Cambridge University Press.
32. Connolly, W. E., 1983. The Terms of Political Discourse. Princeton, New Jersey: Princeton University Press.
4 Scale and Scope in Integrated Assessment: Lessons from Ten Years with ICAM

HADI DOWLATABADI
Sustainable Development Research Institute, University of British Columbia, Vancouver, British Columbia, Canada
Abstract

Scale has traditionally been thought of in terms of the spatial extent and units of observation in a field. This is an excellent convention in the study of physical processes, where scale also differentiates between the dominant forces at play. For example, at the scale of planetary distances gravitation is the dominant force of interaction and the only thing that matters is mass, while at the atomic level electromagnetic forces dominate and the charge of the bodies is critical. In this paper I would like to offer other criteria for scale selection in studies involving the interaction of social and natural systems. The focus here is on integrated assessments, where we hope to understand and capture the interaction between natural and social systems. By applying the same paradigm for scale identification as before, namely identifying the factors that dominate the dynamics and landscape of the system, I would like to persuade the reader that we need to define two additional scales for integrated assessments: one to capture human cognitive processes and another to capture our social organization. The rationale for wanting to add these scales is simple. Awareness of the interface between nature and us is determined by our cognitive processes and by the technologies invented and employed to enhance them. Our ability to act on what we would like to do about that interface is shaped by the way our societies are organized and by the institutions invented and maintained to enhance them.
Acknowledgements

I would like to thank the organizers of the Matrix 2000 Workshop on Scale in Integrated Assessment for offering me this opportunity to think more
systematically about scale issues. I am grateful to Dale Rothman for his comments and suggestions to improve this paper. The research reported here was made possible through support from the Center for Integrated Study of the Human Dimensions of Global Change. This Center has been created through a cooperative agreement between the National Science Foundation (SBR-9521914) and Carnegie Mellon University, and has been generously supported by grants from the Electric Power Research Institute, the ExxonMobil Corporation, the American Petroleum Institute and Resources for the Future. All views expressed herein are those of the author, and all remaining errors are a reflection of his fallibility.
Introduction

Often it is asserted that human activity is leading to environmental impacts of unprecedented scale. Here, scale presumably means phenomena that extend through space and time. I am not sure, though, that such statements are helpful in the analysis of our interactions with nature. In absolute terms, these impacts are more extensive today than ever before. This is because: (a) many of our activities grow ever more synthetic, (b) measurements of change in the environment are more sensitive, and (c) our more sophisticated understanding of the underlying processes leads us to attribute these changes to our own actions. For example, we invented CFCs as a miracle industrial gas and fluid. No such chemical existed in nature before, and for three decades we used it without any knowledge of its potential environmental impacts. During that period, our growing understanding of environmental science allowed us to speculate about, and later measure, the impact of CFCs in the stratosphere above Antarctica! Yes, our reach is global (as we understand the term “global” today). In relative terms, history is replete with episodes of human action whose impacts extended to the boundaries of the contemporaneously known universe. Today, we know a good deal more about the earth. For Paleolithic villagers, the impact may have been limited to the valley in which they had lived for generations. Cognitively, these two cases are not differentiable in terms of “the known interface” between humans and the environment. What matters is whether what we know influences what we do. In general, while our impact is rarely limited to the space we know, the impacts we neither see nor postulate to exist rarely, if ever, limit our actions. Furthermore, there is little doubt that our actions today have impacts beyond the space/time we know. For example, the earth is an extremely bright source of radio emissions in the sky.
Our radio, TV and telephone conversations are finding their way to the farthest reaches of the universe. The earliest TV signals are already more than 60 light years away. If and when we grow sensitive to the impact of these stray emissions into the electromagnetic spectrum of deep space we may choose to control these also.
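The “60 light years” figure is simple arithmetic: radio waves travel one light year per year, so a signal broadcast in year y has, by year t, travelled t − y light years. A minimal sketch (the circa-1940 start of regular TV broadcasting is an approximation):

```python
# Distance reached by the leading edge of early TV broadcasts.
# Radio waves cover one light year per year of travel.
FIRST_REGULAR_TV_BROADCASTS = 1940  # approximate; regular services began in the late 1930s

def signal_distance_ly(current_year, broadcast_year=FIRST_REGULAR_TV_BROADCASTS):
    """Light years covered by a signal broadcast in `broadcast_year`."""
    return current_year - broadcast_year

# Around the time of writing (2003), the earliest signals were over 60 light years out.
print(signal_distance_ly(2003))
```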
Meanwhile, there is nary a concern about such impacts. In summary, decisions about our patterns of activity are taken in recognition of what we know at any given time. This is an irrefutable aspect of humanity. The scale of our knowledge about change is determined by our cognitive capacity.

Even though I find the above argument persuasive, others would argue that there is a fundamental difference between then and now: today, we are witnessing motivations for human activity that extend beyond local geographic scales, and there is an imbalance between “global demands” and the “local capacity to provide” in a sustainable manner. This too has historical precedent. The empires aggrandized in history books are all about institutions that imposed control over far-flung resources. Arab, Ashante, Aztec, British, Chinese, Greek, Mayan, Mogul, Persian, Portuguese, Soviet and Spanish empires conquered vast areas of the known world, extending trade networks, imposing belief systems and collecting resources. Did they consider the impact of their demands on local systems and their viability? What is the difference between these historic empires and our current concerns about the General Agreement on Tariffs and Trade (GATT)? Economic superpowers and global markets, even when well-meaning, embody both a promise and an ill wind that arise from mismatches in the scales of pressure and response.

The scale of human activity is determined by the characteristics of social organizations and institutions. There is no prerequisite that our knowledge of change (cognitive scale) and our knowledge of manipulation (organizational scale) be spatially the same or dynamically in harmony. Scale mismatches are part and parcel of the struggle of living on earth as social beings. Exploiting these mismatches is a major determinant of heterogeneity between and within different societies. These heterogeneities are beyond those that can be explained in terms of each community’s endowments.
Utilization of these resources imposes different collateral effects (externalities) on individuals, households, neighborhoods, communities and regions. Forces external to a region are increasingly the determinant of demand for its resources. These tensions lead to an imbalance of pressure and response in different regions. Consequently, winners and losers emerge. These differences in outcomes have now been shown to be linked to how “happiness” and “welfare” are experienced. Here again, our cognitive and perceptive functions determine how we evaluate outcomes. The happiness of individuals and communities is defined relative to outcomes for others. Thus, in these societies, happiness is essentially being a winner relative to others, even when the world as a whole or a neighboring society may be worse off.

There is a clear pathology implied in the above paragraph. It suggests that even awareness of adverse impacts on nature does not necessarily lead to wise action on the part of humans. This is not simply a restatement of the tragedy of the commons. It is one that associates the individually important phenomenon of “winning in relative terms” with actions that can
knowingly lead to the overall deterioration of the commons. By extension, we can fuel the fire of concern about systemic collapse – social, economic and/or ecological. My immediate reaction to such worries is that if humanity is so afflicted, then why worry about its demise? On a more positive note, however, I believe catastrophic outcomes are likely only under two circumstances: (a) where the system has become so homogeneous that the same affliction can spell doom for all, or (b) where the different components of the earth system are so strongly interconnected that the failure of one will lead to the collapse of all.

Winners and losers (be they ecological or social) define heterogeneity within a system, and hence resilience to the first type of challenge. The interacting elements of the system (society, ecology, …) lead to ever-changing conditions, but it is their self-perpetuating interactions that lead to an identifiable system being created. Therefore, change and heterogeneity are part and parcel of a more stable system, even though individual state variables may be transitory and unstable. In other words, while Welsh coal-mining communities are a feature of our past, the villages are still there, the people are engaged in different activities, and the ecology is again flourishing, with species that are sensitive to mine runoff returning.

Interestingly enough, even though systemic collapse on a local scale is all around us, we seem to be expending more effort on projecting and forestalling such futures than on doing something about them now. This could be due to a number of reasons, among them: (a) collapse being more dreadful when unknown and more easily adapted to when in progress; and (b) collapse being precipitated or marginalized by institutional dynamics. These simple observations about our cognitive and institutional capacity have led me to be skeptical about our ability to address climate change in a substantive fashion.
Climate change is the poster child of distant and uncertain concerns. Somehow, it is hard to imagine us finding a solution there when we continue to fail to remove the familiar blights of persistent hunger and trampled human rights.
Lessons from Developing ICAM

I do not claim any knowledge about how the issue of scale has been treated in the literature. This paper is about a personal journey in integrated assessment of climate change. One might suspect that a global scale is all that such an assessment would be focused on. However, the nature of the problem – a global concern with differentiated local implications – has led to successive iterations in which we redefined which questions to consider and which solutions to explore. Over the period in question, the problem was redefined four times. Each redefinition had implications for the scale and scope of the analysis – see Table 4.1. This iterative approach to problem solving has been discussed by Root and Schneider [1]. I am now convinced that for problems like climate change, a “right scale” of analysis, based on space and time, does not exist. I believe that the cognitive and organizational aspects of our societies act locally, but their “location” is
not coterminous with a spatial definition of locality. I believe that today, the high degree of exchange between geographic locations has led to shared goods, services, ideas, and social norms being the defining characteristics of a “locality”.
Table 4.1: Four successive generations of ICAM

ICAM 0
  Problem characteristics:
  ■ Uncertainty in outcome
  ■ Uncertainty in efficacy of control measures
  ■ Subjective perspectives on costs and benefits of control
  Critical factors shaping outcomes:
  ■ Subjective views on costs
  ■ Subjective views on benefits

ICAM 1
  Problem characteristics:
  ■ Outcomes are simulated as a consequence of parametric uncertainty
  ■ Regional differences in driving variables, manifestation of climate change and impacts of that change
  Critical factors shaping outcomes:
  ■ Two geographical domains (high and low latitudes) used to depict differences in demographics, economics, and market/non-market impacts of climate change

ICAM 2
  Problem characteristics:
  ■ Outcomes are simulated as a consequence of parametric and structural uncertainties
  ■ Regional differences due to aerosol emissions and the climate change that would ensue
  Critical factors shaping outcomes:
  ■ 12 world regions whose boundaries are defined by aerosol transport
  ■ Differences in energy resources
  ■ Differences in pollution control
  ■ Differences in land use

ICAM 3
  Problem characteristics:
  ■ The key question is no longer what range of outcomes can happen but what institutional framework can steer us clear of pathologically bad futures. This is explored through simulation of interactions between three groups: those who worry about nature, those who worry about development and those who are entrusted with policy implementation. These simulations explore the consequences of specific institutional frameworks.
  Critical factors shaping outcomes:
  ■ Sensitivity to control cost signals and economic disruptions
  ■ Sensitivity to impacts attributed to climate change
  ■ Institutions for social interactions and for monitoring and managing the interface with nature
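For readers who find the progression in Table 4.1 easier to scan programmatically, the four generations can be captured in a small data structure. This sketch is purely illustrative: the field names are mine, and the regional count for ICAM 3 is assumed to carry over from ICAM 2 (the table does not state it).

```python
from dataclasses import dataclass

@dataclass
class ICAMGeneration:
    """One row of Table 4.1: the scale and scope of an ICAM generation."""
    version: int
    regions: int            # spatial resolution used by that generation
    uncertainties: tuple    # kinds of uncertainty explicitly represented
    focus: str              # the dominant question of that generation

GENERATIONS = [
    # ICAM 0 had no spatial scale: a single global region.
    ICAMGeneration(0, 1, ("parametric", "subjective perspectives"),
                   "costs and benefits of control under uncertainty"),
    ICAMGeneration(1, 2, ("parametric",),
                   "regional differences between high and low latitudes"),
    ICAMGeneration(2, 12, ("parametric", "structural"),
                   "regional differences driven by aerosol transport"),
    # Regions for ICAM 3 assumed unchanged from ICAM 2.
    ICAMGeneration(3, 12, ("parametric", "structural", "institutional"),
                   "institutional frameworks that avoid pathologically bad futures"),
]

# The spatial scale grows finer, then holds, across generations:
print([g.regions for g in GENERATIONS])  # [1, 2, 12, 12]
```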
In this paper, I plan to offer some insights about scale and integrated assessment through examples from three generations of ICAM modeling. In these examples, I will try to illustrate the added value of cognitive and organizational scales as adjuncts to the more familiar time and space scales. I will present:

■ Cognitive scale issues, including how we define the relevant questions and the scales of analysis adopted for examining different options,
■ Organizational scale issues, including which aspects of the dynamics of the system are represented.
I conclude this paper with a plea for the design of integrated assessments that reflect the different scales at which human cognition and organizations operate. I believe these to be fundamental scales at which we perceive changes in the world around us and translate our desires for its manipulation and preservation into specific actions and interventions.

Questions, scope and scale

It may sound trite, but in any research endeavor it is important to have a clear question in mind. In pure research, the question is gradually crystallized as one gropes around for understanding.1 By contrast, in applied research, the objectives defined by a specific question identify the scope and scale of the analysis that should be undertaken. Often, we researchers fall between these two modes of inquiry and have difficulty defining the question appropriately. Failing to define the appropriate question often goes hand in hand with the research findings being ignored.

I have tried to keep the research underlying ICAM relevant by repeatedly revisiting the dual questions: What is good climate change policy? What is good policy if climate changes? The redeeming feature of these questions is that they are a constant reminder of the context in which climate change is taking place. In the examples offered below, I try to show how context dramatically changed the nature of the insights that could be gained from the study of climate change issues. However, while I thought this approach would provide the most useful information applied science could offer, it ignored whether such information was in demand by decision-makers. It also ignored whether such information would be easy to digest once demanded.

Three factors distinguished integrated assessment of climate change at Carnegie Mellon University (CMU), and my experiences over the past decade:

■ We had little expertise in climate change science.
■ We tried to define the relevant questions from a decision-analytic basis before starting the research program.
■ We insisted on characterizing the uncertainties hampering informed and assured decision-making.
However, we were fortunate in having a team of colleagues who had substantial experience in developing integrated assessments, starting in 1981. We were also blessed with a strong research program on public perceptions of risk and systematic risk communication. Our department had cut its eyeteeth on problems such as acid rain [2, 3], local air pollution [4], and radon and electromagnetic fields [5, 6, 7, 8, 9]. These differences in starting points spared our effort from a common misstep in climate change research: a narrow focus on climate change that ignores other forces of change acting on similar or shorter time scales.

At first blush, a lack of domain expertise is a fatal flaw in any assessment effort. This would have been true under two conditions: (a) if we had been shy about seeking out expertise when needed, or (b) if we had been unable to identify when we needed such expertise. However, by not having experts in climate science, global economics, or ecological and ocean dynamics, we were not obliged to shape our effort to include a pre-existing set of models or to adopt their natural scales into our integrated assessment. The advantage of not having to start with pre-existing models developed to address questions that differed in scope and scale can hardly be overstated. Of course, the drawback has been that by not adopting pre-existing models we have had a more difficult time persuading domain experts that the features of their knowledge relevant to climate change decision-making are faithfully reflected in our integrated assessments.

While an initial question capturing the challenge of climate change policy is simple to pose, its refinement into the concerns of different stakeholder groups is critical to adopting an appropriate scale of analysis. This process of identifying the relevant questions involved many person-years of effort and took over six months to complete. In this exercise we explicitly identified different stakeholders and their varying ethical and political stances on climate change and climate policy (see Table 4.2).

1 Just consider any thesis or research project you are familiar with. Did it start with the questions that it finally answered, or did the questions that were answered emerge from the process of doing the research? In my case, the research has always revealed questions that were unknown to me before the research was started. Furthermore, these questions needed to be answered before the initial objective could be addressed.
Thus, we defined a scope and scale for the problem that were not limited to dollar denominations of costs and benefits. This broader definition of the variables relevant to decision-making and policy formation took our effort beyond reliance on economic “solutions” to the climate change problem. The next step was to figure out how different social and natural processes interacted. This was accomplished by developing influence diagrams of increasing sophistication, through which we explored scale and scope issues relevant to climate change and its context. These influence diagrams were used to explore how a snapshot of interactions would differ between short-term and long-term interactions, and how a diagram of regional interactions would differ from one with a global focus. Figure 4.1 reproduces an influence diagram proposed by Granger Morgan and refined by the research team late in 1991.
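Before any quantitative modeling, an influence diagram of this kind can be prototyped as a plain directed graph and queried for feedback paths. The sketch below paraphrases a handful of Figure 4.1’s linkages; the node names and edge list are my rough simplification, not the full diagram.

```python
# A toy encoding of (part of) the Figure 4.1 influence structure.
# Nodes paraphrase the diagram's domains; the edge list is a simplification.
INFLUENCES = {
    "perception":          ["valuation"],
    "valuation":           ["resource allocation"],
    "resource allocation": ["R&D", "development", "abatement",
                            "adaptation", "geoengineering"],
    "abatement":           ["climate system"],
    "geoengineering":      ["climate system"],
    "development":         ["environment"],
    "adaptation":          ["environment"],
    "climate system":      ["environment", "perception"],
    "environment":         ["perception"],  # impacts are picked up by perception
    "R&D":                 ["perception"],  # learning changes what we perceive
}

def reachable(graph, start):
    """Every node influenced, directly or indirectly, by `start`."""
    seen, stack = set(), [start]
    while stack:
        for nxt in graph.get(stack.pop(), []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# Changes in climate and environment eventually feed back into the
# human decision-making apparatus:
assert "perception" in reachable(INFLUENCES, "abatement")
```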
Table 4.2: Questions about climate change policy (developed during the first 6 months of the integrated assessment program)*

1. What policy issues will drive the evolution of the climate change problem?

How big an issue is climate change?
1. What is the relative importance of climate change issues, compared with other issues, faced by groups around the US and the world?
2. What is the likely relative weight in US and global decision making of those groups for which climate issues are of significant importance? (The answer allows us to identify “key groups” in the climate change issue.)

What are the alternative responses that might be used in dealing with climate change?
3. What options exist for avoiding or limiting changes through reducing the emissions of greenhouse gases and/or reducing anthropogenic changes to albedo and standing bio-mass?
4. What options exist for geoengineering to avoid undesirable climate change while continuing current loadings?
5. What options exist for adapting to climate change?
6. What are the relative advantages and disadvantages of alternative responses and their implementation strategies to “key groups?”
■ What is the ethical acceptability of each to “key groups?”
■ How well are the economic costs and risks of each known?
■ How well are the political, social, ecological and other non-economic costs and risks of each known?
■ What are the prospects for reducing key uncertainties about costs and risks of each through research?
■ How well are the economic benefits of each known?
■ How well are the political, social, ecological and other non-economic benefits of each known?
■ What are the prospects for reducing key uncertainties about benefits of each through research?
■ What are the most attractive options for each “key group” if groups must act alone?
■ What are the most attractive options for each “key group” if groups could act collectively?

What choices will and should be made?
■ What are the most likely policy responses for each “key group” given their current “decision-making culture?”
■ What are the opportunities for improving individual or collective outcomes through various policy interventions?
■ If collective action is required, how is it best achieved?

2. What are the determinants of how various “key groups” value the effects of climate change and of possible policy interventions and reach decisions about them?

What ethical framing does each “key group” apply in addressing these issues?
■ What is mankind’s relation to nature, to time, etc.?
■ To what domain of issues is “economics” considered applicable? What issues are framed in terms of rights, duties, etc.?
■ What are the conflicts and contradictions between the ethical framing adopted and the constraints of physical reality?

What views does each “key group” hold about the nature of the climate and earth/biological system?

How is economic analysis done? How should it be done to be consistent with the group’s basic ethical framing?
■ How are various things priced or otherwise valued? How should they be?
■ How are things valued over long time periods? How should they be?
■ How is aggregate economic performance measured? How should it be?
■ How are large (non-incremental) changes evaluated? How should they be?

What are the mechanisms for collective decision-making and dispute resolution?
■ How are decisions made within the group?
■ How are decisions made between groups?
■ What options exist to “improve” collective decision-making and dispute resolution?

3. What are the human activities that can modify climate? What are the emissions and activities of concern? How are they distributed geographically? How much have they changed in the past? How might they change in the future (absent any intervention)? How do they compare to non-anthropogenic emissions?

4. What changes in climate will occur? What are the measures of global climate change? How do greenhouse gas emissions cause changes in these measures? What factors affect the rate of change? Has change already resulted from human activity? What magnitude of changes might result from increases in future emissions?

5. What effects will this climate change bring? What is the magnitude of sea-level rise that might occur? What is the effect on agricultural crops? What is the effect on managed and unmanaged ecosystems? What is the effect on water supplies? How do the above effects change the prospects for (a) goods and services, (b) human health, (c) how time is spent, (d) the physical environment, (e) social justice, and (f) other social interactions?

* Note that in contrast to many lists of “climate questions,” this list begins with policy issues and moves on to matters of straight science only in its later stages.
Figure 4.1: A global influence diagram of the climate change problem (see textbox for further explanation).

This framework treats the problem as being divided into four broad domains. These domains are, from left to right: the setting of the climate problem; the apparatus of human decision making and sustenance; the options available for dealing with the climate change issue; and the natural system.

• The setting of the climate problem has five major components:
i) The stock of knowledge, science, and technology.
ii) The values held about change in various elements of the human and natural systems.
iii) The structure of social institutions and decision processes.
iv) The stock of resources, a “god-given endowment.” At any given time only a fraction of this is available to the allocation model; the size of this fraction is determined by the state of our knowledge and technology.
v) Acts of god, which occur in both the natural and the human systems and can be characterized as volcanic eruptions and revolutions, respectively.

• The human sustenance and decision-making apparatus is made up of three elements:
i) Humans perceive phenomena, understand them and identify potential problems/hazards.
ii) The latter initiate an enquiry, which requires assessment of a value for the various consequences being pondered.
iii) The valuations dictate a paradigm (and possibly a strategy) for resource allocation.

• The resources available can be distributed among the various options based on the valued consequences and the current state of knowledge. The options available at any given time fall into five categories:
i) Invest in more R&D and learn about the potential problem and possible solutions.
ii) Continue with the use of resources and economic development.
iii) Adopt a GHG and land use change abatement policy, so that the magnitude of climate change can be kept in check.
iv) Adopt a strategy for adapting to climate change of a given magnitude.
v) Engage in geoengineering options designed to keep climate parameters of consequence within prescribed bounds.

• These actions will all have impacts on the natural system. Here this system is divided into two elements:
i) The climate system.
ii) The environment.

It should be noted that changes in the climate system and the environment are either directly or indirectly (via economic impacts of change) picked up by the human decision-making and sustenance apparatus.

Finally, decision analysis requires an understanding of the uncertainties relevant to each choice. We spent much of our first year recognizing that there were parametric and structural uncertainties in our understanding of the natural and social systems that make up life and its environment on earth. We also noted variability among different actors in how different aspects of life are valued. These uncertainties were so large that we spent the first eight months of the project wondering whether a quantitative model of climate change would serve any useful purpose. However, we as a group felt more comfortable using quantitative rather than qualitative modes for expressing our ideas, and we did build a series of models to explore the useful and feasible features of such models. By this stage we were ready to discuss an iterative approach to integrated assessment that would start with the simplest framing that captured the relevant features of the problem and cycle through different iterations to refine the information needed to answer specific questions [10].

Having defined the scope of our research through these questions, we had to adopt a scale at which to study them. An overwhelming feature of the questions listed in Table 4.2 is that climate change and its impacts, as well as the various response policies we may adopt (i.e., adaptation, mitigation and geoengineering), will be manifested in different
ways in various regions of the world and perceived differently by different people. This heterogeneity in outcomes, and in perceptions of these outcomes, has dictated the scale adopted for ICAM. Our research group’s conclusions about the scale and scope of the climate problem were first published in reaction to the analyses already in hand. The most prominent studies of climate change and policy up to 1991 featured:

■ unitary global actors,
■ time horizons spanning a century or more,
■ no admission of uncertainties in the understanding of possible climate change processes, and
■ no admission of how options to abate greenhouse gas emissions, adapt to climate change impacts or engineer the climate system would be valued by different interested parties.
We were certain that it would be erroneous to start an integrated assessment with these assertions [11, 12], and hence developed a framework explicitly representing differences in subjective perspectives on the costs and benefits of climate policy and uncertainties in climate science [13]. In this particular study (ICAM 0), the scale at which the problem is resolved is that of the values humans bring to climate change mitigation and climate change impacts.

Perceptions, impacts and adaptation

The most important driver of our evolving perspective on scale has been the issue of subjective perspectives and heterogeneity of experience. In other words, even though we are considering global climate change, different locations will experience different changes to their climate, and different people will evaluate the desirability of the same change in their climate differently. The initial steps to reflect these realities in integrated assessments led to the Integrated Climate Assessment Model (ICAM) 0. In this version, while climate change was globally homogeneous, nine different perspectives could be entertained on climate policy costs and climate change impacts [13]. We found that subjective perspectives dominated scientific uncertainties in such analyses.

Today, a decade after we first framed this question, while scientific opinion has gradually solidified around the reality of anthropogenic climate change, the policy divide has remained intractable. There are those who believe mitigation is more costly than any realizable benefits, while others believe the benefits are so large that draconian controls must be undertaken immediately. There is a fundamental polarity of opinion on these issues. This polarity drives the dynamics of any policy initiative regarding climate change. While our efforts have been aimed at providing ever higher resolution scientific models of climate change, we do not know how such increased detail affects the perspectives of different individuals.
What we need is a better understanding of how additional information may change the positions adopted by the different parties. In a democracy, the challenge we face is to provide credible
and targeted information to move the debate forward. Finer geographic scale simply leads to modeling results (regardless of relevance or accuracy) being believed more readily by the public, due to their realistic depiction of familiar geographic outlines.

In ICAM 1 we allowed for different manifestations of climate change in a world with two regions (low latitudes and high latitudes), even though the people in each region had similar judgements about climate policy and climate change impacts [14]. Since that time, successive generations of ICAM have had finer spatial and temporal scales, primarily to increase the accuracy with which specific spatial heterogeneity is reproduced. Fundamentally, however, the original insights of ICAM 0 (which had no spatial scale) prevail – people’s subjective perspectives dominate scientific uncertainties in choosing an appropriate climate policy.

Beyond the insights from ICAM 0, I have come to recognize that a focus on subjective experiences should dictate a much finer scale of resolution for realistic analyses of the issues at hand. Cognitively, change will be experienced on a local level. Slow trends may never be consciously noted and are likely to be swamped by our efforts to adapt to the large and inevitable changes we all undergo – e.g., aging. In other words, while we age, our perception of what is changing in our local and more distant environment, and whether that change is desirable, evolves through time. For example, we recall much higher snowfall from when we were children (partly because we were shorter). We also tend to grow less tolerant of cold weather as we age, and reconsider the ideal climates to live in. One of our team members, Shane Frederick [15], explored how much people think they change as they age. For example, he asked 15 year olds how similar they thought they would be to their current selves when 50, and he asked 50 year olds how similar to their current selves they were as teenagers.
His findings show that people expect significant changes in their personal activities and preferences over time. Many believed that they would retain less than half of their personality traits over the span of three decades [15]. These levels of personal change are clearly far larger than the environmental changes that we seek to forestall or prepare to adapt to. Therefore, it is not unreasonable to expect almost sub-conscious adaptation to environmental trends that take place more slowly than the personal changes we inevitably experience. I am not suggesting that all slow changes in environmental conditions go undetected, but some are likely to be lost in the noise.

Furthermore, when the underlying factors shaping environmental conditions are complex, the interpretation of detected changes is open to misinterpretation. For example, changes in local land cover and air pollution affect local climates. In Pittsburgh, average annual temperatures fell by 3 °C from 1950 to 1970 and rose again by a similar amount from 1970 to 2000. Much of the public surveyed was perhaps too young to have experienced the earlier cooling trend but readily noted the warming trend. Furthermore, they readily attributed this warming to
“global climate change.” It is more likely, though, that the warming in Pittsburgh is due to much lower atmospheric concentrations of sulfate aerosols, brought about by the demise of the local steel and coke industries and the successful implementation of the Clean Air Act. Correct attribution is critical to taking appropriate actions. Today, our institutions have prepared the public to worry about climate change. Various phenomena will be tied to climate change by the public whether or not such conclusions are scientifically valid.

This issue is even more pronounced for extreme events, which leave lasting psychological impressions. Extreme events are easily detected and often attributed to phenomena that the public psyche has been primed to accept. In polytheistic cultures, the wrath of this or that god comes in response to some human misdeed. Today’s sin is often assumed to be our despoiling of the environment, but the wrath remains the same: storms, floods, droughts and so on. Studies of the available evidence do not support the notion that extreme events have changed dramatically in their frequency and severity over the past century [16]. Nevertheless, over the past decade extreme events have routinely been attributed to anthropogenic climate change. Our social organizations have adopted climate change as their favorite explanation for any detected change in adverse environmental conditions. Furthermore, improved newsgathering and dissemination has allowed information about extreme weather events (which are essentially local in nature) to be broadcast worldwide. This has blurred geographic factors (which have a significant bearing on the possibility of specific extreme events occurring locally) and contributed to the sense of public dread about climate change. This provides further evidence of the cognitive dimensions of the problem dominating geography.
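The attribution trap in the Pittsburgh example is easy to reproduce. The sketch below builds a synthetic temperature record with the cooling and warming magnitudes cited above (baseline temperatures and noise level are invented for illustration) and shows that a naive trend fit readily detects the recent warming while saying nothing about its cause.

```python
import random

random.seed(42)

# Synthetic Pittsburgh-like record: 3 degC cooling 1950-1970, 3 degC warming
# 1970-2000 (magnitudes from the text), plus year-to-year noise.
def mean_temp(year):
    if year <= 1970:
        return 11.0 - 3.0 * (year - 1950) / 20.0  # cooling phase
    return 8.0 + 3.0 * (year - 1970) / 30.0       # warming phase

years = list(range(1950, 2001))
temps = [mean_temp(y) + random.gauss(0.0, 0.3) for y in years]

def trend_per_decade(xs, ys):
    """Ordinary least-squares slope, expressed in degC per decade."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return 10.0 * sxy / sxx

recent_years = [y for y in years if y >= 1970]
recent_temps = [t for y, t in zip(years, temps) if y >= 1970]
slope = trend_per_decade(recent_years, recent_temps)
print(f"1970-2000 trend: {slope:+.2f} degC per decade")
# The fit detects warming, but nothing in it distinguishes greenhouse
# warming from the removal of local sulfate aerosols: detection is easy,
# attribution is not.
```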
Social institutions play a role not only in our interpretation of environmental changes, but also in the dominant modes of response. As noted above, there is little statistical support for the notion that extreme weather events have increased in their frequency or severity. There is little doubt that they will persist whether there is a climate treaty or not. Sadly, the global response to these events has not been a call for better warning and response measures for dealing with extreme weather events (especially in less industrialized countries), but a clarion call for the mitigation of climate change. Schelling [17, 18, 19] is eloquent in arguing for the promotion of development as the most effective measure to address the vicissitudes of climate. However, environmental activists use extreme events to feed the fires of our remorse for being insatiable consumers of the earth’s exhaustible bounty.

Integrated assessments are about how problems are framed and solutions explored. The scale of analysis needs to reflect who is active in proposing interventions to protect life and property and what measures they are promoting. An early warning and response system needs to be adapted to local geographical conditions and be suitable for implementation using local resources. A global carbon dioxide control strategy rarely considers factors beyond the large players in the energy markets.
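The ICAM 0 finding discussed earlier, that subjective perspectives dominate scientific uncertainty, can be illustrated with a toy Monte Carlo calculation. The damage distribution, the three "perspectives" and their weights below are all invented for illustration; they are not ICAM's actual parameterization.

```python
import random

random.seed(0)

# Parametric (scientific) uncertainty: the avoided damages from control are
# uncertain; a lognormal spread stands in for the science.
def sample_avoided_damages():
    return random.lognormvariate(0.0, 0.5)  # median 1.0, long upper tail

# Subjective perspectives: how each camp weighs damages against control
# costs. The camps and their numbers are invented for illustration.
PERSPECTIVES = {
    "control too costly":  {"damage_weight": 0.3, "control_cost": 2.0},
    "middle of the road":  {"damage_weight": 1.0, "control_cost": 1.0},
    "damages intolerable": {"damage_weight": 3.0, "control_cost": 0.5},
}

draws = [sample_avoided_damages() for _ in range(10_000)]

mean_net_benefit = {
    name: sum(p["damage_weight"] * d - p["control_cost"] for d in draws) / len(draws)
    for name, p in PERSPECTIVES.items()
}

for name, m in mean_net_benefit.items():
    print(f"{name:20s} mean net benefit of control: {m:+.2f}")
# Every camp faces the same parametric uncertainty, yet the *sign* of the
# conclusion flips with perspective: the perspective spread dominates.
```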
SCALING IN INTEGRATED ASSESSMENT 65
Sea level rise revisited
One of the more thoroughly studied impacts of “global warming” is sea level rise. Global mean sea level is expected to rise due to thermal expansion and a net release of water from glaciers worldwide. There is no doubt that rising sea level will inundate low-lying lands.2 However, there are three processes of local importance that will define our capacity to adapt to sea level rise. The scale and scope of climate change assessment must be sufficiently fine and broad to capture these processes. Otherwise, the information provided to decision-makers will be erroneous. These three processes are:
■ Factors affecting relative sea level in specific locations (physical and cognitive)
■ Factors affecting local development of coastal areas (cognitive and institutional)
■ Factors affecting recovery from storms (cognitive and institutional)
Figure 4.2 depicts local relative sea level for four different locations. The differences between these are staggering. The local sea level is rising 18 times faster near Bangkok than near Mumbai. The reason behind the dramatically faster local sea level rise near Bangkok is fresh water withdrawal and surface water management causing land subsidence. It is clear that in this case sea level rise due to climate change is a secondary issue. Rapidly growing demand for fresh water is not unique to Bangkok; the same challenge exists in many low-lying islands and coastal areas. The problem is local, but the condition is repeated globally. Thailand is in the process of implementing programs promoting water table recharge to slow local subsidence. It is not clear that the institutions to respond are in place in other locations encountering the same challenges. Coastal developments are another example of a local phenomenon globally repeated. Safe harbors gave shelter from storms and made it possible to sustain a fishing effort or establish trade centers at the mouth of inland waterways. If a location was not safe, it could not prosper long. Now, shorelines are being developed for leisure and housing. Many of the locations being developed today were not developed historically because of their vulnerability to storms. Unfortunately, developers rarely consider this in siting new homes and communities. Their goal is to develop property, sell it and move on. Their only exposure is the risk of being hit by a storm between laying the foundation and handing the keys to an unsuspecting newcomer. This clearly increases exposure to extreme events on the coasts, but again, the factor that is leading to rapid increases in risk is not climate change but ill-considered location of new developments on the coast.
2 There is also no doubt in my mind that we will not implement a climate policy that will save the populations and resources at risk from sea level rise in the next few decades.
66 SCALE AND SCOPE IN INTEGRATED ASSESSMENT
Figure 4.2: Local trends in relative sea level. Figure 4.2a shows that the long-term trend in monthly tide-gauge data for Mumbai suggests a relative sea level rise of 0.7 mm per year, although the raw data suggest two periods of relative stability with more rapid changes in sea level during periods of transition from one level to another. Figure 4.2b shows tide-gauge data for Churchill, where the continuing glacial rebound of the Canadian Shield is argued to be leading to a relative fall in sea level. Figure 4.2c shows that tide-gauge data for Kobe, Japan record rapid changes in relative sea level, with the trend changing sharply in the wake of the massive earthquake suffered there in 1995. Figure 4.2d shows tide-gauge data for Pom Prachum in Thailand, where surface water management in Bangkok and withdrawals from the water table have driven a much more rapid rise in local sea level over the past four decades. The rate of change in sea level here is 18.5 times that in Mumbai. Data: Permanent Service for Mean Sea Level, UK.
The news is not all gloomy. The ability to forecast storms and to move people out of harm’s way has improved dramatically over the past century. From 1900 to 1910, more than 8000 people perished in storms pounding the coastline of the United States. By the 1990s, the death toll had fallen to roughly one-fiftieth of that figure. Meanwhile, property damages quintupled from $4 billion in the 1930s to $20 billion in the 1990s, because so much more property had been placed in harm’s way in coastal developments.
How much damage is inflicted by storms in the future is dictated by how much of our resources we place in harm’s way. This is a repeated game. Each storm brings information about places where property is at higher risk, and we can reduce future exposure to this risk by not rebuilding there. This iterative process of locating our developments where they are safe from storms is key to reducing the impacts of future sea level rise. In our studies of impacts and adaptation we have learned that where it is possible to create institutions to “learn and respond to natural extremes,” impacts from climate change can be significantly reduced [20]. We have learned three important lessons from this work:
■ Local development patterns determine the initial conditions and exposure to risk. There is no reason to believe that current development patterns are optimal in their reduction of risk to coastal dwellers.
■ Local regulations governing recovery from storm damage determine the persistence and cumulative damage from storms through time. Where storms send strong signals of inappropriate development, rebuilding is unwise. If regulations limit such rebuilding, total damages from storms and sea level rise combined can be reduced by an order of magnitude over a century time-scale.
■ If rebuilding regulations prohibit rebuilding in risky locations, cumulative damages from small storms far exceed those from large storms. Should climate change lead to more extreme storm events, the long-term impacts in coastal areas will be lower.
These insights all point to the role our perceptions play in the design of institutions created to address our concerns. We worry about households who suffer the impact of coastal storms and riverine flooding, but create institutions that often help them rebuild their property in harm’s way. We worry about the impact of more severe storms, but it is the small and frequent storms that inflict the greater cumulative damage. Perceptions are critical to our ability to recognize what contributes to the risks we face and how best to reduce these. Climate change impact assessments need to be developed with a scale and scope appropriate to capturing the essential features of human perception of natural events and their impacts, and how best to limit their initial impacts and recover from their consequences.
Energy markets and technological progress
I would like to use energy markets and technical change to highlight the issue of co-existing and competing scales of organization as a fundamental feature of social systems. The single most important factor in determining future atmospheric concentrations of greenhouse gases is technical change. Technical change determines the pattern and extent of economic activity. Technical change determines the types and magnitude of resources we harness to meet economic needs and our expectations about lifestyles. Climate policy is our attempt to influence
the direction of technical change so that a given level of economic activity can be achieved at lower levels of greenhouse gas emissions. Interestingly, though, technical change is more often than not treated as an exogenous factor in studies of climate change [21, 22]. The inadvisability of treating technical change as an exogenous variable aside, scale plays a significant role in how technical progress evolves. In ICAM-3, technical change has been formulated as an endogenous process [23]. I am a believer in the old saying “necessity is the mother of invention,” and therefore I believe that purposive technical progress is brought forth to solve a perceived problem. Scale enters the picture because of the way in which I believe technical progress is diffused. For example, whenever energy prices rise, technical change is unleashed to come up with a solution. But there are at least two solutions to this challenge: a) discovery of lower cost ways to produce energy, and b) search for more efficient ways of using energy. I believe that the evolution of the pattern of energy use is then shaped by competition between technological innovation and diffusion on the supply side and technological innovation and diffusion on the demand side of energy markets. There are, however, significant differences in organizational coherence on the two sides of this market. This difference in organizational scale leads to a particular pattern of dynamics that needs to be taken into account when considering long-term policies affecting energy use. Energy supply is a fairly concentrated and large-scale activity. Energy costs are the primary concern of this industry and its innovative activities are unlikely to be captured by other concerns. Therefore, innovations are directly aimed at improving energy discovery and production and are rapidly adopted when needed. On the demand side, however, energy is used in order to gain a large variety of services and labor savings.
For the innovator and adopter, the services delivered are the primary concern and energy use is secondary. Furthermore, the scale at which technology is adopted is household-by-household and business-by-business. Therefore, adoption is a far slower process and rarely motivated by energy (or carbon) saving considerations. This leads to a particularly interesting dynamic process of technical progress and diffusion in energy markets. In the wake of a crisis that raises energy prices, there is innovative activity in both the supply and the demand technologies. However, the more rapid adoption of technical breakthroughs on the supply side leads to more plentiful supply (of a resource or its substitute) and lower energy prices. One example of such technical breakthroughs is in oil drilling and production. We are now able to direct the drilling process in any desired direction. When this capability is combined with monitoring of chemical gradients in the well, the drill can be piloted towards the smallest of reservoirs. This has permitted economic oil recovery from reservoirs previously considered too small to exploit or even include in reserve assessments. Rapid adoption of technology on the supply side often lowers energy prices before the technologies promising better end-use energy efficiency are broadly adopted in the market. Such
technological progress is not lost, but is more often used to deliver a wider range of services for which energy is being used.3 Here the diffusion of technical progress may be slower, but can persist even when energy prices are low or falling. The reason for this paradox is that the technical progress in question (e.g., variable valve timing for internal combustion engines) is no longer solving the problem for which it was invented (higher fuel efficiency), but, by providing more services (a broader and higher torque curve from the same engine displacement), is a weapon in the auto industry’s competition for the consumers’ pocketbooks.
Scale, the study of climate policy and its evolution
At the outset I argued that climate change is not a problem of unprecedented scale: humanity has a long history of affecting its environment to the limits of its known extent. In contrast, I believe the organizational scale and persistence needed to implement an effective climate policy has no historic precedent. Greenhouse gases are long lived. The climate system responds with a lag of between 10 and 50 years. The dynamics of terrestrial ecosystems and carbon storage are on the century time-scale, and ocean processes have elements whose temporal extent can span more than a millennium. In order for us to entertain a successful solution to the climate problem, we need to recognize the required longevity of an effective policy. Such a policy needs to stabilize greenhouse gas concentrations in the atmosphere, and doing so requires century-scale persistence in control of emissions. Few human endeavors have spanned such time-scales unchanged. Political systems rarely last more than a decade. Even fundamental social movements rarely last more than half a century. For example, consider the social programs that shaped the governments of Europe in the decades following WWII.
All have come to be reviewed and redefined in the closing decade of the twentieth century. Sovereign nations also seem to have a longevity that rarely exceeds a century.4 It is hard to imagine how a climate policy could be made stable over such long time-scales. Even the signatories to the treaty will change over its requisite duration. The only feasible approach to making sure climate policy can survive this underlying pattern of instability is to ensure there are irreversible steps in the path to lower greenhouse gas emissions. This irreversibility would ensure continuation of reduced emissions even when the forces that made climate policy desirable fail to see their vision through.
3 Take the efficiency of internal combustion engines. Today’s average engine is twice as efficient as an equivalent engine (of similar output) was 30 years ago. However, as energy prices stabilized and then fell during the 1980s and 1990s, the performance of engines was nudged up, almost every year, in order to attract customers.
4 Religious movements probably last the longest, but evolve considerably over time.
Beyond unprecedented longevity, a successful climate strategy needs to involve all major emitters of greenhouse gases. Without the participation of the OECD, the former Soviet states, India and China, emission reduction efforts have little chance of assuring stabilization of greenhouse gas concentrations in the atmosphere. At present, climate change is among the last concerns of most less-industrialized countries. They face the danger of instability as they have raised the expectations of their populace with visions of plenty and wealth in the wake of liberalized markets and globalized trade. It will be difficult to meet these expectations. It may be possible to implement climate policy as a means of reducing expectations: just as in times of war, the general public willingly adopts austerity and hardship in order to achieve a greater good. Whether this approach will or can be adopted remains to be seen. It is possible that a clever government can translate public concern about extreme events into adoption of a climate policy. Imagine a setting in which the above has been achieved and a fairly comprehensive emissions control program is in place. The public is likely to continue to interpret extreme weather conditions as manifestations of climate change. How will they respond when we continue to have extreme events even after a decade or more of self-imposed austerity? I believe there will be strong local forces to break global compacts to control emissions. Here again, cognitive forces dominate the dynamics of policy formation and dissolution. If a large enough party to the global climate accord steps away from the agreement, resuming a growing emissions trajectory, the burden of control for the remaining parties to the agreement can grow to the point of extreme economic discomfort and further defections. A domino effect would then take over and the mitigation policy would collapse.
The mechanism and probability of such policy failures are reflected in ICAM-3 and discussed elsewhere [24]. This is an example of how, at the international level, the cross-scale organizational features of initiatives for climate policy implementation dominate the dynamics of their stability and success. An interesting aspect of the climate change challenge is that mitigation is not the only policy option. There is likely to be adaptation to climate change, whether or not there is a mitigation policy. This can be undertaken at different scales appropriate to representation of different aspects of climate change and its impacts. However, geoengineering of the climate system is also a possibility, one that can be launched unilaterally by a nation that perceives a sufficient threat from climate change. The requisite technology is not too sophisticated, the scale of the impact can be limited to one region, and the persistence of the effect can be as short as a few weeks. This policy can be launched with little prior preparation, yielding almost immediate relief from some aspects of climate change. In summary, different policies require different scales of participation and persistence. For mitigation (the policy most often talked about) to be successful requires a scale of participation that has no historic precedent.
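The domino effect described above can be caricatured as a simple threshold model. This is a hypothetical sketch, not the ICAM-3 mechanism of [24]; the tolerance thresholds and burden levels below are invented. Each signatory tolerates a per-party share of the control burden up to some threshold, and each defection raises the share borne by those who remain:

```python
def remaining_parties(thresholds, total_burden):
    """Return the size of the stable coalition, if any, after defections.

    Each party stays while its share of the total burden is within its
    tolerance threshold; every defection re-divides the burden among
    fewer parties, which may trigger further defections.
    """
    parties = sorted(thresholds)
    while parties:
        share = total_burden / len(parties)
        stayers = [t for t in parties if t >= share]
        if len(stayers) == len(parties):
            return len(parties)          # nobody else defects: stable coalition
        parties = stayers                # defections raise the others' share
    return 0                             # domino effect: full collapse

tolerances = [1.5, 2.0, 2.5, 3.0, 4.0]
print(remaining_parties(tolerances, 8))    # modest burden: coalition survives
print(remaining_parties(tolerances, 10))   # slightly higher burden: collapse
```

With a burden of 8 units, one marginal party defects and the remaining four settle at a share each can bear; at 10 units, the same first defection pushes the share past successive thresholds and the coalition unravels completely.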
Conclusions on Scope and Scale
The examples I have offered suggest that even in the study of a global process such as climate change, the scope should be expanded to include local phenomena, such as local changes in sea level or initial conditions for coastal developments. In other words, context matters a great deal in how climate change impacts will emerge and how well we can cope with these. There is broad agreement that multiple stresses acting on the system simultaneously are where we should be focusing our attention. We are now in a better position to realize that interactions between different stresses and the remedies we adopt in dealing with these are how we change the profile of our vulnerability [25, 26]. I worry that the narrower scope of initiatives such as the Intergovernmental Panel on Climate Change has led to too much focus on climate issues and insufficient attention to other processes of environmental and social change. I also believe that there has been a misallocation of human capital to the study of climate-related issues while these other issues loom larger and more immediate. Therefore, I fear that the narrower focus on climate change, adopted by so many of us, has needlessly limited the generation and delivery of appropriate scientific information to the decision-makers responsible for shepherding local, regional and national development plans. The focus on crafting a global accord on climate change has led to a political impasse that, even if ever made substantive, is unlikely to deliver a solution that will be embraced by industrialized and non-industrialized nations alike. Meanwhile, we could have tried to craft an accord on a global effort to deliver potable water, or sanitation. We could have launched a program on mapping global natural hazards and put into place institutions that can limit our vulnerability to these.
Unfortunately, as we continue to focus on Kyoto, these and other opportunities to implement policies that would be good whether or not the climate changes are being lost every day. The insight I hope to illustrate with the examples above is the importance of understanding and representing interacting processes at appropriate scale(s). All too often, the different sides of an equation (or system in dynamic tension) are represented as being at the same scale. The social sciences are an aggregation of scholarly studies at different scales: from cognitive psychologists who focus on the individual, to organizational behaviorists who study groups of people aiming to achieve a specific goal, to social and political scientists who study our interactions at higher levels of aggregation. What makes the social sciences so very difficult is that, under the appropriate conditions, observed phenomena are under the influence of forces at many different scales. Unlike the natural sciences (specifically physics), where at a given scale one force of nature dominates interactions, in social interactions the cognitive processes of the individual are affected by the culture of the society, and the society’s culture can be shaped under the influence of an individual’s
thought processes. This is not simply true for public policy; it reaches deep into our psyche and permeates how we conduct research, in both what we choose to study and how we interpret available empirical evidence [27]. In a sense this can be viewed as coming back full circle. After ten years of research, I am still asking: what is good climate policy, and what is good policy if the climate changes? But at least I have some idea that the problem needs to be tackled using multi-scale analyses that reflect human cognitive and organizational issues as well as the scales at which natural processes operate.
References
1. Root, T. L., and S. H. Schneider, 1995. “Ecology and climate: Research strategies and implications.” Science 269: 331–341.
2. Rubin, E. S., 1991. “Benefit-Cost Implications of Acid Rain Controls: an evaluation of the NAPAP Integrated Assessment.” Journal of the Air & Waste Management Association 41: 914–921.
3. Rubin, E. S., M. J. Small, C. N. Bloyd, and M. Henrion, 1992. “An Integrated Assessment of Acid Deposition Effects on Lake Acidification.” Journal of Environmental Engineering, ASCE 118: 120–134.
4. Russell, A. G., H. Dowlatabadi, and A. J. Krupnick, 1992. “Electric Vehicles and the Environment.” Energy & Environment 3: 148–160.
5. Henrion, M., and M. G. Morgan, 1985. “A Computer Aid for Risk and Other Policy Analysis.” Risk Analysis 5: 195–208.
6. Morgan, M. G., M. Henrion, S. C. Morris, and D. A. L. Amaral, 1985. “Uncertainty in Risk Assessment.” Environmental Science and Technology 19: 662–667.
7. Henrion, M., M. G. Morgan, I. Nair, and C. Wiecha, 1986. “Evaluating an Information System for Policy Modeling and Uncertainty Analysis.” Journal of the American Society for Information Science 37: 319–330.
8. Bostrom, A., B. Fischhoff, and M. G. Morgan, 1992. “Characterizing Mental Models of Hazardous Processes: a methodology and an application to radon.” Journal of Social Issues 48: 85–100.
9. Morgan, M. G., B. Fischhoff, A. Bostrom, L. Lave, and C. J. Atman, 1992. “Communicating Risk to the Public: first learn what people know.” Environmental Science and Technology 26: 2048–2056.
10. Dowlatabadi, H., and M. G. Morgan, 1993a. “Integrated Assessment of Climate Change.” Science 259: 1813.
11. Lave, L. B., H. Dowlatabadi, G. J. McRae, M. G. Morgan, and E. S. Rubin, 1992. “Making Global Climate Change Research More Productive.” Nature 355: 197.
12. Dowlatabadi, H., and L. B. Lave, 1993. “Pondering Greenhouse Policy (Letter).” Science 259: 1381.
13. Lave, L. B., and H. Dowlatabadi, 1993. “Climate Change Policy: The Effects of Personal Beliefs and Scientific Uncertainty.” Environmental Science and Technology 27: 1962–1972.
14. Dowlatabadi, H., and M. G. Morgan, 1993b. “A Model Framework for Integrated Studies of the Climate Problem.” Energy Policy 21: 209–221.
15. Frederick, S., 1999. Time Preferences. Department of Social and Decision Sciences. Pittsburgh, PA: Carnegie Mellon University.
16. IPCC, 2001. Third Assessment Report: WGI. Cambridge: Cambridge University Press.
17. Schelling, T. C., 1983. Climate Change: Implications for Welfare and Policy. In Changing Climate. National Research Council. Washington, D.C.: National Academy Press.
18. Schelling, T. C., 1992. “Some Economics of Global Warming.” The American Economic Review: 1–14.
19. Schelling, T. C., 1994. “Intergenerational Discounting.” Energy Policy 23: 395–402.
20. West, J. J., and H. Dowlatabadi, 1998. On assessing the economic impacts of sea level rise on developed coasts. In Climate, Change and Risk, T. E. Downing, A. A. Olsthoorn, and R. S. J. Tol (eds.). London: Routledge.
21. Azar, C., and H. Dowlatabadi, 1999. A Review of the Treatment of Technical Change in Energy Economics Models. In Annual Review of Energy and the Environment, R. Socolow (ed.). Palo Alto, CA: Annual Reviews Inc. 24: 513–543.
22. Weyant, J. P., and T. Olavson, 1999. “Issues in modeling induced technological change in energy, environmental and climate policy.” Environmental Modeling and Assessment 4: 67–85.
23. Dowlatabadi, H., 1998. “Sensitivity of Climate Change Mitigation Estimates to Assumptions About Technical Change.” Energy Economics 20: 473–493.
24. Dowlatabadi, H., 2000. “Bumping against a gas ceiling.” Climatic Change 46: 391–407.
25. Graetz, D., H. Dowlatabadi, J. Risbey, and M. Kandlikar, 1997. Applying Frameworks for Assessing Agricultural Adaptation to Climate Change in Australia. Canberra: Center for Integrated Study of the Human Dimensions of Global Change.
26. Risbey, J., M. Kandlikar, D. Graetz, and H. Dowlatabadi, 1999. “Scale and Contextual Issues in Agricultural Adaptation to Climate Variability and Change.” Mitigation and Adaptation Strategies for Global Change 4: 137–165.
27. Gould, S. J., 1981. The Mismeasure of Man. New York, NY: W. W. Norton & Co.
5 Scaling Issues in the Social Sciences
TOM P. EVANS¹, ELINOR OSTROM² AND CLARK GIBSON³
¹ Department of Geography, Center for the Study of Institutions, Population and Environmental Change, Indiana University, Bloomington, United States
² Department of Political Science, Center for the Study of Institutions, Population and Environmental Change, Indiana University, Bloomington, United States
³ Department of Political Science, University of California, San Diego, United States
Abstract
The issue of scale is critical to the understanding of data collection, data representation, data analysis and modeling in the social and biophysical sciences. Integrated assessment models must acknowledge these scale issues in order for the utility and results of these models to be evaluated. This awareness of scale has been widely recognized in the physical sciences, and a variety of tools have been developed to address scale issues, although there is no general consensus on what tools to apply in what situations. Scale issues have been less widely addressed in the social sciences, but recent literature suggests an increasing awareness. This paper addresses the importance of scale issues for social data as they relate to integrated assessment modeling. A review of terminology related to scale issues is presented to address the vagueness and lack of consensus in this terminology. Scientists from a variety of social disciplines have addressed scale issues from different perspectives, and these are briefly reviewed.
Keywords: scale; spatial analysis; social science; integrated assessment
Acknowledgements
Institutional support from the Center for the Study of Institutions, Population and Environmental Change (NSF; SBR 9521918) is gratefully acknowledged. Bill McConnell provided helpful comments in the construction of this paper.
Introduction
The importance of scale issues is widely recognized. Methods of dealing with the effects of scale are not, however, universally accepted. Cohesive theories of scale, with a few isolated exceptions, have not yet been developed, widely applied or widely accepted. Because scale issues pervade data collection and representation efforts, spatial data modeling, including integrated assessment modeling (IAM), is dramatically affected by these scale issues [1]. Both social and natural scientists acknowledge the importance of scale effects and how relationships and processes operate differently at different scales [2, 3]. Because of the varying nature of social-biophysical interactions as a function of scale [4], many researchers have concluded that a multi-scale approach is necessary to understand the relationships between variables or the function of social and biophysical processes [2, 5]. While scale effects are widely acknowledged, a considerable amount of research in integrated assessment and global change research is still conducted at a single scale of analysis using data collected at single scales. Important conclusions are drawn about the relative impact of different factors using only household, regional or global scale analysis. This dependence on a single scale is understandable. In some cases, data availability may limit the ability to examine relationships across scales, particularly for regional and global extents. In other cases, labor resource limitations may make a multi-scale approach infeasible. A continent-scale examination of the impact of population growth on deforestation can hardly expect to use household-level data as a social unit of analysis for such a large geographic extent. Yet it is critical that researchers pursue research questions that operate at large scales, where these data availability problems and resource limitations predominate.
In these cases, researchers must attempt to hypothesize about the impact of different operational scales on the phenomenon they are studying. This paper explores how some social scientists have acknowledged the importance of scale issues, the various meanings of scale in different social science disciplines, and the implications of scale issues for social science within the realm of IAM. The first section of this paper discusses the terminology related to scale issues from different social science disciplines. The second section reviews how different disciplines have developed different meanings of “scale”, drawing from Gibson et al. [3]. Section three discusses the implications of scale for social data collection, focusing on individual, household, community and regional levels of aggregation. The following two sections review specific methods of dealing with scale developed in the social and natural sciences. The paper closes with a discussion of how these components relate to data collection and representation for integrated assessment and integrated assessment modeling.
Scaling Terminology
The vagueness of the terms “scale” and “level” contributes to the difficulty of developing universal theories of scale effects [6]. While the meaning of “scale” varies across (and within) disciplines, there are common threads within these different meanings. Scale refers to the spatial, temporal, quantitative, or analytical dimensions used by scientists to measure and study objects and processes [3]. Levels refer to regions along a measurement dimension. For example, the terms micro, meso, and macro refer in general to regions on spatial scales referring to small, medium, and large-sized phenomena. To be precise about the concept of scale, one needs to refer to the concepts of extent and resolution. Extent refers to the magnitude of a dimension used in measuring some phenomenon. Spatially, extent may range from a meter or less to millions of square meters or more. Temporally, extent may involve a second, an hour, a day, or even a century, a millennium, or many millennia. The extent of a scale establishes the outer boundary for what is being measured. Resolution refers to the precision used in measurement. The term “grain” is used to refer to the smallest unit of resolution along a particular scale. Social scientists use a variety of resolutions in their measurements. In regard to time, physical scientists frequently use extremely small units of time when measuring physical processes. Most social scientists, on the other hand, rarely use a resolution of less than an hour; such a unit would only be used when timing groups of individuals performing particular tasks such as labor allocation. Many types of social science data are recorded on an annual or decadal basis. In this paper, we will use small scale to refer to phenomena that are limited in their spatial, temporal, or numerical extent and large scale to refer to phenomena that are big in quantity or space.
This is how many people understand the terms in everyday usage, but it is exactly the opposite of the way the terms are used by cartographers. For integrated assessment modeling, choices about scales, levels, extent, and resolution affect what data are collected, how they are calibrated, what data can be used for validation, and what basic units can be used in a model of a process. For example, a researcher must determine the appropriate level of analysis when examining the relationship between changes in agricultural production and climate change. In developing countries, land management decisions are often made at a household level, yet most climate-change models are focused at a regional level of analysis. It is these cross-scale issues that need to be reconciled within integrated assessment modeling. It is important to distinguish between how scale issues relate to data collection versus data representation, and how scale-related terminology refers to both areas in the social sciences. In terms of data collection and cartographic representation, scale implies a representative fraction related to portraying data from the real world on a map (e.g., topographic maps showing village locations and road features).
78 SCALING ISSUES IN THE SOCIAL SCIENCES
The implications of map scale are common across disciplines and well documented. For example, scale affects the representation of a road used for market accessibility in the same way scale affects the representation of a stream or contour line. In the social sciences, scale can also refer to the social unit of analysis. For example, socio-demographic data are collected at the individual, household, and community levels. This social unit of analysis is critical in the observation of certain processes. For example, in many rural environments, land-use decisions are made at the household level within a regional context. Individual migration decisions are made within the context of the household. Both cases are examples of lower-level agents or actors being affected by higher-level forces or factors. While social data are commonly collected at one of the above social units of analysis, it should be noted that these terms (individual, household, community) mean different things in different areas. For example, the meaning of a household differs dramatically between cultures. Land settlement patterns and institutional regimes affect how a community is defined and what the implications of a community arrangement are for decision making, and vice versa. It is also important to understand how the terms “scale-up” and “scale-down” apply to social science data. Curran et al. [6] have noted that these terms alone are vague, and that specifying between which scales data are being transformed is a more precise method of description. Household-level data are commonly scaled up to a regional-level representation, such as with census enumeration units. Household-level data are also commonly scaled up to a community level. With this scaling-up, data variability is clearly lost, as with any data aggregation procedure. It should also be noted that different social variables are affected differently by this scaling-up.
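How differently variable types behave under such aggregation can be sketched in a few lines of code (Python and all the figures below are assumptions for illustration, not data from the text):

```python
from collections import Counter
import statistics

# Invented household records for one community: (size, religion, income).
households = [(2, "A", 300), (4, "B", 2400), (3, "A", 500), (5, "C", 1900)]

# Counts scale up cleanly: the community population is just the sum.
population = sum(size for size, _, _ in households)

# Nominal attributes do not: a single community-level "religion" value
# would have to discard the composition entirely.
composition = Counter(religion for _, religion, _ in households)

# Continuous attributes lose their variability when only a summary is kept.
mean_income = statistics.mean(income for _, _, income in households)
income_spread = statistics.stdev(income for _, _, income in households)

print(population, dict(composition), round(mean_income), round(income_spread))
```

The sum survives aggregation without loss, while the nominal composition and the within-community income spread are exactly what a single community-level figure throws away.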
For example, household size scales up to community-level population totals with little impact on the precision or accuracy of those data (assuming no households are excluded in the household survey). In contrast, data on ethnic composition or religious affiliation do not scale up well, as these are nominal data attributes. In addition, scaling up or aggregating data on labor allocation and land use from the household to the community level loses the complex household dynamic that exists between these variables. Social science researchers also sometimes use data at a higher scale to represent a social unit at a lower scale. For example, social data at the census block or block-group level (U.S. census enumeration units analogous to groups of city blocks) can be mapped to households within those enumeration units. For example, hospitals collect address information for patients, and address-matching procedures allow these addresses to be spatially located. The socioeconomic characteristics of the census unit an address falls within, such as educational background or median family income, can be assigned to a patient as a prediction of that patient’s characteristics. However, this downscaling or disaggregation clearly risks misrepresenting households because
the higher-scale census block-group data do not indicate the variability within the census unit. Fuzzy data techniques, for example, can be used to represent the reliability with which the data can be mapped to lower scales, or the boundaries within which a data value is likely to fall, but these techniques still do not provide the level of accuracy or reliability that would come from data initially collected at the household level. Context of Scale Issues in Social Research Following Curran et al. [6], who conducted a similar review of scale issues relating to biology and ecology, we use the following propositions to discuss how scale issues relate to social science. Here we explore a set of statements revolving around social-biophysical relationships to establish a context for scale issues in IAM. In this discussion, we address where these statements hold for different types of social data and where they break down.
■ The small things are the ones that determine the characteristics of the living world
The complexity in social and natural systems is beyond the ability of researchers to represent completely. Yet these systems can be modeled through simplifications and generalizations. All systems can be broken down into components, and researchers strive to identify the components in the living world and the relationships among these components. One of the most difficult tasks in integrated assessment modeling is determining what level of complexity is needed in a model and which model components need more or less complexity. Just as the landscape-scale pattern and composition of forests can be broken down into the physiological characteristics of the individual species within the forest, the economic structure of a society can be broken down into the household dynamics of labor decisions and financial management. Likewise, it is the aggregate condition and situations of the households, communities, and businesses within a region that determine the regional economy of that area. All land-use decisions are affected by the individual level in some way. In environments where subsistence agriculture is the major mode of production this is particularly the case. But even in areas with a higher degree of economic development, agricultural landholdings (e.g., large farms practicing agriculture with high inputs) may be managed by a relatively small number of individuals. One notable exception to this rule is publicly managed lands, where land management decisions are made in the context of a set of political institutions regulating how that land may be managed. But, 1) public lands comprise a relatively small proportion of the earth’s surface, and 2) while many institutional regulations are not necessarily the product of individual decisions, individual-level decision making does enter into the land-use equation at other points, such as where within a forest stand to selectively cut trees. For an integrated assessment model with a
biodiversity/ecological function component, these lower-level individual decisions made within the context of higher-level institutional regulations are important. It is these lower-level decisions that create land-use/land-cover outcomes that in aggregate produce higher-level patterns and processes.
■ The small things are the ones most amenable to study by the methods of science
Regional economies are characterized by a summary of statistics for the businesses and individuals in each region. The mean cost of new housing in different regions is the product of the costs in the different areas composing each region. While data are often reported at coarse scales, it is fine-scale measurements that allow the generation of this coarse-scale reporting. This statement can be extrapolated to temporal scales of analysis as well. With the exception of dramatic changes in landcover, such as those associated with colonization in the midwestern United States in the 1800s or the Brazilian Amazon in the 1970s, landcover changes generally operate over long periods of time. An examination of an area over the course of one or two years reveals a very different process than an examination over the course of a decade or several decades. Likewise, examining an area using a monthly interval yields dynamic relationships that are not observable using a coarser time interval such as five or ten years.
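The effect of temporal grain can be illustrated with a short sketch (Python is an assumption, and the monthly series is invented, combining a seasonal cycle with a slow trend):

```python
import math

# Hypothetical monthly index of cleared land with a seasonal cycle
# superimposed on a slow trend (purely invented for illustration).
monthly = [100 + 0.2 * t + 15 * math.sin(2 * math.pi * t / 12)
           for t in range(120)]  # ten years of monthly observations

# Fine resolution: the month-to-month record reveals the seasonal swing.
monthly_range = max(monthly[:12]) - min(monthly[:12])

# Coarse resolution: five-year averages show only the slow trend.
first_half = sum(monthly[:60]) / 60
second_half = sum(monthly[60:]) / 60

print(f"swing within a single year: {monthly_range:.1f}")
print(f"five-year averages: {first_half:.1f} -> {second_half:.1f}")
# The seasonal dynamic visible at monthly grain disappears entirely
# when the same series is observed at a five-year interval.
```

The within-year swing is larger than the difference between the two five-year averages, so an observer sampling at the coarse interval would see only gradual change.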
■ The large things are the ones that have the most profound effect on humans
While small-scale phenomena frequently lend themselves to easier data collection, it is often larger-scale phenomena that attract major interest among global change researchers. Such large-scale phenomena include global temperature changes, acid rain, tropical deforestation, carbon sequestration, and regional-global species diversity. It is also the case that large-scale government policies have an impact on the opportunities and constraints faced by many people. Current patterns of extending markets to a global extent are the result of major international treaties. Similarly, restrictions on the trade of rare and endangered species have come about through national legislation as well as international treaties. For all the importance of large-scale phenomena, there are many small-scale phenomena that also impact large numbers of people. The tragic consequence of the spread of HIV is but one example of how many small-scale processes add up to large-scale disasters.
■ There is a feeling that we should be able to use our knowledge of small things to predict and manage these large-scale phenomena
Because of the difficulty and expense of collecting detailed and comprehensive data at the regional and global scales, researchers often rely on finer-scale data to characterize social-biophysical relationships. Much of the research in human ecology, an arena where social-biophysical relationships have been studied from a systems perspective, has concentrated on local-scale relationships. Examples
include fallow periods in swidden agricultural systems, nutrient loss, soil erosion, and other forms of environmental degradation. These observations at the local scale have widely been used to inform policy makers whose decisions affect regional and global change. Yet it is unclear to what extent local-scale phenomena scale up to regional- and global-scale effects. Can the behavior of individuals modeled at a local scale be represented at a higher scale? Can the outcomes of the behavior of those individuals modeled at the local scale be adequately depicted by a higher-scale representation? Many economists would argue that a macro-scale representation can adequately represent the functioning of a system that is composed of a set of individuals. But in terms of policy prescriptions it is important to understand what the impact of higher-scale (e.g., national or regional) policies will be at lower scales. The impact of a specific policy prescription in a region with a highly homogeneous ethnic composition can be quite different from the impact of that policy prescription in a region with a highly heterogeneous ethnic composition.
■ Although the small things are easier to study and understand, they are more numerous
Social data collection is expensive and time consuming, whether the data are household surveys, demographic data, or agricultural prices from regional co-ops. While remote sensing provides the ability to characterize the landcover or meteorological conditions of an area (at specific scales), no such method exists to rapidly assess the condition of human systems across broad spatial extents at any scale. The resources necessary to collect full data at the finest scale possible for a large spatial extent are beyond the capabilities of researchers and governments alike. Even mammoth efforts such as the U.S. Census make compromises in data collection. While every household is included in the survey, there are two different forms for data collection: a short form for the complete census, and a longer form, sent to approximately one-sixth of all households, that provides still further information for a subsample of the population. Beyond this, there are households and individuals missed by the census, such as migrant and transient populations. In addition, the questions included in national-level censuses are generally broad and not focused on a specific research question. Therefore, researchers interested in focused areas of research understandably limit the spatial extent of their research and focus on specific scales of analysis. These compromises allow researchers to examine relationships that would otherwise remain undiscovered or poorly understood. Yet the compromises in research design limit the researcher’s ability to fully document and characterize the nature of the relationships at work.
■ The large scale is likely to have at least some characteristics we cannot predict at all from a knowledge of the small scale.
One of the major intellectual breakthroughs of the eighteenth century was the work of Adam Smith and his recognition that studying a single firm was not
sufficient to understand the consequences of exchange among a variety of firms in an open competitive market. Thus, most of modern economics is based on a study of the competitive dynamics among many firms rather than the internal organization and decision making of a single firm. All processes that involve some level of competition are likely to generate phenomena at a larger scale that are not fully predictable from a focus strictly on the smaller scale.
■ The small scale is likely to have at least some characteristics we cannot predict at all from a knowledge of the large scale.
Similarly, examining some data or processes at a large scale removes considerable variation in what is happening at a smaller scale. For example, it is frequently thought that population change leads to rapid deforestation. For countries as a whole, population density does appear to be related to the amount of forested land remaining in the country. At a micro-level, however, many studies have shown that increases in population either do not affect the extent and composition of forests in a smaller region or actually lead to an enhancement [7, 8]. Thus, while some areas are adversely affected by increases in population located nearby, or in other regions of a country, other areas are able to use an increase in population to invest more labor in protecting a forest. In addition, large-scale processes and relationships mask the variability that exists at smaller scales. While an overall population growth rate can be determined for an entire region, there are households with both low and high fertility within that region. Household dynamics are what will inform the researcher about which factors contribute to the overall level of fertility, whether they be income, education, or access to contraception.
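This reversal between scales can be illustrated numerically (a Simpson's-paradox-style sketch; Python and the village figures are invented for illustration, not data from the studies cited):

```python
# Invented village-level data from two hypothetical regions, chosen to
# illustrate how an aggregate relationship can reverse at a finer scale.
region_a = [(10, 80), (20, 84), (30, 88)]   # (population density, % forest)
region_b = [(60, 40), (70, 44), (80, 48)]

def slope(points):
    """Least-squares slope of forest share on population density."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

# Within each region, more people go with MORE forest...
print(slope(region_a), slope(region_b))   # both positive

# ...but pooling the regions, as a national-scale analysis would,
# yields a negative relationship: denser places have less forest.
print(slope(region_a + region_b))
```

The sign of the relationship depends entirely on the scale of analysis, which is why micro-level findings [7, 8] can legitimately contradict country-level correlations.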
■ Scaling-up is not part of our scientific tradition
Despite the acknowledgement in both the social and physical sciences of the importance of scale effects, most theoretical progress has been made while disregarding it. For example, theories of the demographic transition, agricultural intensification [9], and collective choice [10] are major contributions, but they do not acknowledge the operation of these theories across scales. Hierarchy theory arguably comes closest to a conceptual framework for addressing scale issues, but this theory is not widely applied, and methods for addressing it in research are lacking.
Scale Issues in Social Science Disciplines With this context for scale issues established, we now turn to a discussion of how scale has been approached in different social science disciplines, drawing on Gibson et al. [3]. The content of this earlier work is adapted and modified in the following section. This discussion demonstrates that many social science disciplines are cognizant of scale-related problems in data collection and data representation. The different approaches that different
disciplines adopt are partly a product of the nature of the data associated with a particular discipline and partly related to the inter-disciplinary nature of certain disciplines and the ability of ideas and techniques to cross-fertilize. Scale issues in geography A major focus of geographers is to describe and explain spatial patterns. Depending on what in a space matters to particular researchers, geography is divided into subdisciplines that parallel most of the major disciplines across the natural and social sciences; e.g., physical geography includes geomorphology, biogeography, and climatology, while human geography includes economic, political, and urban geography. Geographers gain their disciplinary identity by their explicit consideration of spatial relationships. Spatial scales are thus critically important in this discipline, and span in their extent from “a single point to the entire globe” [11]. As geographers have addressed more questions related to global change, they have also become increasingly aware of the linkage between spatial and temporal scales. The choice of extent and resolution that conveys relevant information most efficiently has always been the central problem of cartography. Discussions of the problem of scale in a more methodological and abstract fashion did not start in physical and human geography until the mid-twentieth century, when geomorphologists began to address the problem. Now, scale issues are found at the center of methodological discussions in both physical and human geography. Regional scales were used prominently during the first half of the twentieth century, until new research technologies, combined with a need for a more scientific mode of explanation, led to more microlevel studies. Until recently, most geographic studies gathered data at a microlevel, however, for the purpose of contributing to larger geographic domains.
Given an increasing interest in global phenomena, however, geographic studies are shifting toward more meso- and macroscale studies [12]. Like ecologists, geographers have found that the consideration of scale problems is fundamental to the identification of patterns and their explanation. In spite of the ongoing debate on the appropriate scale at which geographic processes should be analyzed, widespread agreement exists that the explanatory variables for a given phenomenon change as the scale of analysis changes. Behavioral geographers examine the correlation between spatial and temporal scales in individual activities. Spatial scale, temporal scale, and the degree of routinization are highly correlated in many human activities. Patterns that appear to be ordered at one level may appear random at another.1 For example, shoe stores show clumping patterns to attract more customers, but each store in a clump tries to place itself as far as possible from the others [11]. When generalizations of propositions are made across scales and levels in geography, they can result in common inferential fallacies. These erroneous inferences have often been attributed to poor theory. In fact, they often reflect a lack of data, or the limits of gathering data at multiple levels. Meentemeyer [11] suggests using data-rich, higher-level variables as theoretical constraints on lower-level processes to help predict lower-level phenomena. The issues posed by the growing interest in globalized phenomena have led some human geographers to discuss new types of scaling issues. In postmodern interpretations of globalization, human geographers assert that the scale of the relationship between the dimension and the object is important. Three types of scales involve different relationships: absolute, relative, and conceptual. An absolute scale exists independently of the objects or processes being studied. Conventional cartography, remote sensing, and the mapping sciences use absolute spatial scales, usually based on a grid system, to define an object’s location and to measure its size. An advantage of using absolute scales is that hierarchical systems can easily be created when a larger (or longer) entity contains several smaller (shorter) ones (e.g., Nation-City-District-Neighborhood; Century-Decade-Year-Month-Week). Geographers have paid increasing attention to relative space as they try to conceptualize the processes and mechanisms in space rather than the space itself. Relative scales are defined by, rather than define, the objects and processes under study.2 A relative concept of space regards space as “a positional quality of the world of material objects or events,” while an absolute concept of space is a “container of all material objects” [13, 14].3
Relative space is particularly important in studies of behavioral geography that focus on individual perception of space. When distance is measured in terms of the time and energy needed for an organism to change its position from one place to another, absolute distance rarely corresponds with relative distance. The plasticity of space is represented by the work of Forer [15], who examined both the time and the net distance that it took to reach diverse locations within New Zealand in 1947 as compared to 1970, after growth in the airline network. Finally, in addition to spatial denotations, geographers also use terms like global and local scale to stress conceptual levels. Global and local may correspond to the conceptual levels of “totality, comprehensiveness” and “particularity, discreteness, contextuality” [12]. Just as a spatial scale implies a temporal scale in physical geography, so too does space link with conceptual scale in human geography.
1 Human migration is a phenomenon that may occur at different spatial scales: within an urban area, within a region, within a nation, or across national boundaries. The patterns of intraurban migration are related to individual-level variables such as age, education, and individual family income. Intrastate migration, on the other hand, is explained mainly by aggregate variables such as “labor demand, investment, business climate, and income” [11: p165]. If the spatial scale or level is fixed, variables may also change according to a temporal scale. For example, different variables related to patterns of precipitation in and around mountains vary over temporal levels of hours, days, and years [11: p166].
2 Jammer [13] first contrasted absolute and relative concepts of space in his review of the history of the concept of space in physics. In fact, the absolute concept of space is a rather modern development that accompanied Newtonian physics, in which relations of objects were represented in absolute terms [14].
3 The classical reference for geographers, [14], starts with the psychological, cultural, and philosophical problems of understanding the concept of space, which he then connects with issues of measurement and spatial representation. For Harvey, a central question is “how concepts of space arise and how such concepts become sufficiently explicit for full formal representation to be possible” [14: p192]. The early geographers relied more on Kant and Newton and thus on absolute scales. The construction of non-Euclidean geometry in the nineteenth century and the development of Einstein’s theory of relativity challenged the absolute concept of space. Since the mid-twentieth century, geographers have included more measures of relative space in their studies. Here, space does not exist by itself but “only with reference to things and processes” [11: p164].
Scale issues in economics Economics has developed two distinct types of theories – microanalytic and macroanalytic. Microtheories tend to examine the incentives faced by producers, distributors, retailers, and consumers as they are embedded in diverse market structures. Macroeconomists study large-scale economic phenomena, such as how various economic forces affect the rate of savings and investment at a national level. Few economists attempt to link these two distinct levels of theory. Recently, Partha Dasgupta [16] addressed the problem of linking across spatial and temporal scales within economic theory. Dasgupta suggests that economics at its core tries to explain “the various pathways through which millions of decisions made by individual human beings can give rise to emergent features of communities and societies” [16: p1]. By emergent features he means “such items as the rate of inflation, productivity gains, level of national income, prices, stocks of various types of capital, cultural values, and social norms” [16: p1]. He points out that individual decisions at any particular time period are affected by these emergent features (which themselves result from very recent individual decisions). Some of the emergent features are fast-moving variables (e.g., changes in national income and the rate of inflation) and some are slow-moving variables (e.g., changes in cultural values, institutions, and norms). When economists have studied short periods of time, they have simplified their analyses by taking slow-moving variables as exogenous and focusing on the fast-moving variables. This has been a successful strategy for many economic questions, but Dasgupta [16] points to repeated findings in ecology that it is the interface between fast- and slow-moving variables that produces many important phenomena.
Scale is most overtly addressed by microeconomists interested in the question of economies of scale and in optimization problems. Economies of scale refer to the phenomenon in which an increase in inputs within some range results in a more or less than proportional increase in outputs [17]. The quantity or magnitude of objects in both the input and output streams of a productive process represents certain levels of the process. Many propositions found in economics are expressed in terms of the relationship between the level of inputs and outputs, followed by suggestions on how to make decisions that optimize results. The law of diminishing returns refers to the diminishing amount of extra output that results when the quantity of an input factor is successively increased (while other factors are fixed). The law of increasing costs refers to the ever-increasing amount of one good that tends to be sacrificed in order to obtain an equal extra amount of another good [17: p25–29]. The optimal combination of inputs is a combination of input factors that minimizes the cost of a given amount of output and is achieved by equalizing the marginal productivity of every input factor. The optimum population for a society is the size of population that maximizes per capita income for the given resources and technology of the society [18]. The issue of generalizability is also studied in microeconomic theory. Paul Krugman [19] examines the generalizability of theoretical propositions developed at one scale of interactions to another. Theories based on competitive markets are not useful when attempting to explain the structure and behavior of firms under conditions of monopoly and less than perfect competition. Scale issues in ecological economics Ecological economists study economic phenomena using a broader perspective than traditional economics by overtly incorporating ecological processes.
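The law of diminishing returns described above can be made concrete with a hypothetical production function (the square-root form and the numbers are illustrative assumptions, not drawn from the text):

```python
# Hypothetical production function with one variable input (labor),
# other factors fixed: output = 10 * labor**0.5, a form chosen only
# to illustrate diminishing returns.
def output(labor: float) -> float:
    return 10 * labor ** 0.5

prev = output(0)
for labor in range(1, 6):
    q = output(labor)
    print(f"labor={labor}, output={q:.2f}, extra output={q - prev:.2f}")
    prev = q
# Each additional unit of labor adds less extra output than the one
# before it: the diminishing marginal product the text describes.
```

Any concave production function would show the same pattern; the point is only that the marginal product of the successively increased factor falls while the other factors stay fixed.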
Many ecological economists reject the myopic and human-centered viewpoint of mainstream neoclassical economics. They also differ from environmental economics, in that the latter is seen merely as an application of neoclassical economics to environmental issues. Instead, ecological economists adopt a broader and more holistic analytical scale: conceptually larger in spatial scale and longer in temporal scale [20]. Ecological economists criticize the “methodological individualism” of neoclassical economics as the theoretical expression of myopic economic thinking that treats the ecological environment only as an exogenous constraint on human economic activity. They argue that this narrow scale of economic analysis is responsible for the disturbances of ecosystems and the overexploitation of natural resources that destroy the foundations of human existence. The quantitative dimension of economic objects is also an important scale issue in ecological economics. Ecological economists’ discussion of scale centers on “the physical volume of the throughput” [20] or “the physical dimensions of the economy relative to the ecosystem” [21]. They take the ecosystem as a relatively fixed entity and argue that the economy grows by exploiting the ecosystem. This approach shifts the focus of economic study
from economies of scale to the scale of the economy, i.e., the scale of “all enterprises and households in the economy” [21]. Ecological economists argue that the scale of the economy should not be reduced to allocation analysis but should be addressed at the outset as a constraint on human economic activity – something that should be determined not by the price system but by a social decision that takes sustainability into account. Scale issues in urban studies In urban studies, the primary dimension of scale used is population. The scale or size of a city, unless otherwise specified, is equated to the number of people living within a given territory. Urban researchers also use alternative measures of scale, such as a city’s active labor force, number of households, value added in production processes within the territory, and spatial area [22]. The problem of optimal city size is central to urban studies, and is reflected in a variety of secondary research topics such as the planning of new cities, limiting the growth of existing cities, rebuilding destroyed or deteriorated cities, dispersal of cities as a measure of civilian defense, deconcentration of urban populations, and controlling the location of industry. These topics, in turn, depend on different optimization problems, such as the optimum population of a nation, the optimum ratio of urban to rural population, the optimum pattern of different-sized cities, the optimum size of a principal city as the service center for its tributary region, the optimum size of residential units, and the optimum sizes of particular cities or of cities of special types [23]. While at first glance these approaches appear straightforward, urban researchers wrestle with a great deal of complexity, and extensive controversy exists concerning the measurement and optimization of these phenomena.
Urban researchers addressed the issue of optimal city size most intensively and broadly in the 1970s [24], often posing it as “the problem of determining the optimal spatial distribution and hierarchy of cities of different sizes” that maximizes per capita income. Urban researchers also consider noneconomic, but no less significant, factors in their models of optimum city size, including the physical layout (accessibility to the countryside), health, public safety, education, communication, recreation, churches and voluntary associations, family life, and psychosocial characteristics. Researchers have found no general relationship between the size of a city and these desired conditions [23]. Scale issues in sociology While scaling issues have always been implicit in sociology, the publication in 1984 of Charles Tilly’s book, Big Structures, Large Processes, Huge Comparisons [25], put the importance of explicitly dealing with scale squarely on sociologists’ agenda. Tilly criticizes many aspects of traditional sociological theories because they address social processes in the abstract, without specifying temporal or spatial limits. His method is to specify the scale of
analysis first and then to find fundamental processes and structures within that scale (or, in our terms, level). The implication of his work is that multiple processes exist and some are more fundamental than others for a given level of spatial and temporal scales. For example, he argues that from the fifteenth through the nineteenth centuries in the Western world, the forms of production and coercion associated with the development of capitalism and nation states “dominated all other social processes and shaped all social structure” [25] including urbanization and migration. For Tilly, the proper problem of studying historical processes should start with “locating times, places, and people within those two master processes and working out the logics of the processes” [25]. If one were to accept his argument for the study of integrated assessment modeling, one would start by (1) defining the question of which temporal and spatial scale is crucial in affecting a particular environmental change process; (2) identifying fundamental processes (such as commercialization, industrialization, or population growth) that drive the process; (3) examining how these fundamental processes relate to one another; and (4) addressing how systematic, large-scale comparison would help us understand the structure and processes involved. Tilly’s [25] work also focuses on the concept of the levels of analysis – a higher level corresponds to a larger temporal and spatial scale. He argues that the crucial structures and processes vary as one changes the level of analysis. 
While he indicates that the number of levels between the history of a particular social relationship and the history of the world system is arbitrary, he proposes four levels as being useful: (1) at the world-historical level, the rise and fall of empires, the interaction of world systems, and changes in the mode of production are the relevant processes to investigate; (2) at the world-system level, the world system itself and its main components, such as big networks of coercion and exchange, are the foci of analysis; (3) at the macrohistorical level, major structures and processes of interest to historians and social scientists, such as proletarianization, urbanization, capital accumulation, and bureaucratization, become effective foci of investigation; and (4) at the microhistorical level, the task is to link historical processes to the experience of individuals and groups.

Coleman [26] also directly addresses the problem of analyzing multilevel social systems. Coleman critiques Weber's [27] argument in "The Protestant Ethic and the Spirit of Capitalism" for using macrophenomena at one level to explain other macrophenomena at the same level. By ignoring lower-level phenomena, Weber [27] (and others who follow this method) omits how lower-level phenomena react to macrolevel phenomena and may then act to change them. For Weber's argument, this would mean that new religious doctrines affect the values of individuals, leading to changed values about economic phenomena, new patterns of interaction between individuals, and, finally, a new economic system.
Scale issues in political science and political economy

As in other sciences, scales and levels divide political science into different subdisciplines. Many political scientists focus on the actions and outcomes of aggregated units of government operating at different geographical levels: local, regional, national, and international. Levels of human aggregation also affect what political scientists study: much research concerns the political behavior of individuals (especially voting), while another strand addresses the politics of groups, particularly political parties and interest groups. Most research undertaken by political scientists, however, tends to focus directly on a particular level of primary interest to the scholar, without much attention to how the phenomena at that level are linked to phenomena at higher or lower levels. Two exceptions worth noting are the study of federalism, which is at its heart a theory of multilevel, linked relationships, and the Institutional Analysis and Development (IAD) framework, developed by colleagues associated with the Workshop in Political Theory and Policy Analysis at Indiana University, which focuses on nested levels of rules and arenas for choice.

Although the concept of scale is rarely addressed explicitly within the subdisciplines of political science, some of the most important substantive and methodological issues addressed by political scientists relate essentially to problems of scale and level – especially the number of individuals involved. One important discussion regarding democracy concerns the differences of scale and level between the image of the original, small Greek city-states and the conditions of large, modern nation-states. In a major study of this question, Robert Dahl [28] concludes that increases in the size of democratic polities have major consequences, including limited participation, increased diversity in the factors relevant to political life, and increased conflict.
Sartori [29] argues that democracy is still possible because competition among politicians for election and re-election more or less guarantees their responsiveness to citizens. Vincent Ostrom [30, 31], who is more cautious, sees modern democracies as being highly vulnerable precisely because of problems related to the scale of interaction among citizens.4 And Benjamin Barber [32] fears that the technocratic and bureaucratic orientations of monolithic multinational corporations seriously challenge citizens' access to information and participation in effective decision making.

4 The competition for electoral office may be reduced to a media war that trivializes the discussion of public policy issues rather than clarifying important issues. Without a strong federal system and an open public economy, both of which allow for substantial self-organized provision of problem-solving capabilities, Ostrom [30, 31] views contemporary state-centered democratic systems as losing the support of their citizens, fostering rent-seeking behavior, and losing the capability to deal with major public problems.

Scholars in political economy, public choice, and social choice focus on the relationship between individual and group preferences, with scale and level issues at its core. The path-breaking work of Kenneth Arrow [33], which has been followed by several thousand articles on what is now referred to as social choice theory (for a review, see Enelow [34]), proved that it is impossible to scale up from all individual preference functions to a group preference, "general will," or "public interest" function satisfying what appear to be an essential set of axioms of desirable properties of an aggregation process. Plott [35] demonstrated that when more than two dimensions are involved in a policy choice, majority rule rarely generates a single equilibrium except when the preferences of individual members are balanced in a particular, but unlikely, manner. McKelvey [36] and Schofield [37] proved that an agenda could be constructed to include every potential outcome as a majority winner unless a single outcome dominated all others. These "impossibility theorems," combined with Arrow's earlier impossibility theorem, have deeply challenged the core presumption that simple majority rule institutions are sufficient to translate citizen preferences into public decisions that are viewed as representative, fair, and legitimate.5 Like the Arrow paradox, the theory of collective action has demonstrated a fundamental discontinuity between rationality at the individual and group levels in the face of a social dilemma.6 Olson [10] and hundreds after him have explored the ramification that, in social dilemmas, group outcomes are worse when individuals choose their own best strategies.

The relationship between scale, government, and the delivery of public goods and services has also been an important part of political science. This tradition of work starts with an awareness of market failure in regard to the provision of public goods and services. If free riding leads to the underprovision of a good through voluntary arrangements, some form of governmental provision will be necessary.
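The majority-rule cycling that underlies these impossibility theorems can be made concrete with a small, self-contained example (the voters, alternatives, and agendas below are invented for illustration):

```python
# A minimal, hypothetical illustration of the instability behind the
# impossibility theorems: three voters with transitive individual
# preferences produce an intransitive majority relation (a Condorcet
# cycle), so the agenda order decides the collective outcome.

voters = [
    ["A", "B", "C"],  # voter 1 prefers A > B > C
    ["B", "C", "A"],  # voter 2 prefers B > C > A
    ["C", "A", "B"],  # voter 3 prefers C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters ranks x above y."""
    wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return wins > len(voters) / 2

# Each alternative beats one rival 2-1 and loses to another: a cycle.
print(majority_prefers("A", "B"))  # True
print(majority_prefers("B", "C"))  # True
print(majority_prefers("C", "A"))  # True -- no alternative beats all others

def agenda_winner(agenda):
    """Pairwise elimination: the survivor depends on the order of votes."""
    winner = agenda[0]
    for challenger in agenda[1:]:
        if majority_prefers(challenger, winner):
            winner = challenger
    return winner

print(agenda_winner(["A", "B", "C"]))  # C
print(agenda_winner(["B", "C", "A"]))  # A
```

Because every alternative loses some pairwise vote, an agenda setter can steer pairwise elimination to any desired winner, which is the agenda-manipulation result in miniature.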
Different configurations of governments may be more efficient and responsive depending upon the nature of the goods and services in question [42, 43, 44]. The work of scholars focusing on local public economies has tried to understand how local units of government cooperate on the provision and production of some goods and services while competing with one another with regard to others [45, 46]. The approach is similar to that of ecologists who study the patterns of interaction among a large number of organized units within a spatial terrain and discover emergent properties resulting from the way that individual units work together. Scholars have found that in many cases a multilevel, polycentric system is more efficient than either one large, metropolitan-wide governmental unit or a single layer of smaller units [47, 48].

In addition to recognizing that governmental units operating at diverse spatial levels can be more efficient than any arrangement operating at a single level, scholars in this tradition have also recognized that there are several conceptual levels involved in any governance system. At an operational level, individuals engage in a wide diversity of activities directly affecting the world, such as the transformation of raw materials into finished goods. A set of operational rules provides structure for the day-to-day decisions made by government officials and citizens interacting in a wide diversity of operational situations (teachers in a classroom with students; welfare workers processing the applications of those seeking welfare benefits; police giving a ticket to a speeding driver). These operational rules are the result of decisions made in a collective-choice arena. The structure of that collective-choice arena is itself affected by a set of collective-choice rules that specify who is eligible to make policy decisions, what aggregation rule will be used in making these decisions, and how information and payoffs will be distributed in these processes. At a still different conceptual level, collective-choice rules are the outcome of decisions made in constitutional arenas structured by constitutional rules [47, 49, 50]. Contrary to the common presumption that constitutional rules are made once and only at a national level, the constitution of any organized structure – ranging from the household all the way to international regimes – may be updated by interpretation or self-conscious choice relatively frequently. Constitutional rules change more slowly than collective-choice rules, which, in turn, change more slowly than operational rules.

5 Kenneth Shepsle [38, 39] has shown how diverse kinds of institutional rules – including the allocation of particular types of decisions to committees within a legislative body – do lead to equilibria that can be thought of as institutionally induced equilibria.

6 The term "social dilemma" refers to an extremely large number of settings in which individuals make independent choices in an interdependent situation with at least one other person and in which individual incentives lead to suboptimal outcomes from the perspective of the group [40, 41]. Such situations are dilemmas because there is at least one outcome that yields higher returns for all participants, yet rational participants making independent choices are predicted not to achieve it. Thus, there is a conflict between individual rationality and optimal outcomes for a group.
Rules that are genuinely constitutional in nature may be contained in any of a wide diversity of documents that do not have the name “constitution” attached to them. The constitution of many local units of government is embedded in diverse kinds of state laws. Similarly, collective-choice decisions may be made by a diversity of public units, such as city and county councils, local and state courts, and the representative bodies of special authorities, as well as by a variety of private organizations that frequently participate actively in local public economies – particularly in the provision of local social services. Operational choices are made by citizens and by public officials carrying out the policies made by diverse collective-choice arrangements in both public and private organizations. In order to understand the structure, processes, and outcomes of complex polycentric governance systems in a federal system, one needs to understand the conceptual levels of decision making ranging from constitutional choice, through collective choice, to operational choices. The relationship of these conceptual and spatial levels is illustrated in Table 5.1, where the conceptual levels are shown as the columns of a matrix while the spatial levels are shown as the rows. The particular focus on
operational activities in this table relates to the use of land and forest resources – but almost any other type of common-pool resource (CPR) or public good could be used instead. Given the importance of international institutions in this realm of activities, as well as the decisions made by households, the geographic domains are arrayed at five levels. This, of course, is an oversimplified view, as there may be several geographic domains covered by community governance units as well as several at a regional level.

Table 5.1: The relationships of analytical levels of human choice and geographic domains

Spatial level: International
- Constitutional-choice level: International treaties and charters and their interpretation
- Collective-choice level: Policy making by international agencies and multinational firms
- Operational-choice level: Managing and supervising projects funded by agencies

Spatial level: National
- Constitutional-choice level: National constitutions and their interpretation, as well as the rules used by national legislatures and courts to organize their internal decision-making procedures
- Collective-choice level: Policy making by national legislatures, executives, courts, commercial firms (who engage in interstate commerce), and NGOs
- Operational-choice level: Buying and selling land and forest products; managing public property; building infrastructure; providing services; monitoring and sanctioning

Spatial level: Regional
- Constitutional-choice level: State or provincial constitutions and charters of interstate bodies
- Collective-choice level: Policy making by state or provincial legislatures, courts, executives, and commercial firms and NGOs with a regional focus
- Operational-choice level: Buying and selling land and forest products; managing public property; building infrastructure; providing services; monitoring and sanctioning

Spatial level: Community
- Constitutional-choice level: County, city, or village charters or organic state legislation
- Collective-choice level: Policy making by county, city, or village authorities and local private firms and NGOs
- Operational-choice level: Buying and selling land and forest products; managing public property; building infrastructure; providing services; monitoring and sanctioning

Spatial level: Household
- Constitutional-choice level: Marriage contract embedded in a shared understanding of who is in a family and what the responsibilities and duties of members are
- Collective-choice level: Policies made by different members of a family responsible for a sphere of action
- Operational-choice level: Buying and selling land and forest products; managing public property; building infrastructure; providing services; monitoring and sanctioning
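The nesting of conceptual levels summarized in Table 5.1 can be sketched as a small data structure (a schematic illustration with invented names and rules, not a formalization from the source): constitutional rules constitute a collective-choice arena, and the arena's decisions become the operational rules governing day-to-day action.

```python
# Hypothetical sketch of nested rule levels: constitutional rules specify
# who decides and how; collective-choice decisions produce the operational
# rules that structure day-to-day activities.

from dataclasses import dataclass, field

@dataclass
class OperationalRule:
    text: str  # e.g., "permit required to harvest timber"

@dataclass
class CollectiveChoiceArena:
    eligible_deciders: list   # who may make policy decisions
    aggregation_rule: str     # e.g., "majority vote"
    operational_rules: list = field(default_factory=list)

    def adopt(self, rule_text):
        # A collective-choice decision yields an operational rule.
        self.operational_rules.append(OperationalRule(rule_text))

@dataclass
class ConstitutionalRules:
    charter: str  # e.g., "village charter"

    def constitute(self, deciders, aggregation_rule):
        # Constitutional rules structure the collective-choice arena.
        return CollectiveChoiceArena(deciders, aggregation_rule)

charter = ConstitutionalRules("village charter")
council = charter.constitute(["council members"], "majority vote")
council.adopt("permit required to harvest timber")
print(len(council.operational_rules))  # 1
```

The point of the sketch is only the direction of constraint: changing the charter changes who sits in the arena and how they decide, which in turn changes which operational rules can be adopted.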
One can well expect different types of political behavior as one moves across the rows or columns of this matrix. Paul Peterson [51], for example, argues that because local governments are in competition with one another, they pursue developmental and allocative policies rather than redistributive policies. If they pursue redistributive policies too vigorously, both corporations and private citizens will move to other local jurisdictions that do not tax wealthier taxpayers for services delivered primarily to poorer residents. This suggests that redistributive policies will be pursued more often and more successfully at the national level.
Similar phenomena have evolved during the past two decades in regard to various kinds of environmental policies. Environmentalists seek to engage some policy questions at a strictly local level, some at a regional or national level, and still others within international regimes. At the international level, they may gain considerable public attention, but end up with written agreements that are poorly enforced. At a local or regional level, they may achieve a large number of quite different, but more enforceable agreements. Trying to understand the impact of dealing with diverse “global change phenomena” at diverse levels of organization will be one of the central tasks of institutional theorists studying global change processes.
Scale, Social Science Data Collection, Representation and Analysis

Social science data are most frequently collected at one of the following levels: individual, household, neighborhood, urban jurisdiction (e.g., cities or counties), larger political jurisdictions (e.g., states or regions of a country), or nation. Recently, more studies have involved data collection for more than a single country and more than a single time period. The levels at which data are most frequently collected and aggregated may not match the research question at hand. This is particularly the case with secondary data such as census data and national summary data. For many purposes where the process under study does not conform to the levels described above, data must be scaled up or down, which introduces error into the analysis. Further, the operational processes in a smaller geographic domain may be simultaneously affected by several levels of analytical processes in that same domain as well as in larger domains (Table 5.1). And, as social scientists begin to address policy issues related to ecological processes, the problems of aggregating data to fit the processes under study become ever more important. Earlier methods of data collection and analysis frequently are not sufficient for the major environmental questions being addressed currently.

Several recent studies conducted in Indiana as well as in Nepal [52, 53] have shown the usefulness of creating institutional landscapes that conform to the governance and management units of those responsible for forest resources in a particular geographic region. To understand forest change over time, for example, it is necessary to understand not only the legal rules that affect forests owned by national as well as state governments [54], but also the internal policies adopted by a forest owner (whether a government or an individual person) toward specific stands of forests.
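The error introduced when secondary data must be rescaled to units that do not match the process under study can be illustrated with a simple areal-weighting example (all zones, counts, and survey figures below are hypothetical):

```python
# Hypothetical illustration: rescaling data between mismatched zones
# introduces error. Source zones (e.g., census tracts) hold known counts;
# a target zone (e.g., a watershed) overlaps each of them only partially.
# Simple areal weighting assumes people are spread uniformly within each
# source zone -- an assumption that is rarely true.

source_zones = {
    # name: (population, fraction_of_zone_area_inside_target)
    "tract_A": (5000, 0.50),
    "tract_B": (1000, 0.25),
}

# Areal-weighting estimate: allocate population in proportion to overlap.
estimate = sum(pop * frac for pop, frac in source_zones.values())

# Suppose a (hypothetical) field survey shows tract_A's residents cluster
# inside the overlap while tract_B's cluster outside it:
truth = 4000 + 100

print(f"areal-weighting estimate: {estimate:.0f}")  # 5000*0.5 + 1000*0.25 = 2750
print(f"surveyed truth:           {truth}")         # 4100
print(f"error introduced:         {truth - estimate:.0f}")  # 1350
```

The uniform-density assumption is exactly the kind of error that enters silently whenever census-style data are scaled up or down to fit an ecological or administrative unit.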
Rarely do forests conform to any of the levels identified above. Most state and Federal forests in the U.S. cross county borders and frequently include portions of several cities. In his study of state and Federal forests, Schweik [55] identified new geographic units that represented the institutional landscapes relevant to forest property managers' operational decision-making and activities (as a result of a study of the collective-choice and constitutional levels of choice
affecting these operational-level activities). He was then able to identify the institutional incentives these managers faced and to map the relevant geographic domains of diverse forest-stand policies and how these changed over time. Schweik then used Spectral Mixture Analysis to convert the raw digital numbers that MSS images provide to at-surface reflectance values for three time periods. This enabled him to trace changes in management practices (e.g., opening areas for recreation, changing timber harvesting practices, restricting all harvesting activities) as measured by changes in the reflectance values. By conducting this kind of multi-scale analysis (from the pixel, to a stand, to a forest owner, to a region), Schweik was able to show that many of the forest stands owned by both state and national governments had shown substantial patterns of regrowth over the preceding twenty years, but that the difference in collective-choice rules governing the two types of government-owned forests could be detected in the spectra. The stronger collective-choice mandate facing state foresters to generate income from state land can be detected when comparing the spectra from federal and state forests over time.

Recent studies using dynamic modeling techniques have also enabled scholars to address problems of spatial misperceptions as they affect public policy. Wilson et al. [56] examine the domain of regulatory actions related to inshore fisheries and ask whether the spatial extent of regulation is appropriate given the spatial differentiation in the ecological processes affecting fishery dynamics. The presumption underlying much contemporary policy is that the domain of regulation should be as large as possible for any resource where there is some movement among local ecological niches.
In this view, all members of the same species are part of a panmictic population, and harvesting practices adopted in one location will eventually affect and be affected by practices adopted in other locations. If the population is indeed panmictic, then regulation at the largest level is appropriate. Gilpin [57] and others have argued, however, that many fisheries are characterized by metapopulations in which local populations are relatively discrete. When a species is appropriately characterized as a metapopulation, a locally extinct population may not be recolonized by fish from elsewhere, and regulation that does not take smaller-scale processes into account may lead to an unintended collapse of key segments of the larger population. By using a series of dynamic models, Wilson et al. [56] are able to identify when having regulatory regimes at a smaller level (complemented by more limited regulation at a larger level) leads to greater sustainability of a fishery. In particular, the level of variability that occurs within and across sub-systems affects the likelihood that a regulatory system organized at too large a scale will lead to extinctions of local populations and a consequent overall reduction in the sustainability of the fishery. Entwisle et al. [58] integrated community-level sociodemographic data and remotely sensed imagery to explore the relationship between demographic factors (fertility, migration) and the rate of deforestation in Northeast Thailand. Deforestation in Northeast Thailand is the product of
household-level decisions made within the context of community-level institutions, such as rules imposed on a community by a village headman and a group of village elders.7 To enable this linkage, the social unit of analysis, the community, was linked to the landscape by creating a spatial partition around each community representing the area affected by its socioeconomic composition. While communities did have distinct administrative boundaries, community-level land tenure patterns did not coincide with these boundaries, making the administrative boundaries alone inadequate to capture the spatial extent affected by households in a particular village. In an extension of this work, Walsh et al. [4] explore how the relationship between demographic factors and forest cover changes as a function of scale, and specifically of the cell size used for spatial data representation. Using a set of different cell resolutions and tests of significance, they found that the statistical results (e.g., for sex ratio and landcover composition) changed as a function of data aggregation and the cell resolution used to represent the data. In other words, the scale at which data are collected and represented affects what relationships are found between variables in subsequent analysis. This study is one of many that demonstrate the importance of not relying on a single scale of analysis. Perhaps even more fundamentally, a researcher must determine whether a specific level of analysis is of relatively little importance in a social-biophysical system. For example, in some areas community-level institutions have relatively little effect on the way land is managed, and household dynamics are far more important. In such cases, a model of landcover change may appropriately exclude a community-level component while still adequately capturing the key social-biophysical interactions.
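The sensitivity of statistical results to the resolution of aggregation, of the kind Walsh et al. [4] report, can be illustrated with a toy example (all values invented): within each village the fine-resolution relationship between two variables is negative, yet it is positive when cells are aggregated to village means.

```python
# Hypothetical numbers showing how aggregation resolution can reverse a
# statistical relationship: within each village the relationship between
# a demographic variable (x) and forest cover (y) is negative, but the
# relationship between village means is positive.

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

villages = {
    "village_1": ([1, 2, 3], [10, 9, 8]),   # fine-resolution (x, y) cells
    "village_2": ([6, 7, 8], [15, 14, 13]),
}

# Fine resolution: the within-village relationship is negative.
for name, (xs, ys) in villages.items():
    print(name, corr(xs, ys))               # -1.0 in both villages

# Coarse resolution: aggregate each village to its mean cell values.
mean_x = [sum(xs) / len(xs) for xs, _ in villages.values()]
mean_y = [sum(ys) / len(ys) for _, ys in villages.values()]
print("aggregated", corr(mean_x, mean_y))   # 1.0 -- the sign flips
```

The same observations thus support opposite conclusions depending on the level at which they are represented, which is why analyses restricted to a single resolution can mislead.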
The difficulty lies in determining the relative importance of different levels prior to data collection. Scientific research at different scales or levels of analysis invariably yields different findings [4]. This disparity is due in part to the scale dependence of certain relationships, but also to the availability and representation of data at different scales. For example, Wood and Skole [59] completed a large study of deforestation in the Brazilian Amazon to examine the factors related to the rate of deforestation. This research relied on regional-level census data and remotely sensed imagery and found that population density was a major factor related to deforestation rates. Such regional-level analysis is critical in determining the rate of deforestation over a broad spatial extent. However, the findings relating social variables to deforestation are limited to the variables available in the census data. Household-level analyses of the Brazilian Amazon show a more complex set of relationships between socio-economic and biophysical factors and deforestation [60, 61, 62]. For example, access to credit, wage labor availability, distance to roads, topography, and soil characteristics have all been shown to be important factors at the household level of analysis. These more complex relationships are apparent because a household-level survey was used to address the specific question of deforestation, so the distinction between household- and regional-level analysis is largely a question of data availability. Yet researchers developing IA models need data to calibrate and validate their models. Furthermore, IA models typically operate at a single scale of analysis. Data availability issues might lead modelers to develop models at specific scales of analysis not because that is the proper scale at which the system should be modeled but because that is the scale at which validation data are available.

7 The study area of Nang Rong is characterized by a nuclear village settlement pattern in which households are aggregated in a common area and land holdings are dispersed around the village. The administrative areas around a village comprise private landholdings and community land whose management is controlled by village headmen and village elders.
Social Science Methods Addressing Scale

Much of the progress made toward understanding the nature of scale has been made in the physical sciences [3]. However, the methods developed in fields such as ecology and hydrology do not necessarily address the particular problems that scale effects introduce in social science research. A variety of methods well suited to addressing scale questions related to social data have been developed in fields such as geography, epidemiology, and sociology.

Contextual analysis

One of the interesting scale-related questions that political scientists face is whether there are "neighborhood" effects on individual political behavior. For example, do individuals who start out with a Democratic party identification continue to vote for Democratic Party candidates in all elections over time when they live in a neighborhood that is predominantly Republican, as contrasted with one that is predominantly Democratic? In other words, what is the effect of the neighborhood-level context on the voting behavior of individuals with particular political orientations? This question has been addressed by the development of a sophisticated form of data analysis referred to as contextual analysis [63, 64]. Recently, scholars interested in educational performance have used contextual analysis to address questions related to the impact of classroom composition on individual student performance. Again, the question is whether students who come into a classroom with an initial score on a standardized test progress more rapidly or more slowly depending on the test scores of others in the classroom or on other individual attributes of students aggregated up to the classroom level. In all forms of contextual analysis, the hypothesis is that
the aggregation of individual characteristics that make up a relevant group affects the impact of individual characteristics on individual behavior.

Multi-level modeling

Socio-economic data collected at the individual, household, and community levels are in turn aggregated to regional and then national levels of data representation. Scientists have recognized the importance of relationships that operate only at specific scales. For example, land management decisions are commonly made at the household level in the context of a regional economy, and individual migration decisions are made in the context of a household and a region. The importance of these multiple levels has been addressed methodologically using multi-level modeling, in which variables from multiple levels are used in empirical analyses. Much of this multi-level modeling research grew out of the health sciences; in particular, epidemiologists have used multi-level modeling to examine both household and neighborhood characteristics as risk factors.

Hierarchy theory

One theory particularly relevant to scaling both social and biophysical data is hierarchy theory [65, 66]. Its central idea is that understanding any complex system depends on understanding the constraints at higher and lower levels of spatial-temporal resolution. The levels immediately above and below the referent level provide environmental constraints and produce a constraint 'envelope' within which the process or phenomenon must remain [67, 68]. For example, households are a common unit of social data analysis and, in many environments, are the level at which land management decisions are made. Community-level institutions and characteristics provide a context within which household decisions are made, as with the example from Northeast Thailand [4, 58].
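The constraint-envelope idea can be sketched in a few lines (a schematic illustration with invented numbers, not a model from the source): a household's land-clearing decision is bounded above by a community-level rule and below by the household's own subsistence needs.

```python
# Hypothetical sketch of a 'constraint envelope': the household (referent
# level) chooses how much land to clear, but the feasible choice is
# bounded above by a community-level rule and below by subsistence needs.

def feasible_clearing(desired_ha, subsistence_min_ha, community_cap_ha):
    """Clip a household's desired clearing to the constraint envelope."""
    return max(subsistence_min_ha, min(desired_ha, community_cap_ha))

# Invented numbers: community institutions cap clearing at 4 ha;
# subsistence requires at least 1 ha.
print(feasible_clearing(6.0, 1.0, 4.0))  # 4.0 -- capped by the community rule
print(feasible_clearing(0.5, 1.0, 4.0))  # 1.0 -- raised to subsistence need
print(feasible_clearing(2.5, 1.0, 4.0))  # 2.5 -- inside the envelope
```

However simple, the sketch captures the structure of the theory: observed behavior at the referent level varies freely only within bounds set by the levels above and below it.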
The difficulty in applying this theory is that the researcher must first decide what the bounding levels and constraints are, something difficult to do without multi-scale data. Nevertheless, hierarchy theory comes closest to providing a conceptual framework within which scale issues can be explicitly addressed regarding the spatial representation of social and biophysical processes and the interactions between them.

Modifiable areal unit problem

A common method of spatial data integration is the simple overlay of different polygonal units representing homogeneous areas for particular variables. Overlaying polygonal units from different sources (e.g., census tract polygons and watersheds) often creates polygon intersections that can change the nature of the spatial data representation. These changes occur because of the somewhat arbitrary method of delineating polygonal units. This problem, referred to in geography as the Modifiable Areal Unit Problem
[69, 70], has been well documented, and a variety of researchers have suggested solutions to it (see for example Green and Flowerdew [71]). However, a universal solution has not been forthcoming, and the modifiable areal unit problem addresses only one particular manifestation of scale effects.

Scale, Social Science and Integrated Assessment Modeling
A central problem in integrated assessment modeling, and in the calibration and validation of IA models, is data availability. The lack of longitudinal and cross-scale social data precludes robust time-series analysis, hindering examination of the dynamic nature of social and biophysical processes. For example, county or state level price data are often available in rich time series and at multiple temporal resolutions (e.g., daily, monthly, yearly), but demographic data are more commonly available at much coarser temporal resolutions (e.g., five-year or decadal intervals). Some might argue that demographic indicators are less variable than economic indicators and that the coarser collection intervals are therefore justified. However, while fertility and mortality rates are slow to change, very dramatic changes in in-migration and out-migration rates can occur within very short periods of time. These disparities between datasets from different sources are major obstacles to model calibration and validation, and no solution to this problem is evident. A core issue related to the scale dependence of social data is the need to reconcile the difference between social units of observation and spatial units of analysis. GIS techniques provide a variety of methods for transforming data from one spatial representation to another, allowing these different units to be reconciled. To a large extent, the pattern of land settlement determines what data transformations are necessary to make these social-spatial linkages.
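The sensitivity of results to the choice of areal units can be made concrete with synthetic data. In this deliberately constructed example (all numbers are invented), the relationship between two variables is negative at the individual level but strongly positive once the same observations are averaged over zones, the sign reversal familiar from the MAUP and ecological-fallacy literature cited above.

```python
import numpy as np

# Synthetic point data with a zone-level component that dominates aggregates.
rng = np.random.default_rng(1)
n_zones, per_zone = 100, 100
u = rng.normal(size=n_zones)                       # shared zone-level component
e = rng.normal(scale=2.0, size=n_zones * per_zone) # individual-level variation
zone = np.repeat(np.arange(n_zones), per_zone)

x = u[zone] + e                                    # e.g., a household income proxy
y = 2.0 * u[zone] - e + rng.normal(size=x.size)    # e.g., land cleared

r_individual = np.corrcoef(x, y)[0, 1]             # negative: driven by e

# Aggregate to zonal means, as an overlay or census tabulation would.
x_bar = np.array([x[zone == z].mean() for z in range(n_zones)])
y_bar = np.array([y[zone == z].mean() for z in range(n_zones)])
r_zonal = np.corrcoef(x_bar, y_bar)[0, 1]          # strongly positive: driven by u
```

The individual-level noise averages out within zones, so the zonal correlation reflects only the zone-level component; neither correlation is "wrong", but each answers a question at a different scale.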
For example, in Altamira, Brazil, parcels are organized in the widely cited fishbone pattern. Parcels are of roughly uniform size (500 × 2000 m) and, with the exception of recent isolated instances of land consolidation, each parcel is allocated to a single household. This spatial arrangement lends itself to a parcel-level analysis, as a one-to-one linkage can be made between the social unit of analysis and a spatial unit of analysis – the parcel. A similar arrangement exists in many areas of the Midwest United States. Here land was surveyed into parcels of regular dimensions and allocated to individual landholders in the early 1800s. In contrast to Altamira, there has been a high degree of parcel fragmentation as households split parcels and land is converted from agricultural to residential uses. It is still possible to make a one-to-one linkage between households and a discrete spatial unit of observation, but the ability to conduct longitudinal or multi-temporal analysis is complicated by the fragmentation of parcels over time. In contrast to Altamira and the Midwest United States, Northeast Thailand presents a very different pattern of land settlement that dramatically affects
SCALING IN INTEGRATED ASSESSMENT 99
the feasibility of making a one-to-one social-spatial linkage. In Buriram Province on the Korat Plateau of Northeast Thailand, villages are organized into a nuclear pattern of land settlement. Households are concentrated into a central area and landholdings are distributed around this central village area. Complicating the ability to link households to discrete spatial partitions on the landscape is the fact that households typically have several landholdings distributed in different areas around the village. In the absence of digital or hardcopy maps linking landholdings to landholders, the effort necessary to spatially reference the landholdings of all the households for even a single village is tremendous. This type of linkage would allow researchers to relate household or individual level characteristics to outcomes on the landscape with a one-to-one relationship between the social unit of analysis (household) and the spatial unit of analysis (parcel). In terms of policy prescriptions it is important to understand the impact of policies at the household level, because different households may be affected differently by the same policy. But the resources necessary to do this for even a small number of villages make such a household-level linkage infeasible for large spatial extents. In this situation data transformations may be used to scale up social data from the household to the community level. For example, radial buffers can be created around communities, producing a spatial partition within which community-level characteristics can be linked to biophysical data characterizing that partition [58]. Community-level boundaries can be used to partition the landscape, acknowledging certain inconsistencies in these boundaries such as the overlap between adjacent communities. This scaling-up has two major effects. First, some variables do not lend themselves to aggregation.
The mean age of males and females can easily be computed, but variables such as ethnicity or religious affiliation are more difficult to scale up. Second, this scaling-up necessarily introduces some heterogeneity within the social unit. In some cases this heterogeneity is minimal, but in other cases it can have dramatic effects on subsequent analysis. So what developments in modeling might allow researchers to produce integrated assessment models that are better suited to crossing scales? Many models exist which do one or two of the following things well: 1) incorporate spatial interactions [56, 72], 2) incorporate dynamic relationships [72, 73], 3) model the human decision-making process [74, 75]. The primary challenge facing researchers now is to develop spatially explicit models that elegantly handle dynamic relationships and human decision making [76]. One type of modeling that shows particular promise is agent-based modeling. Agent-based models explicitly allow interactions between model actors to be represented in a dynamic framework [77]. For example, an agent-based model examining landcover change can be populated with the following types of actors: 1) residential small-holders, 2) large-holder agriculturalists, 3) land developers. Interactions within and between these agent groups allow for a more realistic environment within
which decision-making can be modeled. Furthermore, agent-based approaches present a means whereby complex social interactions can be explored, such as feedbacks arising from the transfer of information between agriculturalists or equilibrium states related to crop productivity and inputs. Currently, agent-based models that examine integrated systems are lacking. However, models exist that are approaching this functionality (for example the FLORES model [74] and improvements to the Patuxent ecosystem model [72, 78]), and the linkage between agent-based and spatially explicit approaches shows particular promise for IA models. This linkage involves the reconciliation of individual-based models [79] with large-scale ecosystem approaches [78]. Such a reconciliation is at the core of scale issues in social science and IA modeling.
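To make the agent-based idea concrete, a toy sketch follows. It is not a reproduction of FLORES or the Patuxent model; the three agent types, their clearing rates, and the grid are arbitrary placeholders, and a real model would add spatial interaction, feedbacks, and richer decision rules.

```python
import numpy as np

# Minimal agent-based land-cover sketch: heterogeneous agents act on a shared
# landscape. All agent types and parameters are hypothetical.
rng = np.random.default_rng(7)
SIZE = 50
FOREST, CROP, URBAN = 0, 1, 2
land = np.zeros((SIZE, SIZE), dtype=int)  # start fully forested

class Agent:
    def __init__(self, kind, clears_per_year, target):
        self.kind = kind                  # e.g., 'smallholder'
        self.clears_per_year = clears_per_year
        self.target = target              # converts forest to CROP or URBAN

    def step(self, land):
        # Convert up to clears_per_year randomly chosen forest cells.
        forest_cells = np.argwhere(land == FOREST)
        if len(forest_cells) == 0:
            return
        n = min(self.clears_per_year, len(forest_cells))
        picks = forest_cells[rng.choice(len(forest_cells), size=n, replace=False)]
        land[picks[:, 0], picks[:, 1]] = self.target

agents = ([Agent('smallholder', 1, CROP) for _ in range(30)]
          + [Agent('large_holder', 10, CROP) for _ in range(3)]
          + [Agent('developer', 5, URBAN) for _ in range(2)])

for year in range(10):
    for agent in agents:
        agent.step(land)

forest_share = (land == FOREST).mean()
```

Even this skeleton shows the appeal of the approach: heterogeneous actors with different decision rules act on a shared landscape, and the aggregate land-cover outcome emerges from their combined behavior rather than being imposed at the landscape scale.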
Conclusion
An integrated assessment model should ideally incorporate data at multiple scales for calibration and validation. It is possible to use a single scale to observe the impact of a single relationship (e.g., prices and land use, topography and deforestation), but other factors operating at other scales are likely to be just as important. It is well established that certain phenomena are observable at some scales while unobservable at others. Beyond this, the nature of relationships changes with scale, so that even if a relationship is observable at multiple scales, its magnitude or strength may differ across scales. A multi-scale approach will provide a more complete understanding of a system than an analysis focusing on a single scale, but researchers must still determine the individual scales composing this multi-scale approach. Ideally, analysis would proceed not at a discrete set of scales but along a scale continuum; data availability clearly rules this out. The realities of research and modeling dictate that a multi-scale analysis is not always feasible. In these situations it is the researcher's task to recognize when scale-dependent relationships may be present, through an understanding of the social and biophysical systems under study. If a particular relationship is not evident at one scale, the researcher may explore other scales if there is some confidence that the relationship exists, or has a different characterization, at other scales. Unfortunately this does not lend itself to a rapid appraisal of systems and the impact of scale dependence within them. What is clear from an examination of the social science literature is that there is no consensus on how to deal with scale issues in the social sciences, and by extension no evident answers in terms of integrated assessment modeling.
What the existing literature does provide is evidence of when scale issues are important, of how important they are in different situations, and of methods (albeit not universally accepted) for dealing with scale dependence. While a consensus surrounding scale effects is missing, new developments in
modeling present opportunities to explore spatially explicit and complex IA models that cross from the individual to the ecosystem scales. The spatially explicit nature of these new models will allow scale relationships in complex social-biophysical systems to be more easily explored.
References
1. Quattrochi, D. A. and M. F. Goodchild, 1997. Scale in Remote Sensing and GIS, Boca Raton: CRC Lewis Publishers.
2. Bian, L., 1997. "Multiscale Nature of Spatial Data in Scaling Up Environmental Models." In: D. A. Quattrochi and M. F. Goodchild (eds.). Scale in Remote Sensing and GIS. Boca Raton, FL: Lewis Publishers.
3. Gibson, C. C., E. Ostrom, and T. K. Ahn, 2000. "The Concept of Scale and the Human Dimensions of Global Change: A Survey." Ecological Economics, 32(2): 217–239.
4. Walsh, S. J., T. P. Evans, W. Welsh, B. Entwisle, and R. Rindfuss, 1999. "Scale-Dependent Relationships between Population and Environment in Northeast Thailand." Photogrammetric Engineering and Remote Sensing, 65(1): 97–105.
5. Evans, T. P., G. G. Green, and L. Carlson, 2000. "Multi-scale Analysis of Landcover Composition and Landscape Management of Public and Private Lands in Indiana." In: A. Millington, S. J. Walsh, and P. Osborne (eds.). GIS and Remote Sensing Applications in Biogeography and Ecology. Kluwer Press.
6. Curran, P. J., G. M. Foody, and P. R. Van Gardingen, 1997. "Scaling-up." In: P. R. Van Gardingen, G. M. Foody, and P. J. Curran (eds.). Scaling-up: From Cell to Landscape. Cambridge: Cambridge University Press.
7. Fox, J., 1993. "Forest Resources in a Nepali Village in 1980 and 1990: The Positive Influence of Population Growth." Mountain Research and Development, 13(1): 89–98.
8. Fairhead, J. and M. Leach, 1996. Misreading the African Landscape: Society and Ecology in a Forest-Savanna Mosaic. Cambridge: Cambridge University Press.
9. Boserup, E., 1966. The Conditions of Agricultural Growth. Chicago: Aldine Publishing.
10. Olson, M., 1965. The Logic of Collective Action: Public Goods and the Theory of Groups, Cambridge, MA: Harvard University Press.
11. Meentemeyer, V., 1989. "Geographical perspectives of space, time, and scale." Landscape Ecology, 3(3/4): 163–173.
12. Meyer, W. B., D. Gregory, B. L. Turner II, and P. F. McDowell, 1992.
"The Local-Global Continuum." In: R. Abler, M. G. Marcus, and J. M. Olson (eds.). Geography's Inner Worlds. New Brunswick, NJ: Rutgers University Press.
13. Jammer, M., 1954. Concepts of Space, Cambridge, MA: Harvard University Press.
14. Harvey, D., 1969. Explanation in Geography, New York: St. Martin's Press.
15. Forer, P., 1978. "A place for plastic space?" Progress in Human Geography, 2(2): 230–267.
16. Dasgupta, P., 1997. Notes on Slow and Fast Variables in Economics. Notes for the Resilience Network, University of Florida, Department of Zoology, Gainesville.
17. Samuelson, P. A., 1973. "A Diagrammatic Exposition of a Theory of Public Expenditure." Review of Economics and Statistics, 37: 360–366.
18. McConnell, C. R., 1969. Elementary Economics: Principles, Problems, and Policies, New York: McGraw-Hill.
19. Krugman, P., 1986. Industrial Organization and International Trade. Working Paper no. 1957. Cambridge, MA: National Bureau of Economic Research.
20. Daly, H. E., 1992. "Allocation, Distribution, and Scale: Toward an Economics that is Efficient, Just and Sustainable." Ecological Economics, 6: 185–193.
21. Foy, G. and H. Daly, 1992. "Allocation, Distribution and Scale as Determinants of Environmental Degradation: Case Studies of Haiti, El Salvador and Costa Rica." In: A. Markandya and J. Richardson (eds.). Environmental Economics. New York: St. Martin's Press: 297–315.
22. Reiner, T. A. and J. B. Parr, 1980. "A note on the dimensions of a National Settlement Pattern." Urban Studies, 17: 223–230.
23. Duncan, O. D., 1980. An Examination of the Problem of Optimum City-Size, New York: Arno Press.
24. Hansen, N., 1975. The Challenge of Urban Growth, Lexington, MA: D. C. Heath & Co.
25. Tilly, C., 1984. Big Structures, Large Processes, Huge Comparisons, New York: Russell Sage Foundation.
26. Coleman, J. S., 1990. Foundations of Social Theory, Cambridge, MA: Harvard University Press.
27. Weber, M., 1958. "The Protestant Ethic and the Spirit of Capitalism." In: H. H. Gerth and C. W. Mills (eds.). Essays in Sociology. Oxford: Oxford University Press: 168–178.
28. Dahl, R., 1989. Democracy and Its Critics, New Haven, CT: Yale University Press.
29. Sartori, G., 1987. The Theory of Democracy Revisited, Chatham, NJ: Chatham House Publishers.
30. Ostrom, V., 1991. The Meaning of American Federalism: Constituting a Self-Governing Society, San Francisco, CA: ICS Press.
31. Ostrom, V., 1997. The Meaning of Democracy and the Vulnerability of Democracies: A Response to Tocqueville's Challenge, Ann Arbor: University of Michigan Press.
32. Barber, B. R., 1992. "Jihad v. McWorld." Atlantic Monthly, 269: 59–65.
33. Arrow, K., 1951. Social Choice and Individual Values, New York: John Wiley & Sons.
34. Enelow, J. M., 1997. "Cycling and Majority Rule." In: D. C. Mueller (ed.). Perspectives on Public Choice. New York: Cambridge University Press: 149–162.
35. Plott, C., 1967. "A Notion of Equilibrium and its Possibility Under Majority Rule." American Economic Review, 57(4): 787–807.
36. McKelvey, R. D., 1976. "Intransitivities in Multidimensional Voting Models and Some Implications for Agenda Control." Journal of Economic Theory, 12: 472–482.
37. Schofield, N., 1978. "Instability of Simple Dynamic Games." Review of Economic Studies, 45: 575–594.
38. Shepsle, K. A., 1979a. "Institutional arrangements and equilibrium in multidimensional voting models." American Journal of Political Science, 23(1): 27–59.
39. Shepsle, K. A., 1979b. "The role of institutional structure in the creation of policy equilibrium." In: D. W. Rae and T. J. Eismeier (eds.). Public Policy and Public Choice. Sage Yearbooks in Politics and Public Policy, vol. 6. Beverly Hills, CA: Sage: 249–283.
40. Dawes, R. M., 1980. "Social dilemmas." Annual Review of Psychology, 31: 169–193.
41. Hardin, R., 1982. Collective Action, Baltimore, MD: Johns Hopkins University Press, 248 pp.
42. Ostrom, V., C. M. Tiebout, and R. Warren, 1961. "The Organization of government in metropolitan areas: a theoretical inquiry." American Political Science Review, 55(4): 831–842.
43. Ostrom, V. and E. Ostrom, 1977. "Public Goods and Public Choices." In: E. S. Savas (ed.). Alternatives for Delivering Public Services: Toward Improved Performance. Boulder, CO: Westview Press.
44. Ostrom, E., 2000. "The Danger of Self-Evident Truths." Political Science and Politics, 33(1): 33–44.
45. Parks, R. B. and R. J. Oakerson, 1993. "Comparative Metropolitan Organization: Service Production and Governance Structures in St. Louis (MO) and Allegheny County (PA)." Publius, 23(1): 19–39.
46. Oakerson, R. J., 1999. Governing Local Public Economies: Creating the Civic Metropolis, Oakland, CA: ICS Press.
47. McGinnis, M., 1999b. Polycentricity and Local Public Economies: Readings from the Workshop in Political Theory and Policy Analysis, Ann Arbor: University of Michigan Press.
48. McGinnis, M., 1999a. Polycentric Governance and Development. Ann Arbor: University of Michigan Press.
49. Kiser, L. L. and E. Ostrom, 1982. "The Three Worlds of Action: A Metatheoretical Synthesis of Institutional Approaches." In: E. Ostrom (ed.). Strategies of Political Inquiry. Beverly Hills, CA: Sage: 179–222.
50. Ostrom, E., R. Gardner, and J. M. Walker, 1994. Rules, Games, and Common-Pool Resources, Ann Arbor: University of Michigan Press.
51. Peterson, P., 1981. City Limits, Chicago: University of Chicago Press.
52. Schweik, C. M., K. R. Adhikari, and K. N. Pandit, 1997. "Land Cover Change and Forest Institutions: A Comparison of Two Sub-Basins in the Siwalik Hills of Nepal." Mountain Research and Development, 17(2): 99–116.
53. Schweik, C. M. and G. M. Green, 1999. "The Use of Spectral Mixture Analysis to Study Human Incentives, Actions, and Environmental Outcomes." Social Science Computer Review, 17(1): 40–56.
54. Koontz, T., 1997. "Difference Between State and Federal Public Forest Management: The Importance of Rules." Publius: The Journal of Federalism, 27(1): 15–37.
55. Schweik, C. M., 1998. The Spatial and Temporal Analysis of Forest Resources and Institutions. Bloomington, IN: Indiana University.
56. Wilson, J., B. Low, R. Costanza, and E. Ostrom, 1999. "Scale Misperceptions and the Spatial Dynamics of a Social-Ecological System." Ecological Economics, 31(2): 243–257.
57. Gilpin, M., 1996. "Metapopulations and wildlife conservation: Approaches to Modeling Spatial Structure." In: D. R. McCullough (ed.). Metapopulations and Wildlife Conservation. Washington, D.C.: Island Press: 11–27.
58. Entwisle, B., S. J. Walsh, R. Rindfuss, and A. Chamratrithirong, 1998. "Land-Use/Land-Cover and Population Dynamics, Nang Rong, Thailand." In: D. Liverman, E. F. Moran, R. R. Rindfuss, and P. C. Stern (eds.). People and Pixels. Washington, D.C.: National Academy Press: 121–144.
59. Wood, C. H. and D. Skole, 1998. "Linking Satellite, Census, and Survey Data to Study Deforestation in the Brazilian Amazon." In: D. Liverman, E. F. Moran, R. R. Rindfuss, and P. C. Stern (eds.). People and Pixels. National Academy Press: 70–93.
60. Moran, E. F. and E. Brondizio, 1998. "Land-Use change after deforestation in Amazonia." In: D. Liverman, E. F. Moran, R. R. Rindfuss, and P. C. Stern (eds.). People and Pixels. Washington, D.C.: National Academy Press: 94–120.
61. Brondizio, E., S. McCracken, E. F. Moran, A. Siqueira, D. Nelson, and C. Rodriguez-Pedraza, 1999. "The Colonist Footprint: Towards a conceptual framework of deforestation trajectories among small farmers in frontier Amazonia." In: C. Wood (ed.). Patterns and Processes of Land Use and Forest Change in the Amazon. Gainesville, FL: University of Florida Press.
62. Moran, E. F., E. Brondizio, and S. McCracken, 1999. "Trajectories of Land Use: Soils, Succession, and Crop Choice." In: Patterns and
Processes of Land Use and Forest Change in the Amazon, C. Wood (ed.), Gainesville, FL: University of Florida Press.
63. Boyd, L. H. and G. R. Iverson, 1979. Contextual Analysis: Concepts and Statistical Techniques. Belmont, CA: Wadsworth.
64. Bryk, A. S. and S. W. Raudenbush, 1992. Hierarchical Linear Models: Applications and Data Analysis Methods. Newbury Park, CA: Sage.
65. Simon, H. A., 1962. "The Architecture of Complexity." Proceedings of the American Philosophical Society, 106(6): 467–482.
66. Allen, T. F. H. and T. B. Starr, 1982. Hierarchy: Perspectives for Ecological Complexity, Chicago: University of Chicago Press.
67. O'Neill, R. V., A. R. Johnson, and A. W. King, 1989. "A hierarchical framework for the analysis of scale." Landscape Ecology, 3: 193–205.
68. Norton, B. G. and R. E. Ulanowicz, 1992. "Scale and biodiversity policy: a hierarchical approach." Ambio, 21(3): 244–249.
69. Openshaw, S., 1977. "A geographical solution to scale and aggregation problems in region-building, partitioning, and spatial modelling." Transactions of the Institute of British Geographers, 2: 459–472.
70. Openshaw, S. and P. J. Taylor, 1981. "The modifiable areal unit problem." In: N. Wrigley and R. J. Bennett (eds.). Quantitative Geography: A British View. London: Routledge.
71. Green, M. and R. Flowerdew, 1996. "New evidence on the modifiable areal unit problem." In: P. Longley and M. Batty (eds.). Spatial Analysis: Modelling in a GIS Environment. New York: John Wiley and Sons.
72. Voinov, A., R. Costanza, L. Wainger, R. Boumans, F. Villa, T. Maxwell, and H. Voinov, 1999. "Patuxent Landscape Model: Integrated Ecological Economic Modeling of a Watershed." Environmental Modelling & Software, 14(5): 473–491.
73. Voinov, A. and R. Costanza, 1999. "Landscape Modeling of Surface Water Flow: 2. Patuxent Case Study." Ecological Modelling, 119: 211–230.
74. Vanclay, J. K., 1998. "FLORES: for exploring land use options in forested landscapes." Agroforestry Forum, 9(1): 47–52.
75. Evans, T. P., A. Manire, F. DeCastro, E. Brondizio, and S. McCracken, 2001. "A dynamic model of household decision making and parcel level landcover change in the Eastern Amazon." Ecological Modelling, 143: 95–113.
76. Grove, M., C. Schweik, T. P. Evans, and G. Green, 2001. "Modeling Human-Environmental Dynamics." In: K. C. Clarke, B. E. Parks, and M. P. Crane (eds.). Geographic Information Systems and Environmental Modeling. Prentice-Hall.
77. Deadman, P., R. D. Brown, and H. R. Gimblett, 1993. "Modelling Rural Residential Settlement Patterns with Cellular Automata." Journal of Environmental Management, 37: 147–160.
78. Costanza, R., F. H. Sklar, and M. L. White, 1990. "Modelling coastal landscape dynamics." BioScience, 40: 91–97.
79. DeAngelis, D. L., and L. J. Gross, 1992. Individual-based Models and Approaches in Ecology: Populations, Communities, and Ecosystems, New York: Chapman & Hall.
6 Sustainability and Economics: a Matter of Scale?

CARLO C. JAEGER 1 AND RICHARD TOL 2
1 Department of Global Change and Social Systems, Potsdam Institute for Climate Impact Research, Potsdam, Germany
2 Centre for Marine and Climate Research, Hamburg University, Hamburg, Germany; Institute for Environmental Studies, Vrije Universiteit, Amsterdam, The Netherlands; Centre for Integrated Study of the Human Dimensions of Global Change, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Abstract Sustainability is a global issue concerning future generations, but steps towards sustainable development must also be taken at the spatial scales of regions and at the temporal scales of individual lives. Different scales matter in social networks and in cultural realities, too. The fact that relatively small regions can dominate global markets for products based on continuous innovation points to the accumulation of a specific social capital in these regions. This resource is a club good at the regional scale. Similar goods exist at national scales. Their development depends to a considerable extent on expectations that play a very different role in the short and in the long run. In the former, the efficient market hypothesis seems a reasonable approach. In the latter, very different approaches need to be developed. Understanding how processes at the regional, national and global level interact in the short and in the long run will be vital for a successful management of the transition towards a sustainable world economy.
Acknowledgements This paper owes much to careful comments by Tom Evans and to discussions in Harry’s club, an informal gathering of economists which originated at the EFIEA workshop on Uncertainty, held in Baden, Austria, in July 1999. The
108 SUSTAINABILITY AND ECONOMICS
Michael Otto Foundation for Environmental Protection provided financial support. The usual disclaimers apply.
Introduction
Over the last decades, it has become increasingly clear that the world economy is on an unsustainable path. Is a transition to a sustainable society possible in the 21st century? In other words: can our grandchildren live in a society that would no longer view global environmental disruption as a threat to its future, but as an experience of the past? The question is as urgent as it is difficult. A sustainability transition, if it happens at all, will be a complex process, involving many different scales – in spatial, temporal, and institutional terms. It will take a long-lasting process of integrated assessments to develop an awareness of where we stand in relation to this challenge, and where we can – and cannot – go. These assessments will need to keep the spatial, temporal and institutional scales of everyday human life in sight if they are to be practically meaningful. These are the scales where it makes a difference whether two people talk to each other from a distance or whether they are so close as to be able to touch one another. They are the scales at which it makes a difference whether we leave home for a weekend or for a two-week holiday, at which we make career choices, develop diet habits, go for a walk or take a nap. The sustainability transition cannot be grasped at the scales of everyday life only. Integrated assessments dealing with the sustainability transition need to take into account an "astronaut's perspective" as well. The diet habits of billions of people are geared to changes of the whole earth system, and so are their choices about which transport systems to use, where to live, etc. Switching between different scales of analysis can result in amazing journeys, however. A simple triangle suffices to illustrate what kind of surprises can arise.1 As Figure 6.1 shows, on a Euclidean plane, summing up the angles of a triangle yields a straight line, or an angle of 180°.
In the figure, lines c and c* are parallel, so that the angles β and β* are equal, as are the angles α and α*. This fact is hard to prove, however, and may rather be introduced as an axiom.2 Our experience with pencil and paper, or with rigid structures that we may lay out on the ground, shows that such axioms actually work in such settings – they lead to sound inferences and sound constructions, they help us to make accurate observations and to engage in successful actions.
1 The following argument builds on Putnam [1].
2 An alternative route would be to introduce as an axiom that the angles of any triangle sum up to 180° and to deduce the equality of angles α and α*.
Figure 6.1: Sum of angles in a Euclidean triangle (sides a, b, c with angles α, β, γ; line c* parallel to c, with angles α* and β*).
Now consider a triangle on a sphere. Here it is perfectly possible to construct triangles with three right angles. On the surface of planet earth, one may imagine a triangle with one angle at the North pole and the other two on the equator, the distance between the latter two being just one quarter of the equator. At the scale of human individuals living on the surface of the same earth, however, Euclidean geometry works perfectly well. Of course, one may be tempted to say that it works only as an approximation, but this misses the point. If a designer wants to cover the floor of a living-room with triangular tiles she cannot improve the design by making calculations with triangles whose angles sum up to a tiny little bit more than 180°. On the other hand, if trajectories of three satellites were computed on the basis of the assumption that they cannot all be orthogonal to each other, this would lead to nonsensical results. So far we have considered spatial metaphors, but the argument extends to temporal scales. When singers perform an opera, it is perfectly – not approximately – clear what it means for them to start a duet at the same time. When astronomers study two supernova explosions, they must come to terms with the fact that these explosions may be simultaneous if observed from one point in space and not simultaneous if observed from another one.3 Temporal scales are of additional interest when they relate to non-linear dynamics. Complex patterns may emerge and fade away at different scales, sometimes yielding stable structures at one temporal scale and chaotic dynamics at another one (see Christiansen and Parmentier [2], Sterman [3]).
3 This is the kind of reasoning which differentiates relativity theory from classical mechanics. It involves a further version of non-Euclidean geometry, now couched in four-dimensional "space".
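The pole-and-equator triangle described above is easy to verify numerically. The sketch below represents the three vertices as unit vectors and measures each vertex angle between the tangent directions of the two great-circle sides meeting there.

```python
import numpy as np

# Spherical triangle on the unit sphere: one vertex at the North pole,
# two on the equator, a quarter of the equator apart.
A = np.array([0.0, 0.0, 1.0])   # North pole
B = np.array([1.0, 0.0, 0.0])   # on the equator
C = np.array([0.0, 1.0, 0.0])   # a quarter of the equator away from B

def vertex_angle(P, Q, R):
    """Angle (degrees) at vertex P of spherical triangle PQR, unit vectors."""
    # Tangent direction at P toward Q: project Q onto the tangent plane at P.
    tq = Q - np.dot(P, Q) * P
    tr = R - np.dot(P, R) * P
    cos_angle = np.dot(tq, tr) / (np.linalg.norm(tq) * np.linalg.norm(tr))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

total = vertex_angle(A, B, C) + vertex_angle(B, C, A) + vertex_angle(C, A, B)
print(total)  # 270.0: three right angles, not the Euclidean 180
```

The excess over 180° (the "spherical excess") is proportional to the triangle's area, which is why the deviation is invisible at the scale of floor tiles and unavoidable at the scale of satellites.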
Scale issues are not limited to physical reality; they arise in mental structures, too. A domain of discourse may have a logical structure under which it is possible to assess for any single sentence whether it is true or false, but not for all sentences at once.4 Like a ruck in a carpet, the domain of uncertainty wanders around whenever new certainties are found. Even logical laws may hold at one scale of analysis and fail at another one.5 Spatial, temporal, and logical scales interact if one considers an integrated assessment incorporating a component of demographic change. At one scale a population can be represented by fairly simple rates of fertility, mortality and net migration. At another scale, the decision of an individual to (try to) have a child is a highly complex decision, and modeling this decision-making process is incredibly difficult. In some situations, it may be perfectly sufficient to understand the net result of these complex decisions at a global scale (e.g., through aggregation and projections of national census data). However, in other cases it is important to analyze this decision-making process at an individual level in order to understand the mechanisms triggered by different policies. Different laws may hold at different scales, and one can try to establish rules for the transition between these different domains.6 Even in everyday life, such transitions are widespread, as we sometimes realize when switching from one social setting, e.g., an empty bus, to another one, e.g., a crowded baseball stadium. Of course, sometimes huge scale changes are possible without inducing any change in relevant patterns, laws, etc.: it was the great triumph of Newtonian mechanics to show that stones and stars follow the same laws of motion. But sometimes moving from one scale to another comes close to traveling between different worlds. While traveling, we may encounter a twilight zone of ambiguity.
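The contrast between the two demographic scales mentioned above can be illustrated with a toy simulation (the rates and horizon are invented): an aggregate projection using simple net rates, and a microsimulation in which each individual's birth and death are random events.

```python
import numpy as np

# Hypothetical rates: 2% births, 1% deaths per year, over a 10-year horizon.
rng = np.random.default_rng(3)
N0, years = 100_000, 10
birth_rate, death_rate = 0.02, 0.01

# Macro scale: the population simply follows the net rate.
macro = N0 * (1 + birth_rate - death_rate) ** years

# Micro scale: each year, every individual independently gives birth or dies.
micro = N0
for _ in range(years):
    births = rng.binomial(micro, birth_rate)
    deaths = rng.binomial(micro, death_rate)
    micro += births - deaths

rel_diff = abs(micro - macro) / macro  # agreement within a fraction of a percent
```

For large populations the two agree closely, which is why aggregate rates often suffice; the micro level becomes indispensable when policies work through individual decision mechanisms rather than through the net rates themselves.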
But as the distinction between day and night is perfectly sound although no clear-cut boundary separates the two, so the distinction between small and large scales may often be perfectly sound even if a gradual transition between them is possible. And of course, more than two scaling levels are relevant in many domains of inquiry. Perhaps, little more can be said about scale issues in general. Rather, it seems necessary to consider such issues case by case. In economics, there are many instances of scaling issues – the micro-macro problem is one of the big challenges of our discipline, returns to scale one of its classical topics (see Gold [5]). We consider the latter in a brief overview of orthodox economic methodology and its core assumptions about returns to scale. Then, we will 4
4 Such patterns are known from quantum logic. There is no reason why similar structures should not be widespread in the fabric of human knowledge.
5 This led mathematical intuitionists to argue that the logical law of the excluded middle holds for finite sets, but not necessarily for infinite ones.
6 This is a major issue in geographical research, ranging from landscape ecology to remote sensing (see Wong and Amrhein [4]).
SCALING IN INTEGRATED ASSESSMENT 111
focus on two particular instances of scaling issues in economics that are especially relevant for the sustainability transition: the role of innovative regional economies on global markets and the role of expectations in the short and the long run.7 We will conclude with a few remarks about the relevance of scale issues for bringing about a sustainable world economy.
Returns to Scale

According to a famous definition, economics is the study of the allocation of scarce resources.8 Integrated assessment is multi-disciplinary, policy-relevant research that is typically carried out on complex environmental issues. A key problem with the environment is that what once was abundant now is scarce. Economics is thus one of the core disciplines of integrated assessment.

Economics is also a controversial discipline. This is partly because the discipline has taken a unique route in its quest for knowledge accumulation, and partly because other researchers do not always take enough time to understand economics, while economists are not always patient enough to explain their ways. In contrast to other social sciences, the defining methodological characteristic of orthodox economics is mathematical rigor.9 That is, an economist starts with a series of basic assumptions (treated as axioms, first principles, laws) from which higher-order characteristics are deduced. Rigor is assumed to be a prerequisite for true understanding of economic phenomena. In the early days, the basic assumptions were rather limited in number and scope. The predictive power was, therefore, rather limited as well. Over the years, however, the set of basic assumptions has been extended and refined, so that economists now have a clearer understanding of many observed phenomena in economies.

Few non-economists are aware of these advances of orthodox economics. In fact, many people have a rather outdated image of the discipline. Applied economists, the ones that enter into the discussion with other disciplines, often rely on older methods, assuming perfectly competitive markets, unboundedly rational actors, full employment of labor and other resources, and so on. This is sometimes a matter of convenience. More often, however, it is because it was shown (in
7 Both shed some light – or perhaps twilight – on the micro-macro problem.
8 “Economics is the science which studies human behaviour as a relationship between (given) ends and scarce means which have alternative uses.” [6: p. 16].
9 This is not to deny that other social science disciplines have produced gems of mathematical rigor. A case in point is the work in linguistics inspired by the correspondences between the idea of a grammar and the concept of a Turing machine (see Martin-Vide [7]). But the community of linguists – perhaps to their advantage – has always cultivated a methodological pluralism which includes historical, interpretive and other approaches.
112 SUSTAINABILITY AND ECONOMICS
some paper unknown outside the economic community) that, for this particular environmental application, the older, simpler, more convenient assumptions lead to the same conclusions as do the newer, more complex, more elaborate assumptions.

There is no “scale theory” in economics; in fact, scale is hardly treated as an issue in economic textbooks.10 At the same time, scale issues are pervasive, and labelled, treated and analysed in so many ways that a survey would be an enormous task. Standard economic models use production functions that have a property known as “constant returns to scale”. That is, the production function is such that if all inputs are multiplied by a factor λ, then output is multiplied by the same factor λ. If output increases by less than the factor λ, we speak of decreasing returns to scale. If output increases by more than the factor λ, we speak of increasing returns to scale.

The standard assumption of constant returns to scale has major implications for the influence of scale issues in economic models.11 One implication is that the structure of the economy is essentially scale-independent. This is quite convenient. It is often possible to use representative consumers and producers, because heterogeneity in size does not matter. It is possible to ignore stochasticity, because temporary size changes do not matter. Another implication of the assumption of constant returns to scale is that the modelled economy does not have a spatial structure. As size does not matter, agglomeration does not matter either. A constant returns to scale economy is an economy without cities. Finally, constant returns to scale imply that specialisation does not pay. Starting from the same initial conditions, every firm and every country produces the same, broad range of goods and services.

The world of decreasing returns to scale does not differ fundamentally from the constant returns to scale world.
In fact, the two go neatly together: the assumption of decreasing returns to scale explains why inputs that are in principle substitutable (e.g., capital and labour) are in practice observed together.
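These definitions can be made concrete with a small numeric sketch (Python; the Cobb-Douglas form and the exponents are our own illustrative choices, not taken from any model discussed here):

```python
import math

def cobb_douglas(K, L, a, b):
    """Cobb-Douglas production function f(K, L) = K^a * L^b."""
    return K**a * L**b

def returns_to_scale(f, K, L, lam=2.0):
    """Compare f(lam*K, lam*L) with lam*f(K, L) for some lam > 1."""
    scaled = f(lam * K, lam * L)
    proportional = lam * f(K, L)
    if math.isclose(scaled, proportional):
        return "constant"
    return "increasing" if scaled > proportional else "decreasing"

K, L = 4.0, 9.0
# exponents summing to one, less than one, and more than one:
print(returns_to_scale(lambda K, L: cobb_douglas(K, L, 0.3, 0.7), K, L))  # constant
print(returns_to_scale(lambda K, L: cobb_douglas(K, L, 0.3, 0.5), K, L))  # decreasing
print(returns_to_scale(lambda K, L: cobb_douglas(K, L, 0.6, 0.7), K, L))  # increasing
```

For a Cobb-Douglas function the verdict depends only on whether the exponents sum to one, fall short of one, or exceed one.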
10 One would find reference to “returns to scale”, which is a property of the production function rather than an issue of scale. The common assumption of “constant returns to scale” does have major scale implications, which are treated below.
11 At first sight, the economic notion of scale is one of speed: what varies is the amount of goods produced per time unit. Of course, this influences the stock of durable goods available. In chemistry, a similar distinction arises between fast and slow reactions with their effects on concentrations of different substances. In climate dynamics, atmospheric processes are fast in comparison with oceanic ones, again with implications for concentrations of various substances like greenhouse gases. In economics, greater production often goes along with a greater extent of the market, both in financial and in geographical terms. In this sense, the economic notion of scale is more than just a temporal one.
The main reason why standard economic models assume constant or decreasing returns to scale is rigor. The analytical and numerical tools available to the pioneers of standard economics did not allow them to explore the consequences of alternative assumptions, particularly that of increasing returns to scale. With progress in mathematical methods and computer power, less restrictive assumptions have been explored, and the exact role of the original assumptions has been considerably clarified.

The world implied by the assumption of increasing returns to scale is radically different. Increasing returns to scale invoke (often local) positive feedbacks in the economic system. This brings strong path dependence with it, as there are asymmetries between rates of growth and shrinkage. One ignores heterogeneity and stochasticity at one’s peril with increasing returns to scale, as size – and thus random variation in size – matters if growth rates depend on initial size. Agglomeration and specialisation effects can be modelled using the assumption of increasing returns to scale. A dynamic variant of increasing returns to scale is learning-by-doing: experience gained with production today leads to lower average production costs tomorrow.

The assumption of constant returns to scale goes hand in hand with the assumption of perfect competition. Increasing returns to scale, however, reduce the number of competitors and so lead to forms of imperfect competition. Vice versa, the profits generated in situations of imperfect competition may activate increasing returns to scale. If not, newcomers will enter the market until competition is perfect.

In sum, the assumed properties of the production function (known as “returns to scale”) have a profound effect on generic scale issues in economics. If the production function has constant or decreasing returns to scale, scale does not matter. The economy looks the same through binoculars and a magnifying glass.
On the other hand, if the production function has increasing returns to scale, scale matters in the economy. Standard macro-economic models – the type usually used for integrated assessment studies – assume constant or decreasing returns to scale. If one zooms in to smaller scales, research questions and methodologies change. The simplifying assumptions made to make large-scale economic models work are obviously invalid at smaller scales. Full information, optimising behaviour, divisibility of goods, continuous substitutability, and flexible labour markets have no place in micro-economics, although they do in macro-economics.12
12 For a thoughtful reflection on the complex relations between micro- and macro-economics, see Hahn and Solow [8]. One implication of their work is that temporal scales present a bigger intellectual challenge for economic analysis than varying scales of production. We will discuss temporal scales in the section Back to the Future.
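The flavor of the positive feedbacks discussed above can be conveyed by a deliberately stylized simulation (our own toy model; none of the numbers are calibrated): two firms start almost identically, but each firm's growth rate rises with its current market share, so a small initial advantage feeds back on itself.

```python
# Toy model of path dependence under increasing returns: each firm's
# growth rate rises with its market share, so a small initial advantage
# is amplified over time. All parameters are illustrative.

def simulate(size_a, size_b, periods=50, feedback=0.1):
    for _ in range(periods):
        total = size_a + size_b
        growth_a = 1.0 + feedback * (size_a / total)  # share-dependent growth
        growth_b = 1.0 + feedback * (size_b / total)
        size_a *= growth_a
        size_b *= growth_b
    return size_a, size_b

a, b = simulate(1.01, 1.00)  # a one-percent head start for firm A...
print(a / b)                 # ...the printed size ratio exceeds 1.01
a, b = simulate(1.00, 1.00)  # identical starting points stay identical
print(a / b)
```

Under constant returns the same head start would persist unamplified; here the history of who got ahead first shapes the long-run outcome.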
The question is, of course: does the detail of the small scale influence the patterns of the large scale? The answer to that question lies in the dynamics of the system. In a considerable number of cases, research has shown that there is no devil in the detail. In other cases, the jury is still out. We investigate two of those cases below.

Industrial Districts in the World Economy

Globalization is not a new stage in the development of economic life; it is the intensification of a pattern that has characterized modern society since its beginnings. At the end of the Middle Ages, the ability to transcend national boundaries and to span the whole globe set the modern economy apart from older ways of life. This ability was based on two interlocked socio-cultural arrangements: open markets and specialized professions. Of course, most actual markets are not completely open. They have entry barriers, sometimes quite high ones. But these can be overcome in perfectly legitimate ways, and this sets modern markets apart from the typical structures of, say, feudal societies. And of course, economic activities are by no means always performed in a professional style. Unskilled labor plays a very important role in the history of capitalism. But without the skills provided by specialized professionals geared to a body of scientific knowledge, no global transport infrastructure and no industrial production system would have emerged, let alone a global culture sharing the music of Bach and the art of making pizza.

By now, the lives of most, if not all, human beings are interconnected via the global economy – be it through a direct or a very distant indirect connection – and they know it. What is surprising, then, is how important local conditions remain for economic activities. Silicon Valley is a remarkably small area compared with the global reach of its products, and the same holds for the Hollywood movie industry.
For several centuries, the City of London has been the premier location for financial activities in Europe – technological revolutions, wars, the breakdown of the Empire, and the rise of the dollar did not overcome this amazing local singularity. Since Marshall’s attempts at understanding the role of industrial districts in terms of localized positive external effects, many models of regional economies have been suggested, and new models will no doubt be proposed in the years to come [9, 10, 11]. For our present purpose – looking at scaling issues in view of the sustainability transition – two points may be emphasized.

First, successful regions in today’s global economy share some good that is neither public nor private. In economics, this distinction is usually drawn in terms of exclusiveness and rivalry: a private good is one from whose use others can be easily excluded and whose use by one agent reduces the quantity available for use by other agents. If you put a piece of cake on your plate, it is not trivial for me to grab it; if you eat the cake, I cannot eat it anymore. On
the other hand, it is hard for me to exclude you from using the English language, and if you do use it, it is by no means less available to me. There is something about places like Silicon Valley or the City of London that looks like a public good from inside these places: firms operating there can hardly be excluded from it, and by using it, they do not make it less available to their competing neighbors. That something may involve a contact network, a set of procedures, an evolving body of knowledge, shared facilities for producing some kind of goods, etc. Looked at from the outside, however, it looks like a private good: outsiders are excluded from its use, and if they were not, they would impair it. The kind of social network that evolved over centuries in the City of London would break down if all of a sudden it had to involve ten times as many members. This resource is a kind of club good.13 Club goods are similar to public goods up to a point: use of the good is non-rival only up to a certain number of users. And the first point to note about today’s successful regional economies is that they do share some kind of regional club good. The regional scale seems due to basic features of human existence – the importance of face-to-face contacts, the changing character of conversations when more than a certain number of people are involved, the distance that can be covered in a few minutes by walking, the combinatorial explosion of possible binary relations when a social network grows in size, etc.

The second point to note is that regional club goods not only can make certain activities less costly than similar activities would be elsewhere, but can also greatly facilitate on-going innovation processes. Taken together, these mechanisms bring about the tremendous competitive advantage that some regions can build up over time.
For newcomers, it is extremely difficult to challenge such regions, because building up a similar club good elsewhere takes time, is costly, and involves a serious risk of failure. The incumbent region can finance continuous investment in its own club good out of the rent that its competitive advantage allows it to collect. This can literally mean that landlords in the incumbent region fund the kind of social life that maintains the regional club good. As a result, incumbent regions are advantaged in their specialization both by Heckscher-Ohlin trade based on static comparative advantages and by monopolistic competition based on dynamic innovation rents as analyzed by endogenous growth theorists.

A similar analysis can be applied at the national scale, too. From a global point of view, the nation state sometimes looks more like a provider of club goods than of the pure public goods that are sometimes invoked to justify its existence. After all, there are national institutions, infrastructures, cultures and identities. In the course of history these can crystallize into shared national resources that confer competitive advantages in specific fields. From time to time nations are challenged in their economic specialization, and then their future
13 The role of clubs in the history of the City of London invites a footnote about how appropriate it is to consider club goods when studying regional economies.
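The defining feature of a club good – non-rival only up to a certain number of users – can be turned into a toy congestion model (our own illustration; the capacity and decay numbers are invented):

```python
# Toy club good: each member enjoys the full benefit of the shared
# resource until congestion sets in, after which the per-member benefit
# declines for everybody. Capacity and decay values are illustrative.

def per_member_benefit(members, capacity=100, base=1.0, decay=0.02):
    if members <= capacity:
        return base  # non-rival range: no crowding yet
    # beyond capacity, crowding erodes the benefit for all members
    return base / (1.0 + decay * (members - capacity))

print(per_member_benefit(50))    # below capacity: full benefit
print(per_member_benefit(100))   # at capacity: still full benefit
print(per_member_benefit(1000))  # heavily congested club
```

A pure public good corresponds to an unbounded capacity; a pure private good to a capacity of one. The City of London network described above sits in between.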
will depend on whether they are able either to renew their existing competitive advantage or to build up another one in a new field [12]. At the global level, it is clear that so far only Western Europe, North America, Japan and Australia have been able to firmly establish a capacity to generate economic innovations – personal computers, mobile phones, weather satellites, color TV, container transport, etc. – at a sustained rate [13]. Parts of South-East Asia, the Middle East, Eastern Europe and Latin America seem able to absorb these innovations in processes of economic growth still driven more by capital accumulation than by technical progress. But much of the rest of the world misses out in the fierce competition characterising today’s global markets. This competition is a very different animal than classical and neoclassical perfect competition, but it certainly has little mercy on those who lack the regional and national club goods required to innovate at the pace it demands.

With regard to the sustainability transition, this is a worrying situation. Take the global energy system.14 Currently, about 95% of all commercial energy worldwide is produced by burning fossil fuels. In the coming decades, world population is bound to increase, mainly in those parts of the world that have a hard time generating the innovations that drive today’s knowledge economy. At the same time, one would hope that these very parts of the global economy would be the ones enjoying the fastest growth in income per head, so as to overcome the scandalous inequality of income across the globe. Sustainability is not meant to consist only in environmental constraints; it is meant to balance environmental with economic and social concerns. How, then, can one envisage a transition towards a more sustainable energy system under such conditions? This will be impossible unless new dynamics of economic development set in at the global, national and regional scales.
Back to the Future

For the purposes of integrated assessment, temporal scales of analysis are as important as spatial ones. For economics, they may actually be even more important. Economic dynamics is often analyzed in terms of a single intertemporal equilibrium path. This has been done by Ramsey [15] for a one-good world, and Nordhaus [16] has used a variant of Ramsey’s scheme to analyze optimal climate policies in the long run. Arrow and Debreu [17] have suggested an intertemporal equilibrium for a world with many goods – the intricacies of such a world, however, lead to a much less suggestive picture than the one-good case. To maintain that picture, computable general equilibrium models are usually framed in such a way as to reproduce the basic dynamics of a one-good world.15

14 See the discussion in Imboden and Jaeger [14].
15 Nordhaus and Yang [18] is a good example. Two main devices preserve the simple dynamics of a one-good world. First, a representation of investment which treats capital goods as some sort of homogeneous jelly which can be transformed costlessly into different kinds of infrastructure. And second, a representation of demand which eliminates income effects resulting from the production of different goods.

In the full-blown many-goods case, intertemporal equilibrium would allocate resources over time in the same way as it allocates them among sectors at any given point in time. A myriad of own-rates of interest for different goods at different moments in time must then result.16 It takes a lot of faith (if not plain ignorance) to subscribe to the assumptions needed to blend them into an overall rate of interest ruling intertemporal capital markets. It seems much more sensible to build on the distinction between two scales well known in economics: the short run and the long run. As with the triangle discussed in the introduction, very different patterns prevail at these two different scales – but now it is temporal, not spatial, scale that matters. From a management point of view, short-run decisions are about how much to produce in order to serve actual demand on present markets. For the purpose of short-run decisions, production capacity and available technology are treated as given – because to modify them is a long-run process. Long-run decisions are about how to meet not actual but expected demand. In long-run decisions, various kinds of fixed capital goods are purchased and produced in view of future demand which is not yet effective on present markets. When studying long-run economic dynamics, supply and demand for goods traded on actual markets may be assumed to operate at equilibrium levels, because their adjustment via price mechanisms is much faster than the development of the various kinds of fixed capital relevant for long-run economic dynamics. As a result, at each moment in time one gets a temporary general equilibrium contingent on investors’ expectations of future demand.17 Present decisions depend on expected futures, and actual futures depend on present decisions. This is the challenge of endogenous uncertainty in economics.18 Climate change is a remarkable example of the complex interplay between long- and short-run developments in today’s world economy.

16 Suppose somebody sells a quantity of wheat today for whatever amount of money the market allows. Now suppose there is a futures market for wheat, and consider the quantity of tomorrow’s wheat which that amount of money can buy today. Then the difference between these two quantities of wheat, divided by the initial quantity, is the wheat own-rate of interest. There is no reason why it should equal the own-rate of interest of, say, bricks. Otherwise, no changes in relative prices would be possible, and therefore no adjustment of production patterns to demand for future products. (The analysis of own-rates of interest goes back to Sraffa’s critique of Hayek [19].)
17 A model of this kind is proposed by Morishima [20].
18 A promising approach to the problem of endogenous uncertainty has been proposed by Kurz [21].

The Framework Convention on Climate Change is part of today’s international environmental diplomacy, and it fosters studies of how to develop technologies and products
that could help reduce greenhouse gas emissions. These, in turn, are part of R&D efforts undertaken by today’s governments and businesses. The possible disruptions of the global climate system that these measures are supposed to mitigate, however, are likely to reach their climax well after the year 2100 [22]. This is due both to the time horizon over which humankind is likely to use the carbon available in the earth’s crust and to the time constants involved in the climate system, especially its oceanic components. Of course, some future demand is anticipated by futures markets operating in the present. But except for financial assets, futures markets exist only for a few goods and services and only for a very limited time horizon. In most cases, long-term decisions must rely on expectations that are based not on the interplay of futures markets but on the “animal spirits” of investors [23]. For climate economics, this means that actual markets can hardly be expected to take care of climatic risks. There are no futures markets for land in Bangladesh in the year 2100 – and if there were, purchasing power disparities would give little weight to the interests of Bangladeshi people in the decision-making process.

The difference between short-run and long-run economic decisions raises two major questions with regard to the role of expectations for the sustainability transition:

■ How do economic agents form their expectations about environmental changes that may happen in the long run?
■ How do such expectations influence long-run economic decisions?
As for the first question, we know at least four patterns of expectation formation that are relevant here. The first one is plain knowledge acquisition. In the case of climate change, scientific research can show that certain outcomes – say, a sea level rise of 1 m within a century – are possible as a consequence of human greenhouse gas emissions, while other events – say, a sea level rise of 10 m within 10 years – are not. Usually, climate change research establishes whether or not some event is possible, without attaching objective probabilities to it, although vague notions of “being likely” are often expressed by the experts from the relevant field. In some cases, the relevant knowledge is a public good available to economic agents at no cost. Where information acquisition is costly, economic agents may try to acquire as much information as needed to take a decision that they consider satisfactory both in its expected outcome and in the reliability of this expectation. This is the second pattern: satisficing behavior along the lines of bounded rationality. The third pattern is Bayesian learning [24]. If some economic agent – say, a multinational oil company – considers climate change highly unlikely on the basis of the evidence available at some point in time, that agent may change its expectation as additional evidence – say, about long-term temperature trends – becomes available. The fourth pattern is pure guessing. This lies at the heart of notions of subjective probability, but also of the state-preference approach pioneered by Arrow [25] and Debreu [26]. Just
SCALING IN INTEGRATED ASSESSMENT 119
as human beings can prefer apples to oranges, they can prefer one uncertain situation to another – even if they know nothing about relative frequencies. But while recent work by Becker [27] and others has begun to shed some light on the formation of preferences, so far little is known about the formation of those subjective probabilities that enter Bayesian learning as priors.

As for the second question, it is far more intricate. The role of expectations in economic dynamics lies at the heart of some of the most important debates in both economic theory and policymaking. According to the efficient market hypothesis (EMH), at any moment in time current prices reflect all available information in the most accurate manner possible, taking into account different preferences and endowments in wealth.19 Current prices here include spot prices for futures contracts – today’s spot price for wheat harvested one year in the future gives the expected value of the future price, discounted with today’s rate of interest. EMH is based on the idea that arbitrage will eliminate imperfections in information processing by market participants. We do not want to overemphasize the importance of EMH as such; rather, we use it as a powerful way of addressing the difference between two temporal scales in economics. For policy purposes, EMH gives support to a general approach of laissez-faire and to general skepticism of discretionary – as opposed to rule-based – policies. In the case of climate policy, for example, economic agents should be given access to the best possible scientific evidence and enabled to work out its economic implications for themselves.

Two objections to EMH are relevant here.20 First, many real-world markets are imperfect by the standards of the theory of competitive markets, and one may ask whether this jeopardizes the capability of economic agents to deal efficiently with the risks of climate change.
The global oil market is clearly shaped by a few multinational corporations, none of which faces the horizontal demand function characteristic of firms operating under perfect competition. (Similar situations arise for other markets of key relevance for climate change.) Bounded rationality provides ways to describe such markets without buying into EMH. Under such descriptions, expectations are formed according to adaptive learning: economic agents observe current events and try to learn from the past. According to EMH, however, past prices provide no information additional to current prices when trying to forecast future price dynamics. Nevertheless, the fact that real oil prices have been remarkably sticky for more than one hundred years is something one would not wish to ignore when studying climate policy. When oil prices underwent major shocks, these were linked to large-scale recessions, and even wars, until after a while prices returned to their long-term level.
19 The origins of EMH go back to Samuelson [28].
20 A recent criticism of EMH, relating both to the short and the long run, is to be found in Lo and MacKinlay [29].
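The adaptive-learning rule just mentioned is easy to write down (a standard adaptive-expectations scheme; the gain parameter and the price series are our own illustrative numbers):

```python
# Adaptive expectations: each period the expected price is revised by a
# fraction ("gain") of the latest forecast error:
#   expectation <- expectation + gain * (observed - expectation)

def adaptive_path(expectation, observed_prices, gain=0.5):
    path = [expectation]
    for price in observed_prices:
        expectation += gain * (price - expectation)
        path.append(expectation)
    return path

# a one-period oil-price shock from 20 to 40, then back to 20
prices = [20, 20, 40, 20, 20, 20]
print(adaptive_path(20.0, prices))
# → [20.0, 20.0, 20.0, 30.0, 25.0, 22.5, 21.25]
```

The expectation jumps only after the shock and then decays geometrically back towards the long-term level – exactly the kind of sticky, backward-looking adjustment described above, and precisely what EMH says should not add forecasting value.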
One answer to this experience has been provided by real business cycle theory, which argues that markets are efficient but take some time to work out the implications of exogenous shocks. This has been shown to be plausible even if economic agents do not form their expectations by adaptive learning but are instead characterized by rational expectations.21 The latter concept provides a formalized version of Hayek’s [31] argument that no centralized authority can match the information-processing capability of decentralized markets. Economic agents can know the structure of the economy at least as well as governments can. Accordingly, they can work out the implications of exogenous shocks for the system as a whole without any need for government assistance.

The second objection to EMH, which may turn out to be even more important for integrated assessments regarding the sustainability transition, is based on empirical evidence concerning stock markets. In stock markets, the volume of trading and the volatility of prices are much larger than reasonable applications of EMH suggest. A possible explanation combines the absence of futures markets with adaptive learning by heterogeneous agents to understand the imperfections of actual stock markets. While this is certainly an important line of research, a more comprehensive approach is possible thanks to recent research. The Debreu-Sonnenschein-Mantel theorem ([32], see also Kurz [21]) shows that theoretically sound general equilibrium models produce not one equilibrium but a whole set – which may be finite, countable, or uncountable – of equilibria. Only extreme assumptions can guarantee uniqueness of equilibrium. Clearly, this confronts economic agents with a major coordination problem of equilibrium selection [33]. As Schelling [34] has shown, such problems may be solved by establishing some focal point of attention.
They cannot be solved by utility maximization, however, because all relevant equilibria are Pareto optima in their own right. Even if endowed with rational expectations and a full suite of futures markets, therefore, economic agents would need some additional mechanism of expectation formation if they are to act at all. One such mechanism has been discussed under the label of “sunspot equilibria”, meaning the focussing of expectations by some exogenous event without additional causal impact on the economy. And while nowadays sunspots are a highly implausible candidate for such focussing, expectations of global climate change may well become a plausible one.

Two more mechanisms which may be relevant for the sustainability transition deserve our attention. The first is based on the role of science in today’s global society: scientific knowledge claims may focus the expectations of investors on certain technological trajectories long before the profitability of such trajectories can be assessed. While this mechanism relates to long-term investment, a completely different mechanism, namely price stickiness, helps economic agents to coordinate their expectations in the short run [35].
21 The idea of rational expectations was introduced by Lucas [30].
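The equilibrium-selection problem described above, and its resolution by an extrinsic signal, can be sketched in a few lines (a pure coordination game of our own construction; the payoffs and signal names are invented):

```python
# Pure coordination game: (A, A) and (B, B) are both Pareto-optimal
# equilibria, so utility maximization alone cannot select between them.
# A publicly observed, payoff-irrelevant event (a "sunspot") can: if both
# agents condition their action on the signal, they always coordinate.

PAYOFF = {("A", "A"): 1, ("B", "B"): 1, ("A", "B"): 0, ("B", "A"): 0}

def act(signal):
    # the shared convention: play A on "up", B on "down"
    return "A" if signal == "up" else "B"

for signal in ("up", "down"):
    pair = (act(signal), act(signal))
    print(signal, pair, "payoff:", PAYOFF[pair])
# either signal yields a payoff of 1; without the signal, independent
# choices would risk miscoordination (payoff 0)
```

The signal carries no information about fundamentals, yet it changes which equilibrium is played – which is why expectations of climate change could, in principle, play such a focussing role.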
Summing up: expectations may be formed in a variety of ways, some of which are reasonably well understood. The formation of priors and/or state preferences, however, clearly needs further research. Expectations may shape the economy in two ways. First, they are essential for the way the economy digests exogenous shocks. And second, they are essential for coping with situations of endogenous uncertainty, where no single equilibrium is given.

So far, economic work on expectations has been carried out mainly – but by no means only – in financial economics. Environmental economics has not yet taken advantage of this body of work. In particular, familiar models used in climate economics do not include representations of monetary phenomena. This is no big problem if one subscribes to EMH and real business cycle theory, but even then such models are of course insufficient to discuss, say, the impact of carbon taxes on inflation or the role of the financial sector in shaping technical progress. It seems advisable, therefore, to develop models that would enable us to compare the EMH case with the case of adaptive expectations and the sunspot case. In a first step, this would mean an effort to use findings and methods available from other domains of economic research. As the role of expectations in economic dynamics is by no means settled in contemporary economic theorizing, one might also expect the study of the sustainability transition to contribute to significant advances in general theory. Policy advice referring to global environmental change may be greatly improved by explicitly addressing the role of different time scales in economic decision-making.
Conclusion
Scaling issues in economics are not mainly about the up- and downscaling of models and the interpolation between discrete data points, important as these issues are in economics as elsewhere. The main issues concern deep changes in the laws and patterns that govern economic processes at different spatial, temporal and institutional scales. Much further research is warranted to deal with these issues. What should be clear by now, however, is that the role of nation states will need a thorough reappraisal in view of the sustainability transition. Are markets to take care of decisions affecting the short-run future, and governments advised by scientists to take care of long-term decisions about energy systems, urban development, and the like? This would turn Hayek's [31] argument on its head: the most important decisions would be taken not by markets but by governments. From the point of view of the global economy, however, nation states may be providers of club goods rather than public goods, and it is not clear whether in the long run they will be the appropriate providers of the latter [36]. Do we need a complete set of futures markets ranging over the next centuries in order to deal with the challenge of the sustainability transition? Such markets would leave decisions over the future of humankind in the
hands of the tiny fraction of humans who are currently in a position to take major investment decisions, and they would greatly amplify the risks of speculative bubbles on futures markets. The thorny, if fascinating, issue of global governance calls for major institutional innovations. From an economic point of view, one may wonder whether political institutions are really the one and only kind of institution to consider for this purpose. Perhaps far-reaching innovations will take place in the realm of economic institutions, too (in this respect, sustainability studies may have much to gain from the seminal analysis in Drucker [37]). The sustainability transition can hardly be engineered by some more or less intelligent central agency, but at the same time it cannot take place without new structures and processes of global management. Learning to deal with scaling issues in economics will be vital in order to meet the challenge of sustainability.

References
1. Putnam, H., 1987. The Many Faces of Realism. La Salle: Open Court.
2. Christiansen, P., and R. Parmentier (eds.), 1988. Structure, Coherence, and Chaos in Dynamical Systems. Manchester: Manchester University Press.
3. Sterman, J., 2000. Business Dynamics: Systems Thinking for a Complex World. New York: Irwin/McGraw-Hill.
4. Wong, D., and C. Amrhein (eds.), 1996. The Modifiable Areal Unit Problem. Special issue of Geographical Systems, 3: 2–3.
5. Gold, B., 1981. Changing perspectives on size, scale, and returns: an interpretive essay. Journal of Economic Literature, 19: 5–33.
6. Robbins, L., 1932. Essay on the Nature and Significance of Economic Science. New York: New York University Press.
7. Martin-Vide, C. (ed.), 1994. Current Issues in Mathematical Linguistics. Amsterdam: North-Holland.
8. Hahn, F., and R. M. Solow, 1997. A Critical Essay on Modern Macro Economic Theory. London: Blackwell.
9. Marshall, A., (1890) 1961. Principles of Economics: An Introductory Volume. London: Macmillan.
10. Martin, R., and P. Sunley, 1996. Paul Krugman's geographical economics and its implications for regional development theory: A critical assessment. Economic Geography, 72: 259–292.
11. Feser, E. J., 1998. Enterprises, external economies, and economic development. Journal of Planning Literature, 12: 283–302.
12. Nelson, R. R., 1993. National Innovation Systems: A Comparative Study. New York: Oxford University Press.
13. Sachs, J., 2000. "A New Map of the World." The Economist, 24 June, 81–83.
14. Imboden, D. M., and C. C. Jaeger, 1999. Towards a Sustainable Energy Future. In: OECD, Energy: The Next Fifty Years. Paris: OECD.
15. Ramsey, F., 1928. "A Mathematical Theory of Saving." Economic Journal, 38: 543–559.
16. Nordhaus, W. D., 1994. Managing the Commons: The Economics of Climate Change. Cambridge, MA: MIT Press.
17. Arrow, K., and G. Debreu, 1954. "Existence of an Equilibrium for a Competitive Economy." Econometrica, 22: 265–290.
18. Nordhaus, W. D., and Z. Yang, 1996. "A Regional Dynamic General-Equilibrium Model of Alternative Climate-Change Strategies." American Economic Review, 86: 741–765.
19. Sraffa, P., 1932. "Dr. Hayek on Money and Capital." Economic Journal, 42: 42–53.
20. Morishima, M., 1992. Capital and Credit: A New Formulation of General Equilibrium Theory. Cambridge: Cambridge University Press.
21. Kurz, M., 1996. "Rational Beliefs and Endogenous Uncertainty." Economic Theory, 8: 383–397.
22. Manabe, S., and R. J. Stouffer, 1993. "Century-scale effects of increased atmospheric CO2 on the ocean-atmosphere system." Nature, 364: 215–218.
23. Magill, M., and M. Quinzii, 1996. Theory of Incomplete Markets. Cambridge, MA: MIT Press.
24. Bernardo, J. M., and A. F. M. Smith, 1994. Bayesian Theory. New York: John Wiley.
25. Arrow, K. J., 1953. "Le rôle des valeurs boursières pour la répartition la meilleure des risques." Économétrie, 40: 41–47; translated in 1964, Review of Economic Studies, 31: 91–96.
26. Debreu, G., 1959. Theory of Value: An Axiomatic Analysis of Economic Equilibrium. New Haven: Yale University Press.
27. Becker, G. S., 1996. Accounting for Tastes. Cambridge, MA: Harvard University Press.
28. Samuelson, P. A., 1965. "Proof that Properly Anticipated Prices Fluctuate Randomly." Industrial Management Review, 6: 41–49.
29. Lo, A. W., and C. MacKinlay, 1999. A Non-Random Walk Down Wall Street. Princeton: Princeton University Press.
30. Lucas, R. E., Jr., 1972. "Expectations and the neutrality of money." Journal of Economic Theory, 4: 103–124.
31. Hayek, F. A., 1945. "The Use of Knowledge in Society." American Economic Review, 35: 519–530.
32. Sonnenschein, H., 1972. "Market excess demand functions." Econometrica, 40: 549–563.
33. Benhabib, J., and R. Farmer, 1999. "Indeterminacy and Sunspots in Macroeconomics." In: J. Taylor and M. Woodford (eds.), Handbook of Macroeconomics, Vol. 1A. New York: North-Holland, 387–448.
34. Schelling, T., 1960. The Strategy of Conflict. Cambridge, MA: Harvard University Press.
35. Blinder, A. S., E. R. D. Canetti, D. E. Lebow, and J. B. Rudd, 1998. Asking About Prices: A New Approach to Understanding Price Stickiness. New York: Russell Sage Foundation.
36. Rodrik, D., 2000. "How Far Will International Economic Integration Go? Some wild speculation on the future of the world economy." Journal of Economic Perspectives, 14: 177–186.
37. Drucker, P., 1976. The Unseen Revolution: How Pension Fund Socialism Came to America. New York: Harper & Row.
7 Scales in Economic Theory
ANNE VAN DER VEEN AND HENRIËTTE OTTER
University of Twente, The Netherlands
Introduction
One of the underrated topics in economics is the issue of scale and aggregation. To be more precise, in regional economics spatial scale and spatial aggregation are a neglected subject. This statement may sound strange in a world where transportation economics, regional economics and urban economics are well-established fields. It is our belief, however, that economists are poorly equipped to define an observation set suitable for understanding the arrangement of spatial patterns and structures. Economists have devoted more attention to the scale of time than to the scale of space. In addition, what has been done in the field of space is often general and abstract, not connected to an explicit observation set in time and space. Finally, it is our perception that in economic theory time scales and spatial scales are not tied together, making the choice of a macro-, meso- or microeconomic theory a rather arbitrary process.

We cannot handle all of these critical remarks at the same time, so we will restrict ourselves in order to illustrate our point of view. In this article, we devote attention to the explanation of the phenomenon of emerging [1] spatial structures [2]. We discuss the standard theories that describe the underlying processes and argue that being more explicit about spatial scales adds explanatory power to current theoretical work.

Given these introductory remarks on time, space and aggregation, we will first pay attention to the choice of scales and aggregation levels in general. The issue of (spatial) aggregation as an almost insurmountable step will be discussed in some detail. Secondly, we devote a special section to ecology: a discussion has recently taken place there on exactly the topic we present here, and we are convinced that we can learn from its findings on time and space, and especially from its conclusions on aggregation.
Moreover, as an example, we will examine how location theory, the heart of regional economic theory, is influenced by scaling. We evaluate how spatial resolution is handled in location theory and discuss how defining the problem in terms of spatial resolution might contribute to a better understanding of the phenomenon of emerging spatial patterns. Finally, we devote a section to the consequences for government in the design of spatial policy.
Scales and Aggregation
Models are abstract maps of the empirical reality around us. Examples of such representations of reality are mental models, mathematical models, simulation models, physical scale models, etc. A binding element in all of them is that we aim to frame theories and ideas to better understand the empirical chaos. In every model, a choice has to be made on scales. Choosing a scale on which to project the objects and processes in a model refers to a quantitative and analytical dimension as well as to time and space [3, 4]. Concerning these scales we may further discriminate between resolution and extent. For resolution in temporal and spatial scales we thus define:
■ a time step (e.g., a day), and
■ a spatial step (e.g., a grid of 100 by 100 metres).
For extent we can distinguish between:
■ the extent of time (e.g., a year), and
■ the spatial extent (e.g., a country).
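The distinction between resolution and extent can be sketched as a small data structure; a minimal illustration (the class and its fields are our own construction, not from the literature):

```python
from dataclasses import dataclass

@dataclass
class Scale:
    """One dimension of a model's scale: a resolution (step size)
    and an extent (total span covered), in the same unit."""
    resolution: float
    extent: float

    @property
    def steps(self) -> int:
        # number of distinct units the model resolves along this dimension
        return int(round(self.extent / self.resolution))

# Temporal scale: daily time step, one-year extent (in days)
time_scale = Scale(resolution=1.0, extent=365.0)
# Spatial scale: 100 m grid cells over a 500 km stretch (in metres)
space_scale = Scale(resolution=100.0, extent=500_000.0)

print(time_scale.steps, space_scale.steps)
```

The ratio of extent to resolution is what drives the cost and the level of detail of a model: coarsening either one changes the observation set.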
As an example of the distinction above, abstract neo-classical models in economics have low temporal and spatial resolution. Moreover, they have a relatively large extent in time and space. Large national-regional econometric models, on the other hand, may have a higher spatial resolution and consider a smaller extent in time. Besides the quantitative and analytical dimensions and time and space, there is another concept to introduce: level. It is defined as the unit of analysis along a scale [3]. Economists prefer to speak of aggregation level. Level follows from systematically making choices on time and spatial scale (and thus on resolution and extent) and on quantitative and analytical dimensions.

Before continuing with scales in economics, we make a remark on the aggregation process in economics. In economics, we do not have direct data at the coarse scale. Rather, data at the coarse scale are aggregates used for macro- or meso-economic analyses. Two types of such economic aggregates can be distinguished: aggregate quantities and aggregate agents [5]. Relationships between aggregate macroeconomic quantities can be derived from [6]:
■ a macro theory, e.g., the Harrod-Domar model,
■ a method based on analogies from micro behaviour, or
■ an aggregation of micro relations based on micro characteristics.
A macro theory under (1) always has a more or less ad hoc character. It is based on rigorous hypotheses about relations between aggregate variables and is not related to any micro behaviour. The analogy method under (2) is followed in consumption and production theory: studies in this field start with an elaborated theory of individual behaviour, which is then also assumed to hold for per capita data or totals. However, as Van Daal and Merkies [6] note, "Usually any argument in defence of this jump in the train of thoughts is lacking". More firmly, Malinvaud, in Harcourt [7], states the following about the microeconomic foundations of macroeconomics: "Aggregation was hardly ever justified, except in rather narrow cases, which were not often found in fact. Most of the times our macro economic theory therefore lacked the rigorous justification that we should like to find in micro-economic analysis". The implication of these arguments is that forming an observation set in meso- and macro-economics on the basis of the analogy method (a representative agent) is a critical process.

For (3) a consistent aggregation procedure has to be followed. This procedure is related to what in the natural sciences is called up-scaling and down-scaling [3]. However, as noted by Costanza et al. [4], such an aggregation procedure is far from trivial in complex, non-linear, discontinuous systems. Indeed, Forni and Lippi [8] argue that macroeconomic modelling and testing would receive a new impetus if a better balance were reached between micro theory, aggregation theory, and empirical research on the distribution of the micro parameters over the population. Consequently, more importance would be given to heterogeneity at the micro level. In spatial economics, there are even more perplexing aggregation problems.
Whereas census data are collected for essentially non-modifiable entities (people, households), they are reported for arbitrary and modifiable areal units (enumeration districts, local authorities, etc.). This is the crux of the modifiable areal unit problem: there is a large number of different spatial objects that can be defined and few, if any, non-modifiable units [9].

The conclusion from the discussion in this section on scales and aggregation is that building an observation set in time and space at a certain aggregation level is far from a simple process. More strongly, mistakes or misjudgements in the design of our observation set become misjudgements in the understanding of the processes we wish to describe. Before continuing the discussion of how to build our observation set, we review a recent dispute in the discipline of ecology on scales. Given the definition of ecology in the next section, we see a certain analogy with spatial economics: there is an identical problem in identifying aggregation levels in relation to an observation set in time and space.
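The modifiable areal unit problem can be made concrete with a small numerical sketch (hypothetical data of our own): the same eight units, grouped into zones in two different ways, yield sharply different zone-level correlations:

```python
# Illustration of the modifiable areal unit problem: zone-level statistics
# depend on the (arbitrary) zoning, not only on the underlying unit data.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

def zone_means(values, zones):
    # average the unit-level values within each zone
    return [sum(values[i] for i in z) / len(z) for z in zones]

# Hypothetical unit-level data for eight small areas
income = [10, 12, 30, 32, 11, 13, 31, 33]
rent   = [ 5,  9,  6, 10,  9,  5, 10,  6]

# Zoning A pairs one set of units; zoning B pairs another
zoning_a = [(0, 1), (2, 3), (4, 5), (6, 7)]
zoning_b = [(0, 2), (1, 3), (4, 6), (5, 7)]

r_a = pearson(zone_means(income, zoning_a), zone_means(rent, zoning_a))
r_b = pearson(zone_means(income, zoning_b), zone_means(rent, zoning_b))
print(round(r_a, 2), round(r_b, 2))  # near-perfect vs. zero correlation
```

With zoning A the aggregated variables are almost perfectly correlated; with zoning B the correlation vanishes, although the unit-level data are identical.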
Space and Aggregation in Ecology
Ecology attempts to explain the relationship between living organisms and their surroundings. It is about the distribution and abundance of different types of organisms over the face of the earth, and about the physical, chemical, but especially the biological features and interactions that determine these distributions and abundances [10]. In ecology, several interdependent (bio)diversities are supposed to exist at different aggregation (organisational) levels. Processes can, for instance, take place in the biosphere, but also at the level of the ecosystem, the community, the population and the individual species. (Note the analogy with micro-, meso- and macroeconomics.)

In ecology, space and time are linked: ecological processes that operate over large areas also tend to operate over long time scales. Modern ecology has focused mainly on those scales where local communities and short time periods are studied [11]. Thus, processes are simulated at short time scales and treated entirely as recursive; consequently, high time resolution models are adopted [12]. Secondly, ecologists are interested in long time horizons and especially in the long-term implications of human action [13].

Spatial dynamics are extremely important in ecology. Besides the physical flows of matter, the spatial arrangement of habitats or land cover affects all ecological processes, such as species diversity, natural assimilative capacity and nutrient cycling. The spatial pattern of habitats or land cover, the landscape pattern, is thus linked with all ecological processes. Furthermore, the size and shape of the patterns themselves depend on the scale at which they are described. These notions have led to the development of hierarchy theory [14], which states that the variation observed in ecosystems depends on the scale over which we measure it, both in time and in space. Within such a hierarchy we observe:
■ processes,
■ flows,
■ interactions, and
■ rates (which characterise the speed of change in the system).
Variables and processes at lower levels in the hierarchy are considered noise, whereas variables at higher levels act as constraints. Rates appear to be a distinctive variable in relation to hierarchies: "high" levels show slow rates and "low" levels show fast rates. The notions on hierarchy and scale presented above have been shaken somewhat by authors who discuss the relation between level and scale. An important first observation, by O'Neill and King [14], is that hierarchies are less evident than they look, because as one moves across scales the dominant processes may suddenly change and relationships may completely disappear.
Moreover, within an ecological observation set, processes may be located at different levels by finding breaks or discontinuities in the data. Otherwise stated, discontinuities in the ecological data may suggest a change in the level of organisation. The question that is being raised is whether these levels of organisation, as extracted from empirical data, are the same as those adopted in the traditional biological literature: organism, population, landscape, ecosystem, etc. Significantly, ecologists admit that they have confused the words scale and level [14]. This implies, for instance, that the use of the word scale in 'landscape scale' is wrong: landscape is a level of organisation. There is a relation between scale and level, but changing the scale of observation changes the observation set; consequently, the hierarchical organisation can change or disappear.

Allen [15] takes an even harder position: landscape is a 'type' constructed by the researcher, and thus a level of organisation that is not scalar. Type-based levels of organisation contrast with scale-based levels, which are rooted in observations. Higher levels of observation are materially larger, whereas levels of organisation cannot be assigned any particular spatiotemporal size. Consequently, landscape is a model, a choice in an analytical dimension. A second observation by O'Neill and King [14] is that hierarchies, as established by ecological theory, are rather arbitrary. The authors would, however, like to keep the idea of hierarchies, but such concepts should be sustained by observations and should not be merely heuristic devices for explaining very special problems.

From these experiences in ecology we infer, first, that in ecology time and spatial scales are connected. Secondly, we conclude that in ecology there is a relation between scale and aggregation level, and that changing the scale of observation also changes the observation set; indeed, even the hierarchical organisation can change or disappear. Having gone through the general discussion on scales and the particular application in ecology, the question arises: how about scales in economics? Is there a kind of hierarchy in economics sustained by observations? Or is the distinction between micro, meso and macro a type-based characterisation of organisational levels, a rather arbitrary decision made by the scientific economic community?
Space and Aggregation in Economics
Economics is concerned with human behaviour. It studies the allocation of scarce resources among alternative ends. Producers aim to maximise profits, and thus minimise costs, while choosing a certain technology in which labour and capital are combined. Consumers aim to maximise their utility given their income and the relative prices of different goods.
Economics is thus concerned with choice and value. Three main levels along the scale of analytical interest are distinguished: the micro, meso and macro level. Each level of aggregation has its own theoretical content. Microeconomics studies consumer and producer behaviour, meso-economics focuses on sectors, and macroeconomics focuses on aggregates, aggregate behaviour and government policy.

Analogous to ecology, the processes, interactions, flows and rates in economics distinguish organisational levels. Higher levels have slower rates (e.g., inflation), and different levels show different interactions and processes. In Figure 7.1 we give an example of processes and interactions for three levels of analytical interest. In the figure, we impose a traditional 'natural' economic order to resemble traditional thinking in micro-, meso- and macroeconomics. The grey part of the figure represents a dynamic area where interactions, flows and rates are relevant; outside the grey part, processes and analytical concepts are not relevant. Above a certain organisational level, information acts as a constraint, and below a level information is supposed to be noise. For example, at the micro level price formation is given; at the meso level inflation acts as a constraint, whereas individual producer maximisation is noise. Seen horizontally, for sectoral agents the behaviour of individual consumers and producers is noise, whereas the behaviour of aggregate agents is given. Note that in this reasoning there is no explicit reference to space.
[Figure 7.1: Processes, interactions, analytical scale and traditional hierarchy in economics. Along the analytical scale, the micro level (consumers and producers; utility/production optimisation) sits below the meso level (sectors and groups; markets and price formation) and the macro level (aggregates and representative agents; inflation).]
Given the processes and the analytical scale domain we distinguish, what are the time and space dimensions in economics? Moreover, what is the observation set? Here we notice an important difference between economics and natural science in general and ecology in particular. Economic theory is based on abstract social units. It is, inter alia, focussed on the utility optimisation of households and price formation in markets. The consequence is that economic theory is not spatially explicit [12] in terms of spatial resolution.

Economists might research yearly changes in expenditure on housing by households in the Netherlands as influenced by changes in female labour force participation over a period of ten years. Or they might investigate changes in the quantity of steel sold by industry in Portugal in 1999 as a function of changes in Portuguese GNP, or changes in Gross Regional Product in a time series for states in the USA, or yearly changes in the demand for water in the UK as a result of privatisation. To some degree, spatial extent and spatial resolution seem to coincide: economic research on consumer and producer behaviour on the basis of individual data is not performed on, or restricted to, a local or regional level, and sectoral observations can be collected at local, regional and national levels.

Does this imply that space does not matter in economics? No, space does matter; however, space is generally translated into transportation costs [16], and thus into prices, via the one-dimensional concept of distance. Spatial differences thus come back in another fashion, but note again that the resolution of space plays no role. We therefore conclude that spatial resolution, as part of the concept of spatial scale, is not taken into account in economics: economic theory is about abstract social units. Concerning the related problem of aggregation, it is our observation that the 'traditional' division into micro-, meso- and macroeconomics has no explicit spatial connotation. As a corollary, the organisational division of economics into micro, meso and macro is a rather abstract distinction, a type-based construct, as ecologists would call it.
Spatial Resolution and Emerging Patterns of Location Behaviour
Does spatial resolution matter in scientific disciplines that deal with space? Above we reached the conclusion that distance, as a one-dimensional concept of space, does matter, but we did not investigate two- and three-dimensional spatial issues in economics. Are there applications in the spatial sciences where spatial resolution is of importance? Of course there are: in agricultural economics, crop results depend on technology as well as on soil conditions, climate and hydrology; in regional economics, inter alia, locational decisions made by households and firms are spatially dependent; in land markets, land use and land cover change are at stake. Moreover, in (economic) geography we are interested in differences between regions and countries, and we try to understand the formation of patterns.
Yet we are not impressed by how spatial resolution is introduced in these disciplines. We will illustrate this statement by presenting the standard theories on emerging spatial structures in regional economics and in geography. The evident example of emerging structures is the location behaviour of firms and households in producing urban spatial patterns. We will evaluate how spatial resolution is handled in location theory and discuss how defining the problem in terms of spatial resolution might contribute to a better understanding of the phenomenon of emerging spatial patterns. In presenting theories on location behaviour, we distinguish between geography and regional economics, as they approach location behaviour from different angles.

Geography
Geography focuses on where things are located and why [17]. Location, maps and distribution help to answer the where question. The why question is addressed by researching the ability of people to adjust to their physical environment. Scale is of the utmost importance in geography: spatial scale (resolution and extent) is recognised as the main mechanism whereby patterns can be analysed and explained, and Geographical Information Systems (GIS) are the essential tool for answering the why and where questions [18]. It is our belief, however, that geographers aim merely at a description of the adjustment process that comes with location behaviour. This means that the choice of a certain resolution is not decisive in explaining emergent patterns.

A good example is the work on land use dynamics. The high spatial resolution model of urban land-use dynamics developed by White and Engelen [19] aims to capture the spatial complexity of urban and regional areas by making use of two basic techniques, cellular automata and GIS. Cellular automata use a set of transition rules that govern the local behaviour at each cell with respect to the cell's neighbours and its own characteristics.
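A minimal sketch (our own toy rule, far simpler than the White and Engelen model) of how such local transition rules generate an emergent global pattern:

```python
# One-dimensional cellular automaton with a single local transition rule:
# a cell becomes "urban" (1) when it or one of its neighbours is urban.
# Purely local behaviour produces an emergent global cluster.

def step(cells):
    n = len(cells)
    out = []
    for i, c in enumerate(cells):
        left, right = cells[(i - 1) % n], cells[(i + 1) % n]
        # transition rule: urban cells stay urban; vacant cells urbanise
        # if an adjacent cell is already urban
        out.append(1 if c == 1 or left == 1 or right == 1 else 0)
    return out

cells = [0] * 20
cells[10] = 1  # a single urban seed
for t in range(5):
    cells = step(cells)
print("".join(map(str, cells)))  # a contiguous urban cluster around the seed
```

Real land-use cellular automata replace this toy rule with empirically calibrated transition rules over many land-use classes and neighbourhood weights, but the logic of local rules producing global pattern is the same.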
Cellular automata offer a means to study emergent global behaviour in systems where only local processes are understood. By applying this technique, GIS is converted into a dynamic tool [20, 21]. The model of White and Engelen [19], for instance, distinguishes two levels, a macro and a micro level. The macro level comprises a modelling framework that integrates several component submodels representing the natural, social and economic subsystems. The micro level is developed on a cellular array in which the land use changes are calculated through transition rules [19, 22]. However, a drawback of cellular automata in general is that the transition rules are not necessarily reproducible with an objective empirical methodology: the performance of the system depends highly on the skill of the modeller. Secondly, transition rules do not change during the course of a simulation and hence may be of limited use, because changes of the landscape are rarely constant over time. Finally, a most important drawback
is the difficulty of incorporating micro-economic behaviour. Geographers are relatively poor at formulating theories that explain behaviour in space. Indeed, Openshaw and Abrahart [23: p. 380] argue that 'human systems modelling is going to become an unavoidable area of considerable practical importance. People are too important to ignore. Currently, we have no good or even tolerably poor models of the behaviour of people'. Our conclusion is that geographers, although they combine high spatial resolutions with GIS, do not succeed in explaining emergent location behaviour (one reason might be the intrinsic data problem geographers and economists share in the Modifiable Areal Unit Problem, as discussed above).

Regional economics
In elucidating the role of spatial resolution in regional economics, we again discuss location theory and the appearance of spatial patterns and structures.

Location theory and spatial patterns
In location theory, a distinction is made between location theories of the firm, location theories of households, and the interaction between the two. In the literature on the location of the firm, transportation costs (as an estimate of the notion of space and distance) are central to location choice. Here we may distinguish between models that assume a demand for goods and services continuously dispersed in space and models where demand is concentrated in one point [24]. The first type of model suggests [25] geographical patterns of firm location that are hierarchically ordered, whereas the second type yields structures that depend on the (point) location of markets and resources. Anas et al. [26] note that defining clusters in space is not so easy: the distinction between an organised system of subcentres and apparently unorganised urban sprawl depends very much on the spatial scale of observation. Here we find one of the very few remarks economists devote to the problem of spatial resolution.

For the explanation of agrarian land use the famous Von Thünen model [27] is important. The model has been criticised for its assumptions that production takes place around an isolated market and that soils are of constant fertility. Nevertheless, its distance-cost relationship has become the basis of urban location theory. Some claim that Von Thünen's approach has dominated thinking about location precisely because of its simplicity and predictive ability [28].

Building on Von Thünen's theory, Alonso [29] developed a model that can be regarded as the basis for household location choice. Alonso's approach is based on the principle that rents decrease outward from the centre of a city (lower revenue, higher operating costs and transportation costs). Rent gradients consist of a series of bid-rents, which compensate for falling revenue and higher operating costs. Different land uses have different rent gradients, the use with the highest gradient prevailing. Competitive bidding (under perfect information) determines patterns of rent and allocates specific sites between users to ensure that the highest and best use is obtained: land is used in the most appropriate way and profit is maximised.

The first criticism of Alonso's model is that in reality information is incomplete, so the market is imperfect. He also fails to take into account the distinctive nature of buildings and their use, which are not easily changed (lock-in). Other points are the heterogeneity of property, public-sector land and the spillover effects of other uses. The Alonso model and the literature based on it are characterised by further simplistic assumptions: employment is centralised in the Central Business District (CBD), there is a dense radial road system, and all households have the same tastes [30]. Moreover, the model is static. Some of these assumptions have been removed [31, 32, 33, 34, 35], but the theories remain rather general and abstract. Note that we did not refer to spatial resolution in interpreting Alonso-type urban location models.

By applying only distance as the one-dimensional concept of space, the location theories of Von Thünen and Alonso are unable to explain the complex spatial structures that we encounter. Anas et al. [26] discuss this problem by referring to alternative assumptions for the Pareto equilibrium of monocentric cities that make a uniform distribution unstable: spatial inhomogeneities, internal scale economies, external scale economies and imperfect competition create polycentric agglomerations. In regional economic theory, external economies of scale, agglomeration economies, or localisation economies are used as theoretical constructs explaining why firms locate in each other's vicinity to arrive at increasing returns to scale [36, 37].
Business firms locate in each other's vicinity in order to gain from the attractiveness of companies in the same type of activity, but also from the general atmosphere in such a region. These notions are regarded as a major contribution to economic theory [1]. However, the theoretical constructs remain more or less a black box, failing to explain the occurrence of spatial structures and patterns at a high level of spatial resolution. It is here that non-economic explanations have much more to offer in interpreting the black box [2, 38]. Self-organised criticality and synergetics produce organised structures with which polycentric cities might be better explained. In these theories, interactions between individual actors at a high level of spatial resolution give rise to meso- and macro-scale spatial structures. Anas et al. [26] therefore plead for an adaptation of standard economic theory. An explanation at a high level of spatial resolution is available where traditional economic theories seem to fail. Interaction between individual actors at high spatial resolution is, however, not at the heart of regional economic theory.
Spatial econometrics

Before concluding that traditional economics does not take spatial scale into account, we would like to devote attention to a special branch of spatial economics concerned with techniques demanded by the features of space: spatial econometrics. In the same way as we discussed the combination of GIS and geography in the first section, it might be the case that the combination of spatial econometrics and spatial economics produces a powerful explanation for spatial behaviour. Spatial econometrics is concerned with techniques that deal with the peculiarities caused by space [39]. It deals with spatial dependence and with spatial heterogeneity. According to Anselin and Florax [39], spatial dependence is relevant in two cases:

■ In the case of a spatial structure underlying spatial correlation, where the main interest is the spatial interaction behind the variable of interest.
■ Spatial dependence between ignored variables in the model as reflected in the error terms.
In neglecting these cases, the estimation of an a priori specified model, based on observations for a finite set of spatial units, will cause a number of problems [40]:

■ The modifiable areal unit problem [9], which concerns the aggregation of observations over space.
■ Border or edge problems, pertaining to the problem that inferences are based on a finite set of observations whereas the spatial process extends to spatial units not represented in the data set.
■ Specification of the spatial interaction structure, which is typically represented by a spatial weight matrix.
■ Testing for spatial effects by means of spatial association or correlation.
■ Estimation of spatial models for which adjusted estimators are needed.
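Two of the items above — the spatial weight matrix and testing for spatial association — can be made concrete with a small sketch. The rook-contiguity weights for a one-dimensional chain of regions, the illustrative values and the choice of Moran's I as the association statistic are all assumptions for exposition, not drawn from this chapter:

```python
import numpy as np

def rook_weights(n):
    """Binary contiguity weight matrix for n regions on a line (rook neighbours)."""
    W = np.zeros((n, n))
    for i in range(n - 1):
        W[i, i + 1] = W[i + 1, i] = 1.0
    return W

def morans_i(x, W):
    """Moran's I statistic of spatial association: (n/S0) * (z'Wz)/(z'z)."""
    n = len(x)
    z = x - x.mean()
    s0 = W.sum()
    return (n / s0) * (z @ W @ z) / (z @ z)

# Spatially clustered values (low in the 'west', high in the 'east')
# yield a positive I; an alternating pattern yields a negative I.
x = np.array([1.0, 1.5, 2.0, 8.0, 8.5, 9.0])
W = rook_weights(len(x))
print(morans_i(x, W))
```

A full spatial-econometric workflow would also row-standardise the weights and test the statistic's significance, but the sketch shows the roles of the two ingredients: the weight matrix encodes the assumed interaction structure, and the statistic summarises dependence given that structure.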
The underpinning of spatial dependence and heterogeneity in regional economics is based on the same ideas as we try to develop in this paper. Anselin and Florax [39: p. 5] state that there is a 'renewed interest for the role of space and spatial interaction in social science theory. In mainstream economic theory this is reflected in the interest in the new economic geography'. It is our judgement, however, that spatial econometrics is mainly interested in the statistical and econometric problems of spatial dependence and not so much in extending the theory of economic behaviour with a spatial context and component.² It is our view that spatial complexity should acknowledge space as a context for decisions made by individual households and firms. We conclude that the combination of spatial econometrics and spatial economics does not produce an additional explanation for spatial behaviour.

² But there are exceptions: Dubin [41] presents a wonderful paper on a logit model incorporating spatial dependence on a GIS grid base.
Conclusion

In discussing scaling and aggregation in (regional) economics, our first observation is that the construction of an observation set may have strong limitations in relation to the spatial theoretical notions that are assumed. Secondly, the 'traditional' division between micro-, meso- and macroeconomics does not seem to have an explicit spatial connotation. Thirdly, in standard economic theory spatial extent and spatial resolution seem to coincide.

Considering spatial resolution and human behaviour in regional economic theory, there seems to be a trade-off between two aims. Certain types of models are capable of capturing the spatial complexity of urban and regional areas, for instance by using cellular automata. These models have a high spatial resolution, but do not include the choices made by individuals. Current static and dynamic location models, on the other hand, do not guarantee a high spatial resolution. It is here that future researchers should concentrate their efforts.
References

1. Krugman, P. R., 1996. The Self-Organizing Economy. Cambridge: Blackwell Publishers.
2. Otter, H. S., 2000. Complex Adaptive Land Use System: An Interdisciplinary Approach with Agent-Based Models. Delft: Uitgeverij Eburon.
3. Gibson, C. C., E. Ostrom and T. K. Ahn, 2000. "The concept of scale and the human dimensions of global change: a survey." Ecological Economics, 32: 217–239.
4. Costanza, R., L. Wainger, C. Folke and K. G. Mäler, 1993. "Modeling Complex Ecological Economic Systems." BioScience, 43: 545–555.
5. Schlicht, E., 1985. Isolation and Aggregation in Economics. Berlin: Springer Verlag.
6. Van Daal, J., and A. H. Q. M. Merkies, 1985. Aggregation in Economic Research: From Individual to Macro Relations. Dordrecht: D. Reidel Publishing Company.
7. Harcourt, G. C. (ed.), 1977. The Microeconomic Foundations of Macroeconomics. London: The Macmillan Press.
8. Forni, M., and M. Lippi, 1997. Aggregation and the Microfoundations of Dynamic Macroeconomics. Oxford: Clarendon Press.
9. Openshaw, S., 1983. The Modifiable Areal Unit Problem. Concepts and Techniques in Modern Geography. Norwich: Geo Books.
10. Begon, M., J. L. Harper and C. R. Townsend, 1996. Ecology. Oxford: Blackwell Science.
11. Beeby, A., and A. M. Brennan, 1997. First Ecology. London: Chapman & Hall.
12. Bockstael, N., 1996. "Modeling Economics and Ecology: The Importance of a Spatial Perspective." American Journal of Agricultural Economics, 78: 1168–1180.
13. Bockstael, N., R. Costanza, I. Strand, W. Boyton, K. Bell and L. Wainger, 1995. "Ecological Economic Modeling and Valuation of Ecosystems." Ecological Economics, 14: 143–159.
14. O'Neill, R. V., and A. W. King, 1998. Homage to St. Michael; or, why are there so many books on scale? In: Ecological Scale: Theory and Application. D. L. Peterson and V. Th. Parker (eds.). New York: Columbia University Press: 3–15.
15. Allen, T. F. H., 1998. The landscape is dead. In: Ecological Scale: Theory and Application. D. L. Peterson and V. Th. Parker (eds.). New York: Columbia University Press.
16. Krugman, P. R., 1993. "On the relationship between trade theory and location theory." Review of International Economics, 1: 110–122.
17. Rubenstein, J. M., 1989. An Introduction to Human Geography. New York: MacMillan.
18. Martin, D., 1996. Geographic Information Systems: Socioeconomic Applications. London: Routledge.
19. White, R., and G. Engelen, 1993. "Cellular automata and fractal urban form: a cellular modelling approach to the evolution of urban land-use patterns." Environment and Planning A, 25: 1175–1199.
20. Tobler, W., 1979. Cellular geography. In: Philosophy in Geography. S. Gale and G. Olsson (eds.). Dordrecht: D. Reidel Publishing Company: 379–386.
21. Couclelis, H., 1985. "Cellular Worlds: A Framework for Modeling Micro-Macro Dynamics." Environment and Planning A, 17: 585–596.
22. Engelen, G., R. White, I. Uljee and P. Drazan, 1995. "Using Cellular Automata for Integrated Modelling of Socio-Environmental Systems." Environmental Monitoring and Assessment, 34: 203–214.
23. Openshaw, S., and R. J. Abrahart, 2000. Geocomputing. London: Taylor and Francis.
24. Lloyd, P. E., and P. Dicken, 1977. Location in Space. London: Harper and Row.
25. Christaller, W., 1933. Die zentralen Orte in Süddeutschland. Eine ökonomisch-geographische Untersuchung über die Gesetzmässigkeit der Verbreitung und Entwicklung der Siedlungen mit städtischen Funktionen. Jena: Gustav Fischer.
26. Anas, A., R. Arnott and K. A. Small, 1998. "Urban spatial structure." Journal of Economic Literature, 36: 1426–1464.
27. Von Thünen, J. H., 1826. Der Isolierte Staat. Used edition: Von Thünen's Isolated State; An English Edition of Der Isolierte Staat. P. Hall (ed.). Oxford: Pergamon Press.
28. Vickerman, R. W., 1980. The Microeconomic Foundations of Urban and Transport Economics. London: MacMillan Press.
29. Alonso, W., 1960. "A Theory of the Urban Land Market." Papers and Proceedings of the Regional Science Association, 6: 149–157.
30. Richardson, H. W., K. J. Button, P. Nijkamp and H. Park, 1997. Analytical Urban Economics. Modern Classics in Regional Science. Cheltenham: Edward Elgar.
31. Pines, D., 1976. Dynamic aspects of land use patterns in a growing city. In: Mathematical Land Use Theory. G. J. Papageorgiou (ed.). Lexington: Lexington Books: 229–243.
32. White, M. J., 1976. "Firm Suburbanization and Urban Subcenters." Journal of Urban Economics, 3: 323–343.
33. Fujita, M., 1976. "Spatial Patterns of Urban Growth: Optimum and Market." Journal of Urban Economics, 3: 209–241.
34. Fujita, M., 1989. Urban Economic Theory: Land Use and City Size. Cambridge: Cambridge University Press.
35. Papageorgiou, Y. Y., and D. Pines, 1999. An Essay on Urban Economic Theory. Dordrecht: Kluwer.
36. Marshall, A., 1890. Principles of Economics. London: MacMillan.
37. Lambooy, J. G., 1998. Agglomeratievoordelen en ruimtelijke ontwikkeling (Agglomeration advantages and spatial development). Inaugural lecture. Utrecht: Universiteit Utrecht.
38. Allen, P. M., 1997. Cities and Regions as Self-Organising Systems: Models of Complexity. Amsterdam: Gordon and Breach Science.
39. Anselin, L., and R. J. G. M. Florax (eds.), 1995. New Directions in Spatial Econometrics. Berlin: Springer Verlag.
40. Florax, R. J. G. M., and S. Rey, 1995. The impacts of misspecified spatial interaction in linear regression models. In: New Directions in Spatial Econometrics. L. Anselin and R. J. G. M. Florax (eds.). Berlin: Springer Verlag.
41. Dubin, R., 1995. Estimating logit models with spatial dependence. In: New Directions in Spatial Econometrics. L. Anselin and R. J. G. M. Florax (eds.). Berlin: Springer Verlag.
8 Scaling Methods in Regional Integrated Assessments: from Points Upward and from Global Models Downwards

T. E. DOWNING¹, R. E. BUTTERFIELD¹, M. BINDI², R. J. BROOKS³, T. R. CARTER⁴, R. DELÉCOLLE⁵, Z. S. HARNOS⁶, P. A. HARRISON¹, A. IGLESIAS⁷, M. NEW⁸, S. MOSS⁹, J. E. OLESEN¹⁰, J. L. ORR¹¹, J. PORTER¹², M. A. SEMENOV¹³ AND J. WOLF¹⁴

¹ Environmental Change Institute, University of Oxford, Oxford, UK
² DISAT, University of Florence, Florence, Italy
³ IACR Long Ashton Research Station, University of Bristol, Bristol, UK
⁴ Finnish Environment Institute, Helsinki, Finland
⁵ INRA – Unité de Bioclimatologie, Avignon, France
⁶ Department of Mathematics and Informatics, University of Horticulture and Food Industry, Budapest, Hungary
⁷ Escuela Tecnica Superior de Ingenieros Agronomos, Ciudad Universitaria, Madrid, Spain
⁸ School of Geography and the Environment, University of Oxford, Oxford, UK
⁹ Centre for Policy Modelling, Manchester Metropolitan University, Manchester, UK
¹⁰ Department of Crop Physiology & Soil Science, DIAS, Research Centre Foulum, Tjele, Denmark
¹¹ Scottish Natural Heritage, Edinburgh, Scotland
¹² Department of Agricultural Sciences, Royal Veterinary and Agricultural University, Taastrup, Denmark
¹³ IACR Long Ashton Research Station, University of Bristol, Bristol, UK
¹⁴ Department of Theoretical Production Ecology, Wageningen Agricultural University, The Netherlands
Acknowledgements

This article draws upon the final report of CLIVARA, an EU project on climate change and agriculture (ENV4-CT95-0154); see Butterfield et al. [1].
Introduction

All models contain a simplification of the spatial or temporal scale of the real system, either by averaging or aggregating small-scale elements or by treating large-scale changes as a constant [2, 3, 4]. Three examples illustrate the importance of scale in climate change impact assessment. First, understanding of crop-climate modelling at the site scale is far more profound than is readily captured in spatial, regional models. Yet spatial shifts in agricultural potential, demand for water, use of fertiliser and competitiveness are more profound for agricultural systems than point estimates of changes in potential yield. Second, climate change captured in low-resolution global climate models may not relate to the sensitivity of local climate impacts. Changes in extreme events, such as ground frost, persistent drought and ocean-atmosphere anomalies, are poorly represented in existing models. Yet such changes are likely to be more significant than gradual changes in means. Third, understanding adaptation requires a new breed of climate change impact assessment – one that portrays realistic decision making, environmental, economic and social signals, and thresholds for action [5]. Scaling between the cognition of decision agents and their broader environments will be a considerable challenge.

This paper primarily focuses on the methodologies required to address the first of these demands for understanding scale in integrated assessment – scaling up impacts. We also provide an entrée into the literature on downscaling climate. The discussion introduces issues in scaling agents – a critical aspect of agent-based simulation. In many respects, these issues are part of the arcane toolkit of impacts modelling. However, resolving the scale issue can make a difference (see for example Easterling et al. [6], Mearns et al. [7, 8]).
The CLIVARA project undertook a European-scale assessment of climate change and potential crop production, using high-resolution, multi-scale crop-climate models. On two occasions we have (subjectively) compared our results with the results from single-scale spatial models used in global assessments. The results diverge: the global models predict major adverse impacts of climate change, whereas our process-oriented understanding shows widespread benefits and few adverse impacts on potential crop production. However, these comparisons
are simply visual – further comparison of impacts models remains an urgent agenda for research.

Scaling Up Impacts

This section provides examples of different approaches to scaling up, drawn from the experience of the Climate Change, Climatic Variability and Agriculture in Europe (CLIVARA) project [9]. The choice of scale is part of the process of formulating a model [10]. For some physical models, strong spatial interactions between the principal model variables, such as air movement between neighbouring grid cells, guide the choice of an appropriate spatial scale. Such spatial interactions are limited in agriculture (e.g., competition between plants for water, pests), although scaling is still present in crop models, for example in modelling the average conditions over a small area and in using average daily weather rather than trying to reproduce the changes throughout the day.

The techniques described here apply to common methods in climate change impact assessment, specifically crop-climate modelling. The focus is on crop phenology and yield. We assume nutrients will be applied and pests controlled. Scaling up all of the factors affecting crop production (e.g., agronomic management) and agricultural systems (e.g., costs and prices) is more difficult. Methods for bridging scales range from using available climate stations, to forcing all data, processes and output to a uniform grid scale, to multi-level approaches that embed reduced-form or emulation models (e.g., Polsky and Easterling [11]). One aspect of scaling up has been to determine the optimum amount of input data and model runs needed to represent the responses to environmental change in a 'region'. Several studies have discussed this issue, in particular the regional study on Central England [12] and the Danish country study [13].

Five approaches for representing variability across space are illustrated schematically in Figure 8.1 (see Downing et al. [3]). Site-driven approaches (a in Fig. 8.1) begin with station data. In contrast, raster grids form the basis for most spatial modelling (b), although much data is held in polygons rather than grids (d). The two forms can be combined in various ways (c, d and e).

Site-driven approaches

A common approach in determining regional yields is to identify sites and soils that can 'represent' that region. Such an approach is generally based on soil polygons for natural resource data. If soil units cover a large geographic region, they may be intersected with an agroclimatic index, such as aridity or agroecological zone, topography (often a proxy for temperature), or agronomic management regions (e.g., known land use or irrigated areas).
142 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
Figure 8.1: Approaches for representing spatial variability in models: (a) site driven (site/polygon and multiple sites in a polygon); (b) spatially uniform grids; (c) uniform grids with relational soils (spatially combined); (d) spatial interpolation; and (e) stochastic space.
Each resulting polygon is associated with representative climate data. Ideally, this would be a station within the boundaries of the polygon. If this is not available, the climate data may need to be interpolated from stations outside the polygon, or, more simply, the nearest station is used. In either case, a single site represents the entire polygon. This is the site/polygon approach.

A slightly more complex approach is the multiple site/region method. This assumes that several sites can be used to represent the spatial variability within a region. Brooks and Semenov [12] investigated how many sites were required to capture the variability in climatic and soils parameters for modelling climate change effects on winter wheat in central England. Analysis of data at numerous sites in the region indicated that it could be considered as a single climatic region with three main soil types. Empirical relationships were then derived between site predictions from a process-oriented wheat model (Sirius) and observed regional yield statistics for climate change.

A different interpretation of the multiple site/region method is illustrated by Davies et al. [14] in a study of the economic response of potatoes to climate change in England and Wales. Here, site-based crop and economic models were applied to 93 meteorological stations in the region for current conditions and several climate change scenarios. Model output variables (e.g., yield, gross margins) were then spatially interpolated from the 93 sites to a 10 km grid across England and Wales. In a case study in Denmark [13], correlations of summer precipitation among 650 precipitation stations were used to identify climatic regions of Denmark, each represented by one of 6 climate stations. Using the maximum correlation scheme, the areas best correlated with each site were identified.
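The maximum correlation scheme used in the Danish study amounts to assigning each area to whichever representative site its precipitation record correlates with best. The sketch below uses two hypothetical representative sites and synthetic precipitation series as stand-ins for the 650-station data set; none of the numbers come from the study itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical summer precipitation records (30 values) for two
# representative climate sites
site_series = {"Site A": rng.gamma(2.0, 30.0, 30),
               "Site B": rng.gamma(2.0, 30.0, 30)}

def assign_to_site(station, sites):
    """Assign a station record to the representative site with which it
    has the maximum correlation."""
    best, best_r = None, -2.0
    for name, series in sites.items():
        r = np.corrcoef(station, series)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best, best_r

# A station whose record is a noisy copy of Site A should map to Site A
station = site_series["Site A"] + rng.normal(0.0, 5.0, 30)
print(assign_to_site(station, site_series))
```

Running the assignment over every station partitions the country into regions, each represented by one site.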
This approach has the benefit of retaining complex soils data and can readily utilise detailed climate data (e.g., 30-year daily time series of temperature, solar radiation, wind, humidity and rainfall) that would be difficult to grid for a study region. Soils are acknowledged as more important for crop suitability than local differences in climate. If soil units are large, then one climate station may not be representative and the soil units may need to be subdivided by climatic regions. A site/polygon approach was also adopted in the regional studies in Hungary [15]. A further benefit is that the model only has to be run once for each soil type and crop management group within each climate polygon.

The main technical drawback of site-driven approaches is the relative complexity of the modelling environment. Creating and accessing databases in different formats requires special facilities not generally provided in a Geographic Information System (GIS) or in crop-climate programmes. Even with generic shells, such as AEGIS [16], considerable investment is required to set up a study area and carry out simulations. Even creating a spatial map of the length of the growing period requires fully processing the crop model for each soil-climate polygon, summarising the model output, and presenting the resulting statistics for the soil-climate thematic layer in the GIS. However, the approach does preserve the original data formats and integrity.

The conceptual limitations concern notions of space as an inherent variable. There is no a priori test for choosing the number of soil-climate spaces and the number of climate stations required to represent each polygon. To the extent that long-term change alters present environmental relationships, choosing ideal polygons for the present situation does not guarantee an adequate representation of the future. The estimation of the variance of modelled regional yield requires the spatial correlation of simulated yields to be assessed.
This can be done for current conditions by running the model for each database grid using observed weather data for a given period. However, climate change scenarios downscaled to sites are generally not spatially correlated and so cannot be used to investigate the future yield covariance structure. This is true for scenarios constructed using a weather generator to produce independent time series of synthetic weather data for each site (but see the spatial weather generator in Semenov and Brooks [17]). Regional climate models are another approach gaining popularity, but at some expense in terms of data handling.

Central England case study

A study has been conducted using the site/polygon approach in Central England [12]. The methodology involves predicting regional yield under a future climate by scaling up output from a site-based wheat model. Limited site information is assumed to be available, so that the method is applicable in most circumstances; only soil data for the region and detailed weather data at a few sites are required. The predictions of yield assume good management and the absence of pests and diseases, with spatial variations in modelled yield therefore being due to differences in weather and soil conditions throughout the region.
The first step was to investigate the relationships between the input and output variables of the Sirius wheat model by conducting a comprehensive sensitivity assessment. Next, a simpler model was constructed based on an analysis of the inherent relationships within the Sirius model and the results of the sensitivity assessment. The model was able to reproduce the Sirius yields closely for a variety of UK conditions. Test data sets with a wide range of yields produced root mean square errors of around 800 kg ha⁻¹, compared with standard deviations in the Sirius model of some 2000 kg ha⁻¹. Correlations were above 0.90.

The methodology consists of identifying the areas within the given region that have similar soil-weather characteristics. Unless the region is very large, contains major topographical features or has a vast diversity of soil types, there will only be a few such soil-weather combinations. Predictions of regional yield are then made by combining site-scale Sirius predictions for each soil-weather combination with predictions of the inter-site correlation pattern. The mean regional yield is simply defined as the weighted sum of the mean site yields. When data from only a few sites are available, the yields across the region must be inferred from these data. In particular, areas that are considered to have similar weather and soil conditions are likely to show a similar change in yield and can be considered as one large site.

It was necessary to assess the relationships between the site weather data and the site soils data. A sensitivity analysis on the full mechanistic winter wheat model and the resulting simpler model showed that the single important soil variable, under non-nitrogen-limiting conditions, is the total available water capacity. Different soils can therefore be grouped together if their water capacities are similar.
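The validation step just described — checking a reduced-form model against the full model over a test set via root mean square error and correlation — can be sketched as follows. The linear 'emulator', the synthetic stand-in for the full-model yields and the input variables (available water capacity, a temperature index) are all illustrative assumptions, not the Sirius model itself:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for full-model yields as a function of total
# available water capacity and a seasonal temperature index
awc = rng.uniform(80, 250, 200)            # available water capacity (mm)
temp = rng.uniform(12, 20, 200)            # seasonal temperature (deg C)
full_model = 4.0 + 0.02 * awc - 0.15 * temp + rng.normal(0, 0.3, 200)

# Reduced-form emulator: a least-squares fit on the same inputs
X = np.column_stack([np.ones_like(awc), awc, temp])
coef, *_ = np.linalg.lstsq(X, full_model, rcond=None)
emulated = X @ coef

# Agreement statistics analogous to those reported for the simpler model
rmse = np.sqrt(np.mean((emulated - full_model) ** 2))
corr = np.corrcoef(emulated, full_model)[0, 1]
print(rmse, corr)
```

As in the study, an emulator is judged adequate when its RMSE is small relative to the spread of the full model's yields and the correlation is high.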
Those areas identified as having similar climates and soils are considered as a single large site in the estimation of the mean and variance of regional yield. This overcomes the need to simulate each individual farm within the region. The mean regional yield is calculated for present and future climatic conditions for each different soil-climate combination. This is undertaken using the Sirius model with synthetic weather data produced by the LARS-WG stochastic weather generator (see Barrow et al. [18]). The estimation of the variance of regional yield requires the spatial correlation of the simulated yields to be assessed.

The close correspondence of the weather characteristics of the sites means that they can be grouped together. Since the topography of the region is fairly homogeneous, the whole region's weather can be represented by just one site. There is a slight temperature gradient from north to south and a precipitation gradient from west to east across the three sites; hence, a centrally situated site would be the most representative. However, the weather database does not contain such a site with observed weather data for 1960–90 and so, instead, the site of Oxford was chosen, as it has a complete record and site climate change scenarios are available.
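The aggregation described above — a mean regional yield formed as an area-weighted sum of the soil-climate combination yields, with a variance that depends on the covariance between them — can be sketched numerically. The yields, weights and standard deviations below are illustrative, not CLIVARA values:

```python
import numpy as np

# Illustrative mean yields (t/ha) for three soil-climate combinations,
# and the fraction of the region's area each covers
yields = np.array([7.2, 6.5, 5.8])
weights = np.array([0.5, 0.3, 0.2])      # relative areas, sum to 1

# Mean regional yield: weighted sum of the mean site yields
regional_mean = weights @ yields

# The variance of regional yield needs the covariance matrix of site
# yields: var = w' C w.  Perfectly correlated sites give the upper
# bound; independent sites give a much smaller variance — hence the
# need to assess the spatial correlation of simulated yields.
sd = np.array([1.0, 0.9, 1.1])           # site yield standard deviations
corr_perfect = np.ones((3, 3))
corr_indep = np.eye(3)

def regional_var(w, sd, corr):
    C = np.outer(sd, sd) * corr          # covariance matrix
    return w @ C @ w

print(regional_mean)                     # = 7.2*0.5 + 6.5*0.3 + 5.8*0.2
print(regional_var(weights, sd, corr_perfect))
print(regional_var(weights, sd, corr_indep))
```

The gap between the two variance figures is why spatially uncorrelated downscaled scenarios cannot be used to estimate the future yield covariance structure.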
The input parameters used for the three sites were the same, and so the differences in synthetic yield are entirely due to the different weather data at the three sites. The analysis of the weather data indicated that the systematic differences were small, so the region could be considered as experiencing a single climate. Furthermore, the random differences were not sufficiently great that they needed to be modelled explicitly. The comparison of yields supports this conclusion, with small differences between the sites. The sensitivity analysis showed that small random differences in weather could produce significant differences in yield; thus, the differences in yield observed here are consistent with small random differences between the sites.

The recorded regional yield data only cover the periods 1974–1979 and 1983–1995 and will be influenced by a number of factors that are not included in this study, particularly the effects of pests and diseases and changes in management practices and technology. This prevents a meaningful comparison with the mean and standard deviation of yields modelled for central England. However, a limited validation can be made by examining the yield pattern across the region. Actual yields should contain a climate signal, and a comparison of regional yields across Britain shows that there are strong correlations between the yields of nearby regions. This is consistent with the model analysis, in that yield is related to cumulative values of the climate variables and many of the relationships are approximately linear, so that the relationship of yield to climate should not have large discontinuities. The similarity of the climate across Britain means that there should not be large differences in the climate-related effects on yield.

The methodology produces the most accurate prediction of the regional yield variance by combining the weather-generator climate scenarios and the observed climate data.
However, this prevents an analysis of the distribution of regional yields or of the statistical significance of the results. An indication of the likely shape of the distribution of regional yield can be obtained from the observed data by multiplying the predicted yields for each year for the three soil types by the soils' relative area values to give a simulated regional yield. The lack of data (only 30 values) and the fact that the climate change scenario does not include changes in variability limit the value of such results. The simulated regional yield distributions for the observed data, and for the observed data adjusted for the two climate change scenarios, are slightly negatively skewed. The skewness probably results from several of the years being at or close to potential yield, so that variations in water deficit between those years have no effect on yield.

Uniform grid approach

The most common approach to linking GIS and crop models is to convert all of the input data into raster grid databases with a uniform pixel size and geographical co-ordinates. The original data need to be interpolated to the
raster grid. Data with irregular boundaries, such as soil data, are forced to match the grid. The site-based crop model (or a simplified version) is then applied consecutively to each grid cell in the region. The resolution is generally chosen based on the availability of data, the resulting size of the database and the computing resources. Ideally, the resolution would reflect the sensitivity of model results to spatial variation and discontinuities, as well as corresponding to the accuracy of the input data.

Recent examples of this method are studies that have applied complex site crop models to relatively fine grids at national scales. Carter et al. [19] applied the CERES-Wheat and POTATOS site-based crop models to data held in a network of 3827 grid boxes at a 10 × 10 km resolution across Finland. The application of simplified and complex site models to gridded input data has been demonstrated by Rounsevell et al. [20]. The European study in CLIVARA also used this approach by applying a simplified crop model to gridded climatological and soil data [21].

Gridded models generally require converting site models to a regional-scale model and using aggregated regional inputs. Model parameters and inputs need to be scaled, by running the model throughout the region either using interpolated inputs or by dividing the region into sub-regions with the same characteristics. At the site level, detailed crop-climate models simulate plant responses to a wide variety of environmental and management changes. These results are used to calibrate parameters in more generic, reduced-form models that run on the spatial data (e.g., EuroWheat at the European scale [21]).

Brisson et al. [22] studied large-scale spatial variations in maize suitability in France. The main aim was to build a crop model (GOA-Maize) with the minimum of biological detail needed to produce useful information for decision making and which is able to use readily available input data.
The model was applied to climatic data on a ten-day time step at a gridded resolution of 20 km². Brignall and Rounsevell [23] developed a simple model to assess the effects of climate change on winter wheat potential in England and Wales. This classified crop performance as well suited, moderate, marginal or unsuited, based on calculations of machinery work days and drought stress. Two reduced-form models were developed by Wolf [24, 25] and compared with output from complex site-based models under current and future climatic conditions. The POTATOS reduced-form potato model was compared with the NPOTATO site model, whilst the SOYBEANW reduced-form soya bean model was compared with the SOYGRO site model. Results were highly variable, showing that the models produced similar responses at some sites and under some climate change scenarios, but quite different results at others.

The main advantage of the gridded methods is that modelling is relatively simple once the data have been converted to the grid. The principal disadvantage is the
SCALING IN INTEGRATED ASSESSMENT 147
lost connection between the input data and model processes. The original data are simplified and generalised to fit the grid. For example, daily rainfall is reduced to monthly time series and crop-climate models are simplified to work at this scale. If a very high resolution is chosen, the required computer resources (data storage and processing time) generally limit the number of simulations attempted. Since the original data are not available to the model, testing of model sensitivity and uncertainty may be reduced, especially if each run requires significant computing resources. The disadvantages are especially important in treating soil information. In Figure 8.1B, soil types do not correspond to the imposed grid – some grid cells have more than one soil. Refining the grid to correspond more closely to the soil boundaries is possible, but would dramatically increase the size of the database, as the same information is duplicated for many pixels – there are fewer soils than there are grid cells. Where more than one soil is present in a pixel, often the dominant soil type is gridded and the other soils in the grid, or associated soil types, are not included in further modelling. Alternatively, the best soil is gridded, although the definition of best would vary by crop and season. Another approach is for soil properties (such as water holding capacity) to be averaged, based on their prevalence in each pixel. However, this may result in quite unrealistic input data if the soil types are quite distinct. Some soils will be unsuited for agriculture and can be masked from the database. Input databases for each soil can be prepared and models run for every soil type. The output would still have to be aggregated to the pixel level, generally using a weighted average or stochastic dominance criteria.

European case study

Simplified crop simulation models for wheat, potato and grapevine have been either developed or adapted for application at the continental scale [21].
These reduced-form models approximate the behaviour of complex site-based models, but require less demanding input and calibration data. The models were combined with statistical functions to temporally downscale climatic input variables from the monthly to the daily resolution. A GIS was used to run each model across spatial data sets interpolated to a regular 0.5° latitude/longitude grid. Each 0.5° grid cell was assumed to represent a homogeneous region and the model was applied independently to each cell. Results from these models provide a continental overview of the effects of climate change on crop suitability and productivity. The performance of the continental scale models in simulating current regional variations in crop productivity was evaluated against observed agricultural statistics. Ratios for calculating actual yields from simulated water-limited yields were available for wheat and potato [26]. This allowed a comparison of the range of simulated yields across all grid cells in any country with the observed yields, and gave a satisfactory validation of the models.
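The cell-by-cell application described above can be sketched as a simple loop over a regular grid, with the model applied independently to each cell. The yield function below and all its coefficients and inputs are invented for illustration; it is not the EuroWheat model or any of the models cited.

```python
import numpy as np

def reduced_form_yield(temp_c, precip_mm, whc_mm):
    """Toy reduced-form yield response (t/ha); coefficients are invented."""
    water = min(precip_mm + 0.5 * whc_mm, 450.0) / 450.0   # water limitation factor
    thermal = max(0.0, 1.0 - abs(temp_c - 16.0) / 20.0)    # temperature suitability
    return 10.0 * water * thermal

# Small regular grid; every cell is treated as a homogeneous region
lats = np.arange(45.0, 47.0, 0.5)
lons = np.arange(5.0, 7.0, 0.5)
temp = np.full((lats.size, lons.size), 15.0)    # interpolated mean temperature (deg C)
precip = np.full_like(temp, 300.0)              # seasonal precipitation (mm)
whc = np.full_like(temp, 120.0)                 # soil water-holding capacity (mm)

# Apply the site model consecutively to each grid cell
yields = np.zeros_like(temp)
for i in range(lats.size):
    for j in range(lons.size):
        yields[i, j] = reduced_form_yield(temp[i, j], precip[i, j], whc[i, j])
```

The results array can then be aggregated to administrative units, as done for the country-level comparisons in the study.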
148 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
The effects of climate change on wheat, potato and grapevine production across Europe were investigated using two climate change scenarios from global climate models (GCMs) for the year 2050. Spatial uncertainties in crop responses, which are attributable to uncertainties in GCM projections of future climate, were also quantified for wheat. The scaling-up method involved the application of reduced-form mechanistic crop models to spatially gridded input datasets. Various statistical functions were explored for temporally downscaling the relevant climatic input variables (minimum and maximum temperatures and solar radiation) from the monthly to the daily resolution (Fig. 8.2).
[Figure 8.2 comprises two scatter plots of estimated against observed daily data: (a) R² = 0.99, RMSE = 0.1, bias = −0.31; (b) R² = 0.93, RMSE = 0.21, bias = +0.09.]
Figure 8.2: Comparison of (a) the mean duration of the grain filling period (in days) and (b) mean grain yield (in t ha⁻¹) calculated using observed daily data and daily data estimated using a sine curve interpolation routine at 175 sites.
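A minimal version of this kind of monthly-to-daily downscaling can be sketched by fitting a single annual sine harmonic to the twelve monthly means. The Brooks routine used in the study is more elaborate; this is only an illustrative stand-in, and the monthly values are invented.

```python
import math

def sine_curve_daily(monthly_means):
    """Estimate 365 daily values from 12 monthly means with a single
    annual sine harmonic (a simplified stand-in for the Brooks routine)."""
    n = len(monthly_means)               # 12 months
    mean = sum(monthly_means) / n
    # Least-squares fit of the first annual harmonic to the monthly means
    a = 2.0 / n * sum(m * math.cos(2 * math.pi * k / n)
                      for k, m in enumerate(monthly_means))
    b = 2.0 / n * sum(m * math.sin(2 * math.pi * k / n)
                      for k, m in enumerate(monthly_means))
    return [mean + a * math.cos(2 * math.pi * d / 365.0)
                 + b * math.sin(2 * math.pi * d / 365.0)
            for d in range(365)]

# Monthly mean temperatures (deg C, illustrative)
daily = sine_curve_daily([5, 6, 8, 11, 14, 17, 19, 18, 15, 11, 8, 6])
```

By construction the daily series preserves the annual mean of the monthly input, which is the property needed when the downscaled values feed a daily crop model.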
A GIS was used to run each model across the spatial data sets interpolated to a regular 0.5° latitude/longitude grid. Results were aggregated to the country level, providing estimates of the impact of climate change on agroclimatic environments across Europe and a statistical test of significant differences between present yields and those under the future scenarios (Fig. 8.3).

Spatially combined: uniform grids with relational soils

An alternative to uniform grids is to hold soil data as a relational database. Each grid cell points to a database of soils to look up all the soils that are found in that grid cell. This can provide access to richer soils data. Connections to representative soil profiles for each soil unit can add information for soil layers. Conversely, soil data can be gridded and climate overlaid as polygons (as in Olesen et al. [13]).
[Figure 8.3 comprises two bar charts of change in wheat yield (t ha⁻¹) for the UK, Spain, Romania, the Netherlands, Italy, Hungary, Germany, France, Finland and Denmark.]
Figure 8.3: Change in mean water-limited wheat yield (from the 1961–90 baseline) for ten European countries due to natural climatic variability (noise; left bar; n = 7) and due to climate change by 2050 under the IS92a emissions scenario (REF; middle bar; n = 4) and the IS92d emissions scenario (IS92d; right bar; n = 4). Atmospheric CO2 concentration is: (top) held constant at 334 ppmv; and (bottom) increased to 515 ppmv for the REF scenarios and to 435 ppmv for the IS92d scenarios. Horizontal lines show the maximum, minimum and median estimates for each scenario. Countries where climate change causes a significant (95 per cent significance from a two-tailed t-test) change in yield are marked with a black dot for the median estimate (Source: Harrison et al. [21]).
This approach improves upon uniform grids by facilitating access to complex soils data and eliminating redundant soil data for each pixel. Linking the raster (gridded climate) and vector (soil polygon) data requires some programming and may not be easy to do within standard GIS packages.
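The cell-to-soils lookup described above can be illustrated with a toy relational structure in which each soil's attributes are stored once and each grid cell holds only soil identifiers and area fractions. All names and values here are hypothetical.

```python
# Soil attributes stored once in a relational table (values invented)
soil_table = {
    "S1": {"whc_mm": 150.0},   # deep loam
    "S2": {"whc_mm": 60.0},    # shallow sandy soil
}

# Each grid cell references the soils present in it, with area fractions
cell_soils = {
    (0, 0): [("S1", 0.7), ("S2", 0.3)],
    (0, 1): [("S2", 1.0)],
}

def cell_whc(cell):
    """Area-weighted water-holding capacity (mm) for one grid cell."""
    return sum(frac * soil_table[sid]["whc_mm"]
               for sid, frac in cell_soils[cell])
```

Because soil attributes are held only once, refining the grid no longer duplicates soil records, which is the advantage over the uniform-grid approach noted in the text; alternatively, the model can be run once per soil in the cell and the outputs aggregated.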
A variation of the grid approaches is to sample the grid. The baseline conditions can be modelled on a uniform grid with a high resolution (provided the underlying data are of sufficient quality). Scenarios of climate (or other) change can then be modelled on the basis of a sample of the baseline grid. Some a priori tests of the model response surfaces should illuminate an optimal grid for sampling. For instance, the slope of the model response (yield) can be mapped and related to the model input data (topography, mean climate, soils). Agroecological regions can be defined based on the model response surface, rather than on the input data. It is likely that changes in model responses (e.g., the difference between baseline yield and yields with climate change) are easier to model and require fewer sample points than actual yields. The results need to be interpolated to the original grid. Two approaches are:

■ Distance interpolations: The actual values for the sample results are interpolated based on distance-weighted distributions.
■ Mean anomalies: As is often done for climate time series, the sampled results are expressed as deviations from the baseline average. The anomalies are interpolated to the study area grid, then converted back to actual units.
Butterfield et al. [27] compared the spatially gridded inputs and spatially combined inputs approaches for modelling climate change impacts on wheat in Great Britain. Climatic data and data for the dominant soil type were initially interpolated to a 10 by 10 km grid. The Sirius wheat model was then applied to the 2,840 grid cells in the region. The model was then rerun using the spatially combined inputs approach for 12,924 unique soil/climate polygons in the region. The comparison showed that model output from the two approaches was not statistically different for current climatic conditions. Hence, the authors recommended the gridded input approach for their study as this was more computationally efficient. Brklacich et al. [28] used a similar approach in a study of agricultural land rating in the Mackenzie Basin in northwest Canada. Climate data were gridded at a 10 km resolution and soils data were held in polygons at the 1:1 million scale. To reduce the number of calculations, the climate associated with the 10 by 10 km grid cell closest to the centroid of each soil polygon was used. The AEGIS (Agricultural and Environmental Geographic Information System) approach [29] is comparable to that of Brklacich et al. [28], except that a representative meteorological station within each soil polygon is utilised. If a meteorological station is not available within the boundaries of the polygon, then the nearest station is used. AEGIS contains the DSSAT suite of site crop models in a PC GIS-based environment and is designed as a regional decision support system for policy making in agriculture. Van Lanen et al. [26] combined three types of data held in polygons to study crop growth potential in the European Union. Here, the authors defined
4,200 Land Evaluation Units (LEUs). Each LEU was a unique combination of a soil unit, a representative meteorological station for one of 109 agroclimatic zones, and an administrative region. Qualitative (based on expert knowledge) and quantitative (based on crop simulation models) land evaluation methods were then applied to each LEU to determine the current potential suitability and productivity of wheat. This approach is a compromise between running everything at a high resolution and formal hierarchical model designs. The sample frame may vary depending on the modelled process. For instance, crop phenology requires fewer sample points than soil water processes (e.g., infiltration, runoff, erosion), while changes in yield are somewhere between these two extremes. The sample frame is likely to reflect soil and agroecological spaces, as noted in creating soil-climate polygons. Denser sample networks are required where model responses are more variable. Statistical methods for sample design and validation are well developed and can measure the errors associated with various sample designs.

Great Britain case study

The example for Great Britain [27] used soil polygons overlaid on a 10 by 10 km monthly climate grid (Fig. 8.4). To determine the necessary spatial resolution, yields were calculated as averages for each grid cell and as the yield of the dominant soil in each grid cell. The effect on yields of using a dominant soil per climate grid cell, rather than all soil polygons in a grid cell, was not significant. The spatial patterns were compared using a polynomial equation to relate the pattern to the x, y grid using generalised least squares. Using a cubic polynomial to describe the spatial patterns of the dominant yield per grid cell and the average yield per grid cell showed that the intercept and the other coefficients of the models, relating to the x and y coordinates, were not significantly different at the 5% level.
There is, therefore, a very high probability that the spatial patterns were not different. Another methodological issue is how climate data should be downscaled from the monthly to the daily time scale. Downscaling monthly temperature and radiation data to daily values using the Brooks sine curve interpolation has been successfully tested for Sirius at Great Britain sites (reported above; see Harrison et al. [21]). Other methods of downscaling, including the use of weather generators, were also tested. Daily precipitation was derived in the Great Britain study from monthly totals and the mean number of rain days. Mean rain per rainy day was distributed randomly to days in the month. Observed daily site climate data were used with polygon soil data to test the downscaling method. This involved running the model on the 26-year observed precipitation data, then averaging the data to give monthly values, using the downscaling approach to derive daily values, and rerunning the model. The combined effect of downscaling temperature and radiation (using the sine curve) as well as downscaling
precipitation as described was also tested using the observed daily data at the six sites.

[Figure 8.4 is a flow chart linking monthly 10 km gridded climate data (or current site daily weather), GCM gridded daily data, soil polygons (soil type, texture class) and management, elevation, land use and administrative boundary data, via weather generator parameterisation, downscaling to daily data and pedo-transfer functions (AWC, PWP, FC), to a matrix of soil-climate combinations and a daily crop model producing water-limited/potential yield for each soil-climate combination, followed by unsuitability masks, aggregation to county/country level and validation against agricultural statistics.]
Figure 8.4: Schema showing the approaches for the spatial application of crop models in Great Britain (Source: Butterfield et al. [1]).
The results across the six sites indicate that, although the maximum difference in any year may be up to 2.9 t ha⁻¹, the mean is always less than 0.6 t ha⁻¹ (Table 8.1). As our monthly database is a 1961–90 average, the error due to downscaling the monthly data is equivalent to this mean. The same approach
for downscaling data was applied for the POTATOS model and errors were found to be insignificant.
Table 8.1: Modelled mean, maximum and minimum yields calculated for MAFF's Government Office Regions. Survey data are county averages collated into regions and adjusted to 1990 levels; observed 1997 and 1998 survey yields and the average of the 10 highest farm yields in the 1997 survey are also shown. The first three columns are SIRIUS model results; the remainder are MAFF statistics. All yields in t ha⁻¹.

Region                     Model  Regional  Regional  County  Survey  1997    1998    Max
                           mean   max       min       s.d.    (1990)  survey  survey  1997
North East                  9.99  13.07     7.06      1.27    6.23    6.47    6.81     8.64
Yorkshire and the Humber   10.53  12.76     5.94      1.13    6.09    6.81    6.51    10.29
East Midlands               9.99  12.20     5.65      1.31    5.90    6.73    7.61    10.5
Eastern                    10.69  12.57     5.92      1.12    6.11    7.18    7.71    10.28
South East and London       9.97  13.05     5.92      1.34    5.93    6.79    7.14    11.25
South West                 10.16  13.61     5.34      1.23    5.78    5.53    8.16     9.37
West Midlands              10.33  12.53     5.74      1.30    5.90    6.53    7.34    12.05
North West and Merseyside  10.01  13.52     5.72      1.50    6.15    3.87    6.59     8.64
For the grapevine model, differences between using observed and estimated daily temperature and radiation data were calculated using data from the five test vineyards, to check that large errors would not be introduced into the grapevine model output by using downscaled climatic data. Errors were found to be insignificant, with all R² values greater than 0.95. Variation across regional modelled yields appears conservative when looking at regional averages, but the maximum and minimum figures indicate the wide range being predicted across these large regions. For the purposes of validation, the correlation coefficient between modelled yields and observed yields adjusted to 1990 levels for all English and Welsh counties was calculated and found to be close to zero. This occurred because higher than expected modelled yields in the west and lower than expected modelled yields in the east skew the data. In general, the model gives reasonable estimates of water-limited yield in most areas, considering that it does not take into account the effects of pests, diseases or weeds. As with all
model estimates there is a difference between 'potential' yield (the maximum possible) and 'actual' yield (that seen in the field). Comparing the 1990 baseline yields with the Sirius model results, actual yield can be estimated as 0.6 of the potential yield. Although the range in mean yield across the regions is close in magnitude for the modelled and survey yields (0.7 t ha⁻¹ compared with 0.45 t ha⁻¹), the range in modelled and survey yields at the county scale is larger (3.3 t ha⁻¹ compared with 1.3 t ha⁻¹). This is important in terms of our confidence in predicting 'regional' yields and may also contribute to improving the application of mechanistic models at these scales. Validation of the results is limited because of the temporally averaged climate data (it was not possible to simulate results for individual years which could be compared with yearly crop statistics). The Ministry of Agriculture, Fisheries and Food (MAFF) survey data are very limited at the county level and only five years of survey data were available for county validation. The high sensitivity of the Sirius model to available soil water causes winter wheat yields to be lower than expected in some areas of East Anglia, although when the mean of the Eastern region was calculated it gave the highest mean of all regions. Very high modelled yields in areas close to those marginal for production, due to excess winter precipitation and difficulties in working the land, also gave cause for concern. These problems could be overcome if some allowance for excess water, or for the impact of high summer rainfall on lodging, were included in the model. Aggregation of the results to the regional level gives confidence in this approach (Table 8.1). The range in mean yield over all the English regions (for the 1961–90 mean climatology) is small, as is that seen in the MAFF survey yields (average of five years).
In recent years survey yields have also become comparable with the model yields, and observed farm maximum yields are comparable to the model maximums. Such an assessment at the county and regional scales would not be feasible using a traditional site-based approach to crop modelling. The spatial aspects of the approach give knowledge of possible new areas of expansion for the production of specific crops (e.g., grapevine) and of shifts in the optimal locations for production of traditional crops in the future. This information is difficult to gain using a site-based approach. The method described limits the model output to long-term period means. When all climate variables in the gridded climatology are available as a time series, it will be possible to conduct a time-series analysis for comparison of yields year by year. Not only will this improve the model validation, it will also allow assessment of the year-to-year variability in yields, which is of particular benefit in the assessment of risk when considering production of novel crops.

Spatial interpolation approach

Spatial interpolation techniques (e.g., kriging, neural networks) can be used to estimate model outputs at unsampled sites. Spatial interpolation is also used
for preparing irregularly scattered data to construct contour maps or contour surfaces. Both create a regular grid of interpolated points. Spatial interpolation methods differ in their assumptions, their local or global perspective, and their deterministic or stochastic nature. Using crop models developed for simulating point responses to predict regional yields raises the question of whether to spatially interpolate the inputs and run the model for every interpolated point, or to run the model only at points where inputs exist and then spatially interpolate the model outputs. The quality of the results obtained using these two approaches depends mainly on the degree of non-linearity of the model, as well as the spatial structure of the inputs. Unfortunately, the process of extending point model estimates over the land surface is often complex and nearly impossible in very irregular terrain. Different spatial interpolation techniques can be used for eco-climatic classification (e.g., fuzzy classification of remotely sensed imagery, kriging, splining, etc.) [14, 30, 31]. Accordingly, it is possible to hypothesise that these techniques can provide information on the spatial distribution of outputs from crop simulation models. The extent to which these approaches can be used to estimate crop productivity, derived from crop simulation models, in complex terrain is, however, mostly unknown. In particular, there is a lack of methods that use satellite images and ancillary data (elevation, distance from the sea, latitude, etc.) for spatially extending model parameters computed at ground stations (i.e., the duration of phenological stages, yield, etc.; see Bindi et al. [32]). Following these considerations, different sources of information (remotely sensed imagery, morphological and geographical data) have been used to link site crop simulation model outputs to their spatial regional distribution.
Specifically, crop model parameters are extended between meteorological sites over a region using three spatial interpolation techniques (fuzzy classification, kriging, neural networks). This method assumes that estimated parameters are spatially autocorrelated, and also that they are mainly determined by eco-climatic factors which also drive global vegetative development. Kriging and neural network approaches are used to find the spatial correlation of model output variables, whilst Normalised Difference Vegetation Index (NDVI) profiles provide information on the eco-climatic factors. Climate data are required from stations evenly distributed over the region, together with environmental data on altitude, latitude and slope. Digital terrain model (DTM) data are required to provide information on surface conditions. One technique to interpolate over space is to use a spatial production function. Site model results can be used to derive regional statistical predictor functions of the relationships between yield, technology and climate. An emulator can derive the response surface from multiple runs of the site model, using statistical techniques (e.g., Buck et al. [33]). A reduced-form model based on the results of the Sirius winter wheat model was developed in
the Central England study [12]. Specific needs arise when the available data resolution is too coarse for input into models (e.g., daily climate data having to be derived from monthly means), when insufficient data are available at the required resolution, or when historic data records are insufficient (weather generators, which are also used to apply scenario changes). Statistical predictor functions can be derived between sample site results and gridded data. The statistical formula is used to interpolate the model results to the original grid. For example, in a study on wheat and soya bean in Spain [34], statistical analyses were used to derive yield response functions from the results of temperature, precipitation and CO2 sensitivity runs conducted at seven sites chosen to represent the agro-climatic regions of Spain. Agricultural response functions were then developed from these site-specific results and applied to monthly climatic input data on a 10 by 10 km grid across Spain for current conditions and several climate change scenarios. The advantages of the spatial interpolation approach are that spatial relationships are taken into account, as well as eco-climatic factors. The disadvantages are the requirement for satellite NDVI data and DTM data. Advanced processing methods are also required.

Tuscany, Italy case study

Most studies on the impacts of climate change on natural vegetation and agricultural crops have been undertaken at two spatial scales. First, experiments on the effects of elevated atmospheric CO2 concentration and higher temperatures on plant growth and yield have been performed at the plant level using different system facilities (e.g., growth chambers, open top chambers, plastic tunnels, FACE) (see Van de Geijn et al. [35]). Considerable work has also been done at this site level using growth simulation models.
Models use the knowledge gained from experimental studies to systematically investigate the effects of future climate predictions from global climate models (GCMs) on crops, including appropriate response strategies. Second, a few studies have been conducted on the impact of climate change at the regional scale by linking crop simulation models to spatial interpolation techniques. Thus, in order to investigate the effects of climate change and climatic variability on crop productivity at the regional scale a method is needed to estimate crop model parameters in the area between ground-based meteorological stations. To address these issues a methodology has been developed and tested for extending the output from a grapevine model over a spatially complex region in central Italy (Tuscany) [32]. The method has been used to evaluate the regional response of grapevines to climate change. The grapevine model was calibrated using field and climatic data for three experimental stations located in Tuscany. It was then applied at 67 sites for both current and future climatic conditions. Present and future climate datasets for 31 years were produced using the LARS stochastic weather generator
[36] calibrated on observed historical weather data and on output from GCMs, respectively. Different methods for spatialising the site model output variables were evaluated and used to investigate the effects of climate change on viticultural production at the regional scale. Three methods were tested for spatialising the means and coefficients of variation (CV) of model output variables computed for 31 years at the 67 sites. Specifically, a fuzzy classification approach was utilised for processing the remotely sensed data, and a kriging approach was used to explore the spatial variation of model parameters and their extension over the land surface. A method for statistically combining the results of the fuzzy classification and kriging procedures was also investigated in order to optimise the estimation process. The third method examined the possibility of using neural networks for spatialising the means and CVs of output variables from the grapevine model. Errors associated with the different methods have been quantified and indicate that all three methods provide satisfactory and similar estimates of mean model output variables. However, only the neural network approach was able to accurately estimate the CV of model variables. The regional grapevine model was validated in two steps: firstly, by comparison with model outputs from the original site-based simulation study (see Bindi et al. [32] for details) and, secondly, by comparison with observed data obtained from the Agrometeorological Service of Tuscany, institutes of the Agricultural Ministry and private consortia of the most important viticultural areas of Tuscany. The date of physiological maturity is predicted to occur from the middle of August in the valleys to the middle of October on the upper hills of Tuscany. These estimates are in reasonable agreement with the observed data, although the estimates show a lower spatial variability (Fig. 8.5).
The model is also able to reproduce the correct spatial pattern of yield and acid and sugar concentration, although simulated values tend to be slightly lower and have a lower spatial variability. This lower spatial variability of the model outputs is due essentially to the methodologies used to extend the site model output parameters (i.e. neural networks) and to generate the synthetic weather data (LARS-WG). Both these methodologies tend to smooth extreme values resulting in lower variability. Two climate change scenarios were used from the Hadley Centre’s HadCM2 GCM. These were mean climatic changes from the greenhouse gas only experiment (HCGG) and the greenhouse gas and sulphate aerosol experiment (HCGS). The model was run using a CO2 concentration of 353 ppmv for the baseline and 515 ppmv for the climate change scenarios. Model parameters were adjusted to account for the direct effects of elevated CO2, using results from free air carbon dioxide enrichment (FACE) experiments. Results are
available for mean and variance of phenology, yield and quality characteristics of the grapevine.
Figure 8.5: Comparison between simulated model output variables (Sim) and the observed statistical data (Obs) for the three major viticultural landscapes in Tuscany: (a) date of maturity, (b) fruit dry matter, (c) acid content, and (d) sugar content (Source: Bindi et al. [32]).
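The idea of extending site model outputs between stations using eco-climatic information such as NDVI profiles can be illustrated with a toy k-nearest-neighbour predictor. This is a much simpler stand-in for the fuzzy classification, kriging and neural network methods actually used, and all site values are invented.

```python
import math

# Site model outputs (e.g., date of maturity as day of year) at
# meteorological stations, each with a summary NDVI profile (invented)
sites = [
    {"ndvi": [0.30, 0.55, 0.70], "maturity_doy": 240},
    {"ndvi": [0.25, 0.45, 0.60], "maturity_doy": 255},
    {"ndvi": [0.35, 0.60, 0.75], "maturity_doy": 232},
]

def predict(ndvi_profile, k=2):
    """Predict the model output at an unsampled location from the k
    sites with the most similar NDVI profiles."""
    def dist(site):
        return math.dist(ndvi_profile, site["ndvi"])
    nearest = sorted(sites, key=dist)[:k]
    return sum(s["maturity_doy"] for s in nearest) / k
```

The underlying assumption, as in the text, is that locations with similar eco-climatic signatures produce similar model outputs, so the NDVI profile of any pixel can be mapped to an estimate of the site model output there.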
Stochastic space approach

In this approach the variability of crop conditions over geographic space is estimated using remote sensing (see Delécolle [37]), and a stochastic parameterisation of a crop model is then derived. The advantage of this method is that it allows a quick estimation of the regional field-by-field variability of crop conditions. This source of variability, which can be considered as a buffer to climate change, is not possible to identify with any other method. The method assumes no explicit soil variability, a single sowing date and an equal climate across the region. Daily weather data are required at the regional scale. The disadvantage is that high resolution infrared remote sensing data are expensive and processing is time consuming.

Paris Basin, France case study

Mechanistic crop models are, in general, adapted to the field scale. There are numerous examples of applications of crop models at this scale. In this study, a field is considered to be a homogeneous entity (without spatial variability) and is described by the type of crop planted (species, variety) and a collection of practices and events (sowing date and density, dates and amounts of fertiliser applications, etc.). A region can be represented as a mosaic of individual fields. Hence, mechanistic crop models can be used to simulate regional production
by running a model for each field within a region. However, this assumes that the information required to calibrate the model is available for every field, which is generally not the case. Thus, regional studies often assume that a region is one large field and define equivalent regional crop state variables and parameters before directly applying a site crop model. Such equivalent values cannot be measured, only calibrated, and are generally meaningless. An alternative method, adopted in the Paris Basin study [37], is to introduce the spatial variability of crop conditions within a region into the site process model as distributions of the related crop parameters (Fig. 8.6). Scaling-up from the site to the regional scale therefore involves estimating joint probability laws (including correlations) for all model parameters. Distributions of crop state variables or final production are then established by generating sets of parameters through a Monte-Carlo scheme [38], and running the model for each of these parameter sets with regional values of the input (climate) variables. The shape of the distribution of model outputs indicates the variability and stability of yields within the region. It is thus possible to determine whether the spatial diversity within a region represents a source of resilience to changing climatic conditions or whether it is likely to amplify the impacts of climate change. The principle of this method is that regional models can be stochastic versions of standard site crop models. Stochasticity is provided by treating some of the model parameters as random variables rather than fixed values. Each field planted with a given crop species in a region is associated with a single set of conditions (genotype, management, soil), which can be translated into values of the related parameters in a crop model. Calibrating the model for all fields therefore provides a collection of values for these parameters, which can be used to construct empirical distributions.
Such a detailed model calibration is made possible by using scenes of the region provided by high-resolution satellites, giving access to field-by-field information at certain times over the whole region. The impacts of climate change are simulated at the regional scale by taking P values of model parameters at random from the estimated empirical distributions (representing spatial variability). Each model configuration is then fed with N years of climate data generated from a scenario of climate change (representing temporal variability). The result is a distribution of N × P yield values, which illustrates the yield response of the region to the change in climate, assuming management conditions remain the same. Four steps were undertaken to develop the method:
■ Automatic segmentation of the region into individual fields.
■ Selection of an appropriate site crop model for winter wheat.
■ Construction of the regional model, calibrated on present conditions.
■ Application of the regional model to climate change scenarios.
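The Monte Carlo core of this scheme can be sketched in a few lines of Python. The crop model, field parameter values and climate inputs below are invented stand-ins for illustration (a real application would use a calibrated site model such as STICS-Wheat and distributions fitted to the field-by-field calibration):

```python
import random
import statistics

def toy_yield(radiation_use_eff, max_lai, seasonal_radiation):
    """Hypothetical site model: yield (t/ha) from two crop parameters and one
    seasonal climate input. A stand-in for a full crop model."""
    return radiation_use_eff * min(max_lai, 6.0) * seasonal_radiation / 10.0

# Parameter values obtained by calibrating the site model field by field
# (invented numbers), used to fit per-parameter distributions.
rue_samples = [1.1, 1.2, 1.0, 1.3, 1.15]
lai_samples = [4.5, 5.0, 4.0, 5.5, 4.8]

random.seed(42)
P, N = 100, 30  # P sampled parameter sets, N generated climate years
climate_years = [random.gauss(8.0, 1.0) for _ in range(N)]  # seasonal radiation index

yields = []
for _ in range(P):  # spatial variability: draw a parameter set
    rue = random.gauss(statistics.mean(rue_samples), statistics.stdev(rue_samples))
    lai = random.gauss(statistics.mean(lai_samples), statistics.stdev(lai_samples))
    for radiation in climate_years:  # temporal variability: run each climate year
        yields.append(toy_yield(rue, lai, radiation))

regional_mean = statistics.mean(yields)
regional_spread = statistics.stdev(yields)
```

The N × P values in `yields` form the regional distribution; its spread mixes the sampled parameter (spatial) variability with the generated climate (temporal) variability, which is exactly the information the stochastic regional model is meant to expose.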
160 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
[Figure 8.6 flow chart: observed field data (AGDM, f fields) and remotely sensed field data (LAI, APAR; F fields, F >> f) feed a field-by-field model calibration with P core and input parameters. This yields average field parameters (P values), individual field parameters (P × f and P × F values) and P parameter distributions, which define the ‘average’, ‘distributed’ and ‘stochastic’ regional models. Under a regional scenario these produce, respectively, expected state variables and outputs (yield, phenology), time distributions with space variability, and time distributions, i.e. average, distributed and ‘stochastic’ effects, which inform adaptive strategies.]
Figure 8.6: Flow chart showing the three types of calibration undertaken for STICS-Wheat and their application (Source: Delécolle [37]).
The average and distributed model calibrations produce comparable results for the Paris Basin region under both current and future climatic conditions (Fig. 8.7). This means that, in general, results are similar if the model parameters are averaged or if the model output from the ten individual fields is averaged
Figure 8.7: Simulated distributions of yield for the observed climate (first row) and the generated baseline climate (second row), using the distributed calibration method (first column) and the average calibration method (second column) (Source: Delécolle [37]).
(implying some linearity in processes). The first solution of averaging model parameters is much less expensive in terms of run time. A different conclusion may have been reached if other model parameters had been selected for calibration. Data from ten surveyed fields were available to produce the distributed calibration. A different number of sample fields may also produce different results, as might a denser time profile of observed dry matter. The average and distributed calibrations do, however, produce different results for extreme yield values. For example, the probability of yields being greater than 8 t ha-1 under the observed climate is 5% for the distributed calibration and 0% for the average calibration. For the generated baseline climate, the respective probabilities are 9% and 5%. This must be considered together with the changes in yield distribution according to the climatic input (as expected from Semenov and Porter [39]). As no real change in distribution tails is induced by the scenario-generated climates, the distributed calibration allows more in-depth analysis of extreme values. The stochastic calibration method produces different values for model parameters than the average and distributed calibration methods for all climatic datasets (i.e. observed, generated baseline and climate change scenarios). This is because different crop state variables are used in the calibration procedures: satellite-estimated LAIs for the stochastic calibration and measured above-ground dry matter for the average and distributed calibrations. The number of time replications available for each variable also differs between the calibration methods. This method could also be used in applications other than climate change impact assessment, such as simulating the time evolution of agricultural landscapes. Its tractability nevertheless relies on the frequent availability of high-resolution satellite images during the crop-growing season. When the frequency of images is insufficient, they must be supplemented by more frequent, lower-resolution scenes, but transfers between different resolutions are still uncertain.
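Exceedance probabilities of the kind quoted above (e.g., the chance of yields above 8 t ha-1) fall straight out of a simulated yield distribution. A minimal illustration, with invented yield values:

```python
# Fraction of a simulated regional yield distribution above a threshold.
# The yield values (t/ha) are invented for illustration.
simulated_yields = [5.2, 6.8, 7.1, 8.4, 6.0, 7.7, 8.9, 5.5, 6.3, 7.0]

def exceedance(yields, threshold):
    """Empirical probability that yield exceeds the threshold."""
    return sum(1 for y in yields if y > threshold) / len(yields)

p_high = exceedance(simulated_yields, 8.0)  # 2 of 10 values exceed 8 t/ha
```

Comparing such tail probabilities across calibration methods is exactly what distinguishes the distributed from the average calibration in the discussion above.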
Downscaling
Downscaling involves the translation of coarse resolution model outputs to finer resolutions corresponding to real space and time. The business of downscaling blossomed with the use of General Circulation Model (GCM) scenarios to chart the impacts of climate change. Thus it is usually with respect to GCM output that downscaling is applied; however, the term can apply equally to other attempts to re-express large-scale information in a form more relevant at the small (temporal and/or spatial) scale. The first generation of GCMs had resolutions on the order of 5 degrees latitude and longitude, clearly too broad-scale to instil much confidence in local or even regional changes. While newer models have higher resolutions, approaching 2 degrees or 200 km, these scales are still orders of magnitude larger than the typical impact unit and issues of downscaling are still relevant. This section provides a brief review of the main methods.

Overview of methods
Downscaling techniques can be classified into three types according to complexity and computational demands (other typologies are possible – see for example Wilby and Wigley [40], Xu [41]): simple downscaling, statistical downscaling (and sub-types) and dynamical downscaling. These categories are summarised in Table 8.2.

Simple downscaling
The crudest approach to downscaling is to apply the large-scale GCM climate change outputs to observed climate at the scale of interest. In a typical approach, GCM data for some period in the future are first expressed as “change fields” relative to the GCM climatology over a period in the recent past (e.g., 1961–1990). This removes many biases in the GCM climatology from the scenario. The GCM change fields are then used to adjust an observed “baseline” climate dataset, representative of the same climatological period (e.g., 1961–1990), usually in a simple additive manner. At a single site, the baseline is
likely to be a meteorological record. Where spatial scenarios are required, GCM change fields are usually added to a gridded data set of surface climate variables. GCM data are either used as is, or can be interpolated to an appropriate grid resolution (or the location of interest) and then applied. To date, simple downscaling has primarily been applied using changes in mean monthly or seasonal climate, but changes in the variance of monthly climate can also be incorporated (e.g., Hulme and Jenkins [42]). Similarly, with more daily data from GCMs becoming available, changes in the variance of daily climate can also be incorporated. Simple downscaling has several advantages, the main one being that it is quick and easy, enabling rapid comparison of data from more than one GCM simulation – either GCMs from different modelling centres and/or ensemble members from the same modelling centre [43]. There are several potential disadvantages to simple downscaling. Data that are readily available are typically only at monthly resolution, and therefore provide no information about changes in the structure of daily climate, especially rainfall; even if daily data are available from GCM simulations, their probability distribution functions frequently bear little relation to the real world, raising the question of whether the method is appropriate at daily resolutions. The approach clearly cannot capture sub-GCM grid-scale changes in meteorology. This is particularly so with rainfall, where sub-grid precipitation and cloud processes are a major source of error and/or uncertainty.

Dynamical downscaling
Dynamical downscaling attempts to overcome some of the spatial limitations of simple downscaling by explicitly modelling the climate at higher resolution than standard GCMs. This permits the inclusion of realistic topography and land-sea configurations and, in some cases, improved dynamical processes.
Currently, these approaches have a maximum spatial resolution of 20–50 km, so there remains a scale mismatch where local-scale climate information is required. These scale mismatches will reduce with time, as the spatial scale of regional models continues to improve with increases in computing power. There are two main approaches to dynamical downscaling: GCM time-slice or variable resolution experiments and nested regional modelling.

Time slice/variable resolution
This approach makes use of a high resolution GCM (>T100 or 1–0.5° lat/lon) or a variable resolution GCM (one that has a “standard” resolution over most of the globe, but fine resolution over the region of interest) to provide high resolution output at a future “time slice” [44, 45, 46]. These simulations are typically forced with coarse resolution GCM SST and sea ice fields, as well as GHG forcings.
Results to date are equivocal: patterns of regional changes can be more dependent on the AGCM than on the SST forcings used. The approach does produce improvements in the large-scale meteorology, but in some cases the biases in coarse resolution GCMs are overcorrected, producing biases of opposite sign. This is because (in part at least) the model physics and parameterisations are scale dependent (and many coarse resolution GCMs are hydrostatic, so do not work as well at higher resolutions where steep topography requires non-hydrostatic physics). Reliance on a single model is also potentially problematic, as results would then be highly model-dependent (as with standard GCMs).

Regional modelling
In these approaches, a higher resolution regional climate model (RCM), usually with scale-appropriate physics, is forced over a limited domain (e.g., Europe – [7, 47, 48, 49]) by the surface and lateral boundary conditions derived from a GCM. The choice of the regional domain is important. If the domain size is too small, “edge effects” from the adjacent boundary conditions can affect the region of interest. However, if too large a domain is used, the regional climate can become decoupled and produce meteorology that is independent of the global forcings. Domain boundaries should also be defined so that they do not coincide with regions of steep or complex topography. Recent developments include the coupling of RCMs to other models of climate system components, most importantly land, biosphere and/or hydrology models, and also multiple nesting (RCMs within RCMs). There is good evidence that regional models provide added value, particularly for precipitation, which remains a key driver of many impact systems. Nonetheless, RCMs remain reliant on good quality GCM forcing fields.
To date RCM experiments have been very goal-specific, and the ideal of multiple RCMs forced by multiple GCMs (ensembles) to obtain information about the spread of predictions (uncertainty) has not yet been achieved. Despite the improved spatial resolution of RCMs, they still fail to deliver site-specific information. In such cases, some form of additional downscaling is required. This can be achieved by simple downscaling (described above), or using statistical methods (see below). In both cases the coarser scale information derived from an RCM is likely to be superior to that derived from a GCM. A key advantage of RCMs is their ability to simulate multi-site (albeit on a model grid) climate where the spatial covariance of the climate is preserved. This remains a methodologically difficult task for the statistical methods described below.
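For concreteness, the additive “change field” adjustment used in simple downscaling (and equally applicable to RCM output) can be sketched as follows; all numbers are invented:

```python
# Monthly-mean "change field": GCM future climatology minus GCM baseline
# climatology, added to an observed station baseline (the delta method).
gcm_baseline = [1.8, 2.1, 1.5]   # GCM January means, baseline period (degC)
gcm_future = [3.9, 4.4, 3.7]     # GCM January means, future period (degC)

change_field = (sum(gcm_future) / len(gcm_future)
                - sum(gcm_baseline) / len(gcm_baseline))

observed_baseline = [0.5, 1.2, -0.3, 2.0]    # observed station Januaries (degC)
scenario = [t + change_field for t in observed_baseline]  # adjusted series
```

Subtracting the GCM's own baseline climatology removes much of its bias from the scenario; for precipitation a multiplicative ratio is often used instead of an additive shift.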
Statistical downscaling
Statistical downscaling techniques aim to obtain “added value” over and above the grid-scale surface climate information provided by GCMs. The underlying rationale is that although GCMs reproduce the larger-scale atmospheric circulation reasonably well, grid point realisations of surface climate are less well simulated, and are fundamentally limited by the resolution of GCMs. For example, GCMs are unable to resolve local topographic controls on climate, small-scale land-sea interactions, mesoscale land-surface forcings and convectional precipitation processes. Three main types of statistical downscaling are regression, weather typing and stochastic methods. Although convenient, this tripartite division masks the fact that many techniques are combinations of two or more of these end members.

Regression and weather typing
At the heart of regression and weather typing techniques lies the belief that local climate processes, which are not resolved at the GCM grid scale, are nonetheless dependent on larger scale atmospheric and surface climate output from GCMs. Regression methods make use of linear (e.g., multiple regression) or non-linear (e.g., artificial neural networks) statistical relationships between large-scale GCM variables and/or derived fields and the local climate data – either station data or high resolution gridded products [40, 41, 50, 51, 52, 53, 54, 55, 56]. The models are usually trained/calibrated on observed or re-analysis data, using as predictors some combination of temperature, upper and lower level pressure fields, wind and atmospheric moisture content. They are then run in a similar manner to RCMs, in that they are forced with GCM predictors for the future and changes in the surface climate variables of interest are determined. Weather typing (or analogue downscaling) relates particular modes of mesoscale (synoptic) weather features to the observed surface climate.
These modes can either be defined empirically (e.g., Conway and Jones [57]) or statistically, for example through principal component analysis [58]. The defined weather types are then related to the observed surface climate, usually through some linear or non-linear regression process similar to those described above.

Weather generators
Weather generators (WGs) are statistical models of observed sequences of weather variables (see Wilks and Wilby [59] for a recent review). Most of these simulate daily weather phenomena, usually with “secondary” climate variables predicted as a function of precipitation occurrence and amount. The weather generator model is calibrated against observed station data, for either a single site or multiple sites. In the latter case, the model must be extended to incorporate the spatial covariance structure of precipitation [60]. Once the WG has been conditioned, long runs of synthetic climate with the same statistical structure as the observed climate can be generated.
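A minimal illustration of such a generator (not any specific published scheme): rainfall occurrence as a first-order two-state Markov chain with exponentially distributed wet-day amounts. All parameter values are invented; a real WG would calibrate them against a station record:

```python
import random

# The WG "parameters" that a climate change application would perturb
# as some function of GCM output (invented values for illustration).
P_WET_GIVEN_DRY = 0.25
P_WET_GIVEN_WET = 0.60
MEAN_WET_AMOUNT = 5.0  # mm per wet day

def generate_rainfall(days, seed=0):
    """Generate a daily rainfall series (mm) from the Markov chain."""
    rng = random.Random(seed)
    series, wet = [], False
    for _ in range(days):
        p_wet = P_WET_GIVEN_WET if wet else P_WET_GIVEN_DRY
        wet = rng.random() < p_wet        # occurrence: conditional on yesterday
        series.append(rng.expovariate(1.0 / MEAN_WET_AMOUNT) if wet else 0.0)
    return series

rain = generate_rainfall(365)
wet_days = sum(1 for r in rain if r > 0)
```

Perturbing the transition probabilities or the mean wet-day amount in line with GCM output is what running the generator in a "climate change" mode amounts to.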
To use a WG in a “climate change” mode, its parameters must be perturbed in an appropriate manner, as some function of the GCM output; these perturbations may be conditioned on some large-scale atmospheric state, or on the grid point output of the model (e.g., if GCM rainday frequency changes, then alter the probability of rainday occurrence in the WG proportionally). Clearly, the trick is to be able to perturb the WG parameters in a meaningful manner. Many WG models underestimate the variance of daily weather variables, especially rainfall. This needs to be corrected, for example using inflation [56] or by adding white noise, possibly conditioned on synoptic state.

Some problems in statistical downscaling
A rarely addressed shortcoming of statistical downscaling is that the downscaling model is calibrated on one dataset but run in predictive mode using different (GCM) output. Most approaches calibrate the downscaling model against re-analysis data – this is necessary to relate “observed” large scale parameters to observed station data – but then use GCM output to simulate climate change. The GCM and re-analysis typically have different resolutions and different climatologies (e.g., low pressure zones occurring at different latitudes, or with differing intensity). Thus the conditioned model may not operate appropriately when forced with GCM data. A further problem is that, in some instances, different variables mean different things in different climate models: using (nominally) the same predictor variables from re-analysis and GCMs may not be strictly valid and may make the outcomes unstable when used in predictive mode. The choice of predictor variables is important. In the case of rainfall, some measure of atmospheric humidity is critical (Wilby, personal communication), but these measures are often not very well simulated in re-analysis, and very few evaluations of this variable in GCMs have been undertaken.
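A toy regression-downscaling example makes the calibrate-then-apply structure (and the mismatch just described) concrete. Both datasets are invented; a real study would use re-analysis predictor fields and an observed station series:

```python
# Calibrate a linear transfer function between a large-scale predictor
# (e.g., a re-analysis circulation anomaly) and local temperature, then
# force it with the (nominally) same predictor taken from a GCM scenario.
predictor = [-1.2, -0.5, 0.1, 0.8, 1.4]    # "re-analysis" predictor anomalies
local_obs = [8.0, 9.1, 10.2, 11.0, 12.3]   # observed station temperature (degC)

n = len(predictor)
mean_x = sum(predictor) / n
mean_y = sum(local_obs) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(predictor, local_obs))
         / sum((x - mean_x) ** 2 for x in predictor))
intercept = mean_y - slope * mean_x

gcm_predictor = [0.9, 1.6, 2.1]            # same field from a GCM scenario run
downscaled = [intercept + slope * x for x in gcm_predictor]
```

If the GCM's version of the predictor has a climatology different from the re-analysis used for calibration, the fitted slope and intercept are applied outside the conditions under which they were estimated, which is precisely the shortcoming noted above.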
Statistical downscaling will only be able to predict climate change as a function of the GCM predictor variables. If other processes lead to changes in local climate, then these changes will not be reflected in the downscaled climate. For example, Schubert [61] showed that changes in temperature extremes over Australia were forced by radiative properties of the atmosphere and not circulation changes, and could therefore not be predicted by his statistical downscaling methodology.

Advantages
Statistical methods remain the only way to generate site-specific climate data, and in many instances have been demonstrated to provide “added value” over simple downscaling. They are computationally cheap relative to dynamical methods, and are eminently suitable for multiple simulations (using multiple integrations with the same GCM, or multiple GCM runs, or both). Finally, they are appropriate for a “bottom-up” approach to impact assessment, where the local-scale climate variables important for an impact study can be identified at the outset, and included in the downscaling model.
Table 8.2: Approaches to downscaling

Simple [42, 62, 63, 64, 65]
Apply the GCM grid box change fields to the local climate time series.
✔ Easy to do for numerous models
✔ Provides first-order indications
✗ Fails to capture local effects
✗ Difficult to provide information on extremes

Statistical [51, 61, 66, 67, 68]
Relate large-scale predictors in the GCM to parameters of interest in the local time series.
✔ Gives site-specific information
✔ Potential for multiple GCM downscaling
✗ Non-trivial model development
✗ Requires accurate local data
✗ Mismatches between current and future predictors

Circulation indexing [57, 69, 70, 71]
Reconstruct circulation indices (or weather types) and relate local conditions to changes in the frequency of indices.
✔ Has meteorological basis
✗ Identification of indices can be difficult
✗ Problems in areas where convection is prominent

Weather generator [60, 72, 73, 74]
Force a numerical weather generator with changes derived from GCMs.
✔ Weather generators are relatively common
✔ Flexibility in constructing time series – can have multiple series
✗ Difficult to scale up to regional changes

Dynamical (regional model; high-resolution GCM) [44, 45, 49, 75, 76, 77, 78, 79, 80]
Use a high resolution model with boundary conditions forced from the GCM.
✔ Matches resolution of weather forecast models
✔ Provides large range of weather parameters
✔ Preserves spatial covariance of weather
✗ Requires advanced computing
✗ Relatively few scenarios available
✗ Unable to provide site-specific data
Issues in Scaling Methods
Methodologies for up- and down-scaling vary considerably in their requirements for data and technical expertise, their potential for validation, and their contribution to the quality of the overall research effort. Issues of stakeholder participation (and representation) in integrated assessment are also relevant.

Input data
A reduced-form modelling approach may not be appropriate if a study requires very detailed information that is only provided by complex site-based models. In such cases, the fundamental problem is how to relate the detailed model to geographic regions.

Technical expertise
The expertise already present at an institute will largely determine the amount of time required to develop a methodology. Using datasets from earlier projects or from other accessible groups may reduce the financial and time costs.
Validation
Models may be validated in many different ways, reflecting the different scales involved in crop modelling. At the process level, model results may be compared with the results of controlled experiments in which only a few external conditions, such as soil water content, have been manipulated [81]. Crop simulation models differ in their description of physiological processes, but are in general able to describe reasonably well the response of crop production (especially yield) to changes in temperature and precipitation. At the site level, models may be compared with observed yields if the models include management data (e.g., Landau et al. [82]). Comparison of observed aggregated regional and national yields with simulated yields adds extra uncertainties to the validation process. The scaling-up method may itself be a source of error. This includes uncertainties in the model inputs, regarding soils and climate data but also regarding management data. Often data on average or “normal” management have to be used for simulating regional yields. Comparisons of simulated county and national yields with observed ones have shown that the model can explain 20 to 30% of the interannual variability in observed yields in Denmark [13]. Similar results have been found for the application of the Sirius model in the Brandenburg region of Germany (Jamieson, personal communication, 1999) and in the Canterbury region of New Zealand (Olesen, personal communication, 1999). In contrast, the crop models have been shown to explain a much larger part of the variation in yields in Finland [19]. These results suggest that at the margin of a crop’s growing area (such as Finland, and probably some Mediterranean countries, for wheat) there will be a good correspondence between simulated and observed yields, because it is the main climatic factors that constrain yields.
In the core of the wheat growing area in Europe, wheat yields are generally determined not directly by climate but by management, some of which may itself be climate related but is not currently described in our models. In the validation of scaling-up methodologies, it is important to consider the variability of simulated and measured yields in both time and space, and the possible interaction between time and space. There are a number of options available to perform this validation.

Uncertainty and risk
Given the large range of uncertainties in climate change [83, 84, 85, 86, 87, 88], what should have priority? Up- and down-scaling introduce additional uncertainties into climate change impact assessment, and into integrated assessment. Is the additional effort justified? Do the benefits outweigh the required effort and uncertainty? There appears to
be little guidance on these strategic questions. In most cases, the sophistication of multi-scale methodologies is driven by disciplinary research teams rather than by a priori consideration of what users need. Handling extremes and extreme events is perhaps the most difficult, and yet most important, issue. How can useful information, especially on changes in the joint probabilities of phenomena – e.g., dry spells, consecutive hot summers, or dry spells combined with increased wind – be extracted from global climate models? The recent ECLAT workshop [89] concluded that downscaling of extremes had not been addressed to any great extent, but probably represented an order of magnitude increase in methodological difficulty, especially because the large scale forcing is less well resolved at the GCM scale. From a ‘bottom up’ perspective, sensitivity to extreme events is often poorly captured in impact models. In such cases, one might wish to map ‘impacts scenarios’ before investing scarce resources in defining extreme event scenarios that are not likely to be robust.

Stakeholder participation
Really useful modelling requires stakeholder participation. Given the common constraints of time, how can stakeholders understand the complex issues of scale? Some will be overly convinced that high resolution maps equate with robust predictions. Others will look at the list of caveats and conclude that the uncertainties overwhelm the insight. Modellers themselves tend to anchor their expertise in their own models – but have difficulty in translating their insight for new users. We suggest that an agent-based approach may provide stakeholders with easier access to model participation and interpretation.

Scaling agents
The translation of climate change research from impacts (“if climate changes, what are the impacts?”) to adaptation (“how can we cope with climate change?”) requires novel methodologies and techniques (see Downing et al. [5]).
One promising approach is the use of software agents to represent decision makers (variously called actors or stakeholders). The idea is to capture the cognitive processes, decision algorithms and layers of decision making in software agents. The paradigm has its roots in computer science, where, for example, software robots search the web for particular kinds of information. A community of decision theorists, sociologists and psychologists has extended the approach, in what may be termed agent-based social simulation (see AgentLink as an example of efforts underway: www.AgentLink.org). An important feature of such models is that they capture representations of changing social relations. These relations encompass the social embedding of an individual and the complexity of an individual’s exchange with the
environment. Such changing relations include institutional changes in exchange, changing organisational structures, the development of new mental models by agents and how these affect policy assessments. A key issue is the scale of the agents. Are they individuals? Aggregations of individuals? Communities, for example, defined by geography or class? Corporate actors with recognised structures? Or do we need models of agent cognition – the psychology of decision making (in which the agents might be the ego and id, as one example)?
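A deliberately simple sketch of the flavour of such models, with all behavioural rules invented: farmer agents decide whether to adopt an adaptation measure based on their own experienced losses and on the share of adopters among their peers (the social embedding):

```python
# Toy agent-based social simulation: adoption of an adaptation measure
# diffuses through a small community. All rules and numbers are invented.
class Farmer:
    def __init__(self, loss, threshold):
        self.loss = loss            # experienced climate-related loss (0..1)
        self.threshold = threshold  # social-influence threshold (0..1)
        self.adopted = False

    def decide(self, neighbour_share):
        # Adopt if own losses are high, or enough neighbours have adopted.
        if self.loss > 0.5 or neighbour_share >= self.threshold:
            self.adopted = True

farmers = [Farmer(loss=l, threshold=t) for l, t in
           [(0.7, 0.9), (0.2, 0.3), (0.1, 0.5), (0.4, 0.2), (0.3, 0.8)]]

for step in range(5):  # repeated interaction lets adoption diffuse
    share = sum(f.adopted for f in farmers) / len(farmers)
    for f in farmers:
        f.decide(share)

adopters = sum(f.adopted for f in farmers)
```

Even this caricature shows diffusion through social relations: adoption spreads step by step as the adopter share crosses successive thresholds. Scale enters through what an agent is taken to represent (an individual, a community, or a corporate actor).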
Conclusions
No easy conclusions can be drawn from this comparison of up- and down-scaling methodologies. Data, expertise, time and financial resources are limited, and may drive the choice of method more strongly than the technical merits of the different schemes. Based on the model testing in the CLIVARA project, however, it is possible to make some general statements. The simplest, site-driven approach may be justified in some homogeneous environments (e.g., Denmark) for some parameters (mean and variability of yield). For spatial grids, a subset of the complete grid is adequate for many purposes: a baseline can be estimated on the whole grid and a series of sensitivity tests run on a stratified sample. Conversely, in complex terrain, a regional approach can expose key non-linearities that may not be apparent from site-based analyses. Including remote sensing techniques in impact studies allows full representation of landscape dynamics while not necessarily making the analysis overly complex. Similarly, sophisticated methods of downscaling climate change scenarios may be warranted where local variability in the terrain or in impacts is of great concern. However, where the coarse grain scenario varies significantly (e.g., different GCMs report significant increases or decreases in precipitation), simple downscaling methods may be sufficient to capture a sense of the risks.
References
1. Butterfield, R. E., M. Bindi, R. J. Brookes, T. R. Carter, R. Delécolle, T. E. Downing, Z. Harnos, P. A. Harrison, A. Iglesias, J. E. Olesen, J. L. Orr, M. A. Semenov, and J. Wolf, 2000. Review and comparison of scaling-up methods. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 393–414.
2. Courtois, P-J., 1985. “On time and space decomposition of complex structures.” Communications of the ACM, 28: 590–603.
3. Downing, T. E., R. E. Butterfield, P. A. Harrison, and J. L. Orr, 1998. Toward robust methodologies for spatial assessment of agroecological potential: From site scale to integrated assessments. In: A. Ghazi, G. Maracchi and D. Peter (eds.). Proceedings of the European School of Climatology and Natural Hazard Course on Climate Change Impacts on Agriculture and Forests. Publication EUR 18175 EN, Office for Official Publications of the European Communities, Luxembourg: 281–316.
4. Harrison, P. A., 1999. Climate Change and Wheat Production: Spatial Modelling of Impacts in Europe. D.Phil Thesis, Environmental Change Institute, University of Oxford: 287 pp.
5. Downing, T. E., S. Moss, and C. Pahl-Wostl, 2001. Understanding climate policy using participatory agent-based social simulation. In: S. Moss and P. Davidson (eds.). Multi-Agent Based Simulation. Berlin: Springer Verlag: 198–213.
6. Easterling, W., A. Weiss, C. Hays and L. Mearns, 1998. “Optimum spatial scales of climate information for simulating the effects of climate change on agrosystem productivity: The case of the U.S. Great Plains.” Agricultural and Forest Meteorology, 90: 51–63.
7. Mearns, L. O., I. Bogardi, F. Giorgi, I. Matyasovszky, and M. Palecki, 1999. “Comparison of climate change scenarios generated from regional climate model experiments and statistical downscaling.” Journal of Geophysical Research, 104: 6603–6621.
8. Mearns, L. O., W. Easterling and C. Hays, 2001. “Comparison of agricultural impacts of climate change calculated from high and low resolution climate change model scenarios, Part I: The uncertainty due to spatial scale.” Climatic Change, 51: 131–172.
9. Downing, T. E., P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.), 2000. Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Research Report No. 21. Environmental Change Institute, University of Oxford.
10. Brooks, R. J., and A. M. Tobias, 1996. “Choosing the best model: Level of detail, complexity and model performance.” Mathematical and Computer Modelling, 24: 1–14.
11. Polsky, C., and W. E. Easterling, 2001. “Adaptation to climate variability and change in the US Great Plains: A multi-scale analysis of Ricardian climate sensitivities.” Agriculture, Ecosystems and Environment, 85: 133–144.
12. Brooks, R. J., and M. A. Semenov, 2000. Modelling climate change impacts on wheat in Central England. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 157–178.
13. Olesen, J. E., T. Jensen and P. K. Bøcher, 2000. Modelling climate change impacts on wheat and potato in Denmark. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 313–332.
14. Davies, A., T. Jenkins, A. Pike, J. Shao, I. Carson, C. J. Pollock and M. L. Parry, 1996. Modelling the predicted geographic and economic response of UK cropping systems to climate change scenarios: the case of potatoes. In: R. J. F. Williams, R. Harrison, T. J. Hocking, H. G. Smith and T. H. Thomas (eds.). Implications of “Global Environmental Change” for Crops in Europe. Churchill College, Cambridge, The Association of Applied Biologists: 63–70.
15. Harnos, Zs., A. Bussay and N. Harnos, 2000. Modelling climate change impacts on wheat and potato in Hungary. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 349–365.
16. Calixte, J. P., and J. W. Jones, 1993. AEGIS Developers Guide. IFAS, Florida.
17. FAO, 1986. Early Agrometeorological Crop Yield Assessment. Plant Production and Protection Paper 73, Food and Agriculture Organisation of the United Nations, Rome: 150 pp.
18. Semenov, M. A., and R. J. Brooks, 1999. “Spatial interpolation of the LARS-WG stochastic weather generator in Great Britain.” Climate Research, 11: 137–148.
19. Barrow, E. M., M. Hulme, M. A. Semenov and R. J. Brooks, 2000. Climate change scenarios. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 11–30.
20. Carter, T. R., R. A. Saarikko and S. K. H. Joukainen, 2000. Modelling climate change impacts on wheat and potato in Finland. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 289–312.
21. Rounsevell, M. D. A., P. J. Loveland, T. R. Mayr, A. C. Armstrong, D. de la Rosa, J-P. Legros, C. Simota and H. Sobczuk, 1996. “ACCESS: a spatially-distributed, soil water and crop development model for climate change research.” Aspects of Applied Biology, 45: 85–91.
Harrison, P. A., R. E. Butterfield and J. L. Orr, 2000. Modelling climate change impacts on wheat, potato, and grapevine in Europe. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 367–390.
SCALING IN INTEGRATED ASSESSMENT 173
22. Brisson, N., D. King, B. Nicoullaud, F. Ruget, D. Ripoche and R. Darthout, 1992. “A crop model for land suitability evaluation: a case study of the maize crop in France.” European Journal of Agronomy, 1: 163–175. 23. Brignall, A. P., and M. D. A. Rounsevell, 1995. “Land evaluation modelling to assess the effects of climate change on winter wheat potential in England and Wales.” Journal of Agricultural Science, Cambridge, 124: 159–172. 24. Wolf, J., 2000a. Modelling climate change impacts on potato in central England. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 217–238. 25. Wolf, J., 2000b. Modelling climate change impacts on soya bean in south-west Spain. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 239–263. 26. Van Lanen, H. A. J., C. A. van Diepen, G. J. Reinds, G. H. J. de Koning, J. D. Bulens and A. K. Bregt, 1992. “Physical land evaluation methods and GIS to explore the crop growth potential and its effects within the European Communities.” Agricultural Systems, 39: 307–328. 27. Butterfield, R. E., P. A. Harrison, J. L. Orr, M. J. Gawith and K. G. Lonsdale, 2000. Modelling climate change impacts on wheat, potato and grapevine in Great Britain. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 265–288. 28. Brklacich, M., P. Curran and D. Brunt, 1996. 
“The application of agricultural land rating and crop models to CO2 and climate change issues in northern regions: the Mackenzie Basin case study.” Agricultural and Food Science in Finland, 5: 351–365. 29. Papajorgji, P., J. W. Jones, J. P. Calixte and G. Hoogenboom, 1993. AEGIS-2: a generic geographic decision support system for policy making in agriculture. Proceedings of the Conference on Integrated Resource Management and Landscape Modifications for Environmental Protection, American Society of Agricultural Engineering, St. Joseph, Michigan, USA. 30. Benedetti, R., P. Rossini and R. Taddei, 1994. “Vegetation classification in the middle Mediterranean area by satellite data.” International Journal of Remote Sensing, 15: 583–596. 31. Maselli, F., L. Petkov, G. Maracchi and C. Conese, 1996. “Ecoclimatic classification of Tuscany through NOAA-AVHRR data.” International Journal of Remote Sensing, 17: 2369–2384.
174 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
32. Bindi, M., L. Fibbi, F. Maselli and F. Migletta, 2000. Modelling climate change impacts on grapevine In Tuscany. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 119–216. 33. Buck, R., T. E. Downing, D. Favis Mortlock and R. E. Butterfield, 2000. Parsimonious Statistical Emulation of a Site Crop Simulation Model: An Application for Climate Change Impact Assessment. Climate research. 34. Iglesias, A., and D. Pereira, 2000. Modelling climate change impacts on wheat and soya bean in Spain. In: T. E. Downing, P. A. Harrison, R. E. Butterfield and K. G. Lonsdale (eds.). Climate Change, Climatic Variability and Agriculture in Europe: An Integrated Assessment. Environmental Change Institute, University of Oxford: 333–348. 35. Van de Geijn, S.C., H. C. M. Schapendonk and P. Dijkstra, 1998. Experimental research facilities for the assessment of climate change impacts on managed and natural ecosystems. In: D. Peter, G. Maracchi and A. Ghazi (eds.). Proceedings of the European School of Climatology and Natural Hazard Course on Climate Change Impacts on Agriculture and Forests. Publication EUR 18175 EN, Office for Official Publications of the European Communities, Luxembourg: 117–136. 36. Racsko, P., L. Szeidl and M. Semenov, 1991. A serial approach to local stochastic weather models. Ecological Modelling, 57: 27–41. 37. Delécolle, R., 2000. Modelling climate change impacts on winter wheat in the Paris Basin. In: T. E. Downing, P. A. Harrison, R. E. Butterfield, and K. G. Lonsdale. (eds.). Climate Change, Climatic Variability and Agriculture in Europe. Oxford: ECI. 38. Bouman, B. A. M., 1994. “A framework to deal with uncertainty in soil and management parameters in crop yield simulation: A case study for rice.” Agricultural Systems, 46: 1–17. 39. Semenov, M. A., and J. R. Porter, 1995. 
“Non-linearity in climate change impact assessments.” Journal of Biogeography, 22: 597–600. 40. Wilby, R. L., and T. M. L. Wigley, 1997. “Downscaling general circulation model output: a review of methods and limitations.” Progress in Physical Geography, 21: 530–548. 41. Xu, C. Y., 1999. “From GCMs to river flow: a review of downscaling methods and hydrologic modelling approaches.” Progress in Physical Geography, 23: 229–249. 42. Hulme, M., and G. J. Jenkins, 1998. Climate Change Scenarios for the UK: Scientific Report. Norwich: Climatic Research Unit. 43. New, M. G., and M. Hulme, 2000. “Representing uncertainty in climate change scenarios: a Monte-Carlo approach.” Integrated Assessment, 1:203–213.
SCALING IN INTEGRATED ASSESSMENT 175
44. Timbal, B., J. F. Mahfouf, J. F. Royer, U. Cubasch and J. M. Murphy, 1997. “Comparison between doubled CO2 time-slice and coupled experiments.” J. Climate, 10: 1463–1469. 45. Deque, M., P. Marquet and R. G. Jones, 1998. “Simulation of climate change over Europe using a global variable resolution general circulation model.” Climate Dynamics, 14: 173–189. 46. Jones, R. N., 1999. The Response of European Climate to Increased Greenhouse Gases Simulated by Standard and High Resolution GCMs. Bracknell, UK: Hadley Centre for Climate Prediction and Research. 47. Giorgi, F., and M. R. Marinucci, 1996. “Improvements in the simulation of surface climatology over the european region with a nested modeling system.” Geophysical Research Letters, 23: 273–276. 48. Christensen, J. H., B. Machenhauer, R. G. Jones, C. Schar, P. M. Ruti, M. Castro and G. Visconti, 1997. “Validation of present-day regional climate simulations over Europe: LAM simulations with observed boundary conditions.” Climate Dynamics, 13: 489–506. 49. Jones, R. G., J. M. Murphy, M. Noguer and A. B. Keen, 1997. “Simulation of climate change over Europe using a nested regionalclimate model II. Comparison of driving and regional model responses to a doubling of carbon dioxide.” Quarterly Journal of the Royal Meteorological Society, 123: 265–292. 50. Von Storch, H., E. Zorita and U. Cubasch, 1993. “Downscaling of global climate-change estimates to regional scales – an application to Iberian rainfall in wintertime.” Journal of Climate, 6: 1161–1171. 51. Corte-Real, J., X. Zhang and X. Wang, 1995. “Downscaling GCM information to regional scale: a non-parametric multivariate regression approach.” Climate Dynamics, 11: 413–424. 52. Hewitson, B. C., and R. G. Crane, 1996. “Climate downscaling: techniques and application.” Climate Research, 7: 85–95. 53. Kidson, J. W., and C. S. Thompson, 1998. 
“A comparison of statistical and model-based downscaling techniques for estimating local climate variations.” Journal of Climate, 11: 735–753. 54. Wilby, R. L., T. M. L. Wigley, D. Conway, P. D. Jones, B. C. Hewitson, J. Main and D. S. Wilks, 1998. “Statistical downscaling of general circulation model output: A comparison of methods.” Water Resources Research, 34: 2995–3008. 55. Huth, R., 1999. “Statistical downscaling in central Europe: evaluation of methods and potential predictors.” Climate Research, 13: 91–101. 56. Von Storch, H., 1999. “On the use of ‘inflation’ in statistical downscaling.” Journal of Climate, 12: 3505–3506. 57. Conway, D., and P. D. Jones, 1998. “The use of weather types and air flow indices for GCM downscaling.” Journal of Hydrology, 213: 348–361.
176 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
58. Zorita, E., and H. von Storch, 1999. “The analog method as a simple statistical downscaling technique: Comparison with more complicated methods.” Journal of Climate, 12: 2474–2489. 59. Wilks, D. S., and R. L. Wilby, 1999. “The weather generation game: a review of stochastic weather models.” Progress in Physical Geography, 23: 329–357. 60. Wilks, D. S., 1999. “Multisite downscaling of daily precipitation with a stochastic weather generator.” Climatic Research, 11: 125–136. 61. Schubert, S., 1998. “Downscaling local extreme temperature changes in south-eastern Australia from the CSIRO Mark2 GCM.” International Journal of Climatology, 18: 1419–1438. 62. Mati, B. M., 2000. “The influence of climate change on maize production in the semi-humid-semi-arid areas of Kenya.” Journal of Arid Environments, 46: 333–344. 63. Naden, P. S., and C. D. Watts, 2001. “Estimating climate-induced change in soil moisture at the landscape scale: An application to five areas of ecological interest in the UK.” Climatic Change, 49: 411–440. 64. Buma, J., and M. Dehn, 2000. “Impact of climate change on a landslide in South East France, simulated using different GCM scenarios and downscaling methods for local precipitation.” Climatic Research, 15: 69–81. 65. Huth, R., J. Kysely and L. Pokorna, 2000. “A GCM simulation of heat waves, dry spells, and their relationships to circulation.” Climatic Change, 46: 29–60. 66. Riedo, M., D. Gyalistras, A. Fischlin and J. Fuhrer, 1999. “Using an ecosystem model linked to GCM-derived local weather scenarios to analyse effects of climate change and elevated CO2 on dry matter production and partitioning, and water use in temperate managed grasslands.” Global Change Biology, 5: 213–223. 67. Wilby, R. L., L. E. Hay and G. H. Leavesley, 1999. “A comparison of downscaled and raw GCM output: implications for climate change scenarios in the San Juan River basin, Colorado.” Journal of Hydrology, 225: 67–91. 68. Hewitson, B. C., and R. G. Crane, 1998. 
“Regional scale daily precipitation from downscaling of data from the GENESIS and UKMO GCMs. 14th Conference on Probability and Statistics in the Atmospheric Sciences.” Phoenix, Arizona: American Meteorological Society: J48–J50. 69. Hay, L. E., and G. J. McCabe Jr., 1992. “Use of weather types to dis-aggregate general circulation model predictions.” Journal of Geophysical Research, 97D: 2781–2790. 70. Schubert, S., and A. Henderson-Sellers, 1997. “A statistical model to downscale local daily temperature extremes from synoptic-scale atmospheric circulation patterns in the Australian region.” Climate Dynamics, 13: 223–234.
SCALING IN INTEGRATED ASSESSMENT 177
71. Goodess, C. M., and J. P. Palutikof, 1998. “Development of daily rain-fall scenarios for southeast Spain using a circulation-type approach to downscaling.” International Journal of Climatology, 18: 1051–1083. 72. Charles, S. P., B. C. Bates and J. P. Hughes, 1999. “A spatiotemporal model for downscaling precipitation occurrence and amounts.” Journal of Geophysical Research – Atmosphere, 104: 31657–31669. 73. Hayhoe, H. N., 2000. “Improvements of stochastic weather data generators for diverse climates.” Climate Research, 14: 75–87. 74. Semenov, M. A., and R. J. Brooks, 1999. “Spatial interpolation of the LARS-WG stochastic weather generator in Great Britain.” Climate Research, 11: 137–148. 75. Bergstrom, S., B. Carlsson, M. Gardelin, G. Lindstrom, A. Pettersson and M. Rummukainen, 2001. “Climate change impacts on runoff in Sweden – assessments by global climate models, dynamical down-scaling and hydrological modelling.” Climatic Research, 16: 101–112. 76. Whetton, P. H., J. J. Katzfey, K. J. Hennessy, X. Wu, J. L. McGregor and K. Nguyen, 2001. “Developing scenarios of climate change for Southeastern Australia: an example using regional climate model output.” Climatic Research, 16: 181–201. 77. Mearns, L. O., F. Giorgi, L. McDaniel and C. Shields, 1995. “Analysis of variability and diurnal range of daily temperature in a nested regional climate model – comparison with observations and doubled CO2 results.” Climate Dynamics, 11: 193–209. 78. Laprise, R., D. Caya, M. Giguere, G. Bergeron, G. J. Boer and N. A. McFarlane, 1998. “Climate and climate change in western Canada as simulated by the Canadian regional climate model.” AtmosphereOcean, 36: 119–167. 79. Giorgi, F., L. O. Mearns, C. Shields and L. McDaniel, 1998. “Regional nested model simulations of present day and 2xCO2 climate over the central plains of the US.” Climatic Change, 40: 457–493. 80. May, W., and E. Roeckner, 2001. 
“A time-slice experiment with the ECHAM4 AGCM at high resolution: the impact of horizontal resolution on annual mean climate change.” Climate Dynamics, 17: 407–420. 81. Jamieson, P. D., J. R. Porter, J. Goudriaan, J. T. Ritchie, H. V. Keulen and W. Stol, 1998. “A comparison of the models AFRCWHEAT2, CERES-Wheat, Sirius, SUCROS2 and SWHEAT with measurements from wheat growth under drought.” Field Crops Research, 55: 23–44. 82. Landau, S., R. A. C. Mitchell, V. Barnett, J. J. Colls, J. Craigon, J. L. Moore and R. W. Paynes, 1998. “Testing winter wheat simulation models’ predictions against observed UK grain yields.” Agricultural and Forest Meteorology, 89: 85–99.
178 SCALING METHODS IN REGIONAL INTEGRATED ASSESSMENTS
83. Morgan, M. G., and D. W. Keith, 1995. “Climate-change – subjective judgments by climate experts.” Environmental Science & Technology, 29: A468–A476. 84. Shackley, S., P. Young, S. Parkinson and B. Wynne, 1998. “Uncertainty, complexity and concepts of good science in climate change modelling: Are GCMs the best tools?” Climatic Change, 38: 159–205. 85. Hulme, M., and T. C. Carter, 1999. Representing uncertainty in climate change scenarios and impact studies. In T. Carter, M. Hulme and D. Viner (eds.). Representing uncertainty in climate change scenarios and impact studies – ECLAT-2 red workshop report. Norwich: Climatic Research Unit: in preparation. 86. Katz, R. W., 1999. Techiques for estimating uncertainty in climate change scenarios and impact studies. In T. Carter, M. Hulme and D. Viner (eds.). Representing uncertainty in climate change scenarios and impact studies – ECLAT-2 red workshop report. Norwich: Climatic Research Unit: 25pp. 87. Schneider, S. H., W. Turner and H. Garriga-Morehouse, 1999. “Imaginable surprise in global change science.” Journal of Risk Research, 1: 165–185. 88. New, M. G., and M. Hulme, 2000. “Representing uncertainty in climate change scenarios: a monte-carlo approach.” Integrated Assessment, in press. 89. New, M. G., 2000. Uncertainties about extremes and variability – working group IIc report. In J. Beersma and D. Viner (eds.). Climate scenarios for water-related and coastal impacts. Proceedings of the Third ECLAT-2 Workshop. Norwich, UK: University of East Anglia: in press.
9 Strategic Cyclical Scaling: Bridging Five Orders of Magnitude Scale Gaps in Climatic and Ecological Studies

TERRY L. ROOT¹ AND STEPHEN H. SCHNEIDER²
¹ Center for Environmental Science and Policy, Institute for International Studies, Stanford University, United States
² Department of Biological Sciences and the Institute for International Studies, Stanford University, United States
Scaling Paradigms in Modeling Coupled Systems

Integrated assessments of global change disturbances involve “end-to-end” analyses of relationships and data from the physical, biological and social sciences (e.g., see the reviews and references in Weyant et al. [1], Morgan and Dowlatabadi [2], Rotmans and van Asselt [3], Parson [4], Rothman and Robinson [5], Schneider [6]). Often, data or processes are collected or simulated at vastly different scales – for example, consumption at national scales and consumer preferences at family scales; species competition in field plots the size of a tennis court and species range boundaries at the scale of half a continent; thunderstorms at ten kilometers and the grid cells of a global climate model at hundreds of kilometers; or the response of an experimental plant in a meter-square chamber to increased concentrations of CO2 but a prediction of ecosystem response to CO2 at biome scales of a thousand kilometers. Not only must individual disciplines concerned with the impacts of global change disturbances – like altered atmospheric composition or land use and land cover changes – often deal with five orders of magnitude difference in spatial scales, but integrated studies must bridge scale gaps across disciplinary boundaries as well. For instance, how can a conservation biologist interested in the impacts of climate change on a mountaintop-restricted species scale down climate change projections from a climate model whose smallest resolved element is a grid square 250 kilometers on a side? Or, how can a climate modeler scale up knowledge of evapotranspiration through the sub-millimeter-sized stomata of forest leaves into the hydrological cycle of a climate model resolved at hundreds of kilometers? The latter problem is known as up-scaling (see e.g., Harvey [7]), and the former one,
downscaling (see e.g., Easterling et al. [8]). This cross-disciplinary aspect can be particularly daunting when different scales are inherent in different sub-disciplines with different traditions and methods – a situation particularly likely when crossing natural and social scientific boundaries. Only a greater understanding of the methods and traditions of each of these sub-disciplines by practitioners in the others is likely to facilitate that kind of epistemic boundary bridging across very different disciplines operating at very different scales.

Scaling in Natural Science Forecast Models. First, let us consider natural scientific scale bridging. The ideal for a credible forecasting model is to solve analytically a validated, process-based set of equations accounting for the interacting phenomena of interest. The classical reductionist philosophy in science is a belief that the laws of physics, for example, apply to phenomena at all scales. Thus, in principle, if such laws can be found (usually at small scales), then the solution of the equations that represent such laws will provide reliable forecasts at all scales. This assumes, of course, that all significant phenomena are treated by the laws used in making the forecast. Most climatic models, for example, are developed with the philosophy that solutions to the energy, momentum and mass conservation equations should, in principle, provide a credible forecasting tool. Of course, as all climate modelers have readily admitted for decades (e.g., SMIC [9], IPCC [10]), this “first principles,” bottom-up approach suffers from a fundamental practical limitation: the coupled non-linear equations that describe the physics of the air, seas and ice are far too complex to be solved by any known (or foreseeable) analytic technique. Therefore, approximation techniques are applied in which the continuous differential equations (i.e., the laws upon which small-scale physical theory comfortably rests) are replaced with discrete, numerical finite-difference equations. The smallest resolved spatial element of such discrete models is known as a grid cell. Because the grid cells are larger than important small-scale phenomena, such as the condensation of water vapor into clouds, the influence of a tall mountain on wind flow, or the evapotranspiration from a patch of forest, “sub-grid scale” phenomena cannot be explicitly included in the model. In order to incorporate implicitly the effects of important sub-grid scale phenomena into a model, top-down techniques are used, in which a mix of empiricism and fine-resolution, scale-up submodels is applied. This defines a parametric representation (or “parameterization”) of the influence of sub-grid scale processes at large scales (e.g., grid size) as a function of variables that are resolved at the grid scale. A functional form is defined with free parameters that are calibrated to predict the effects of unresolved, sub-grid scale phenomena by associating them with grid-box averaged “large-scale” variables. Determining whether it is even possible in principle to find valid parameterizations has occupied climate modelers for decades [9]. In order to estimate the ecological consequences at small scales of hypothesized climate change, a researcher must first translate the large-scale
climate-change forecast to a smaller-scale study region. This means, roughly speaking, translating climate information at a 500 × 500 km grid scale to, perhaps, a 50 × 50 m field plot – a ten-thousand-fold extrapolation! How, then, can climatologists map grid-scale projections onto landscapes and even smaller areas? At the outset, one might ask why the atmospheric component of such detailed climate models, also known as general circulation models (GCMs), uses a horizontal resolution as coarse as hundreds of kilometers by hundreds of kilometers. This is easy to understand given the practical limitations of modern, and even foreseeable, computer hardware resources (e.g., Trenberth [11]). A 50 × 50 km resolution is in the range known as “the mesoscale” in meteorology. If such a resolution were applied over the entire earth, then the amount of computation time needed on one of today’s supercomputers to run a year’s worth of weather would be on the order of many days. And 50 km is still roughly two orders of magnitude greater than the size of a typical cloud, three orders of magnitude greater than the typical scale of an ecological study plot, and even more orders of magnitude larger than a dust particle on which raindrops condense. Therefore, in the foreseeable future, climate models inevitably will not produce climate-change information directly from their grid cells at the same scale at which most ecological information is gathered for the “scale-up” approach, nor will they be able to transcend the problem of unresolved sub-grid scale phenomena, such as cloudiness or evapotranspiration from plants. Likewise, ecological modelers who attempt to build models from “first principles” must also utilize top-down parameterizations.
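One common, deliberately simple way of bridging the grid-to-plot gap is the “delta” (change-factor) approach: rather than using raw GCM output at a site, apply the GCM grid-cell *change* to locally observed conditions. The sketch below illustrates the idea only; all numbers are invented.

```python
# Delta (change-factor) downscaling sketch: add the GCM grid-cell change
# (future minus control) to a local observed baseline, so the scenario
# keeps station-scale structure while inheriting the grid-scale signal.
# All numbers are invented for illustration.

def delta_downscale(local_obs, gcm_baseline, gcm_future):
    """Apply the GCM-simulated change to local station observations."""
    deltas = [f - b for f, b in zip(gcm_future, gcm_baseline)]
    return [obs + d for obs, d in zip(local_obs, deltas)]

# Monthly mean temperature (deg C) at a station, vs. the coarse grid
# cell that contains it:
station = [2.1, 4.3, 8.0]       # observed at the field plot
gcm_now = [1.0, 3.0, 7.0]       # GCM control run, grid-cell mean
gcm_2xco2 = [3.5, 5.0, 9.5]     # GCM doubled-CO2 run, grid-cell mean

scenario = delta_downscale(station, gcm_now, gcm_2xco2)
# scenario is approximately [4.6, 6.3, 10.5]
```

The design choice here is that systematic GCM biases at the site cancel when differencing the two runs, although any change in variability is lost, which is one reason the more elaborate statistical and dynamical downscaling methods exist.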
However, the usual scale mismatch between climate and ecological models is why some ecologists have sought to increase the number of large-scale ecological studies and some climatologists are trying to shrink the grid size of climate models. We argue that both are required, along with techniques to bridge the scale gaps, which unfortunately will persist for at least several more decades [12]. Finally, to mobilize action to correct potential risks to environment or society, it is often necessary to establish that a discernible trend has been detected in some variable of importance – the first arrival of a spring migrant or the latitudinal extent of the sea-ice boundary, for example – and that that trend can be attributed to some causal mechanism – a warming of the globe from anthropogenic greenhouse gas increases, for example. Pure associations of trends in some variable of interest are not, by themselves, sufficient to attribute any detectable change above background noise levels to any particular cause: explanatory mechanistic models are needed, and the predictions from such models should be consistent with the observed trend before high confidence can be assigned that a particular impact can be pinned on any suspected causal agent. We will argue that conventional scaling paradigms – top-down associations among variables believed to be cause and effect;
bottom-up mechanistic models run to predict associations, but for which there is no large-scale time series of data for confirmation – are not by themselves sufficient to provide high confidence in the cause-and-effect relationships embedded in integrated assessments. Rather, we will argue that a cycling between top-down associations and bottom-up mechanistic models is needed. Moreover, we cannot assign high confidence to cause-and-effect claims until repeated cycles of testing – in which mechanistic models predict and large-scale data “verify” – show a considerable degree of convergence. We have called this iterative cycling process “strategic cyclical scaling” (SCS) [13], and we elaborate on it a number of times in this article. The SCS paradigm has two motivations: (1) better explanatory capabilities for multi-scale, multi-component interlinked environmental systems (e.g., climate-ecosystem interactions or the behavior of adaptive agents responding to the advent or prospect of climatic changes) and (2) more reliable impact assessments and problem-solving capabilities – predictive capacity – as has been requested by the policy community.

Bottom-up and Top-down Paradigms. The first standard paradigm is often known as “scale-up” or “bottom-up” or perhaps “micro” scale analysis. This is the idealized “first principles” approach attempted by most theoretical studies. That is, empirical observations made at small scales are used to determine possible mechanistic associations or “laws” that are then extrapolated to predict responses at a broad range of scales, particularly larger-scale responses. The second standard paradigm is often referred to as “scale-down” or “top-down” or “macro” scale analysis.
For an ecological example, the correlation between biogeographic patterns (e.g., species range limits) and large-scale environmental variables (e.g., temperature, soil type) provides a means of predicting possible ecological responses to climate change for a broad range of scales, including smaller-scale responses. Each of these paradigms has been used extensively, and we cite below key examples of their applications to assessments of possible ecological consequences of anthropogenic disturbances, with a focus on global climatic change. Deficiencies in the singular use of either top-down or bottom-up models have led to well-known criticisms, also exemplified below. For scale-up, the primary problem is that some of the most conspicuous aspects of a system observable at the smaller scales may not easily reveal the dominant processes that generate large-scale patterns. The mechanisms creating larger-scale responses can be obscured by noisy and/or unrelated local variations. This often leads to an inability to detect at small scales a coherent pattern of associations (i.e., mechanisms) among variables needed for impact assessments at large scales [14]. Scale-down approaches suffer because the associations discovered at large scales may be statistical artifacts that do not, even implicitly, reflect the causal mechanisms needed to provide reliable forecasting [15].
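The top-down, correlational logic can be made concrete with a deliberately minimal “climate envelope”: infer a species’ thermal range limits from large-scale presence data, then ask which sites remain suitable under warming. All numbers below are invented, and, as just noted, such a correlation carries no mechanism and may be a statistical artifact.

```python
# Minimal top-down "climate envelope" sketch: range limits are taken to
# be the min/max mean annual temperature at occupied sites, and
# suitability under a warming scenario is re-evaluated from the same
# envelope. Data are invented for illustration.

def thermal_envelope(presence_temps):
    """Infer thermal range limits from temperatures at occupied sites."""
    return min(presence_temps), max(presence_temps)

def suitable(site_temp, envelope):
    lo, hi = envelope
    return lo <= site_temp <= hi

# Mean annual temperature (deg C) at sites where the species occurs:
occupied = [4.0, 5.5, 6.0, 7.5, 9.0]
env = thermal_envelope(occupied)            # (4.0, 9.0)

# Candidate sites along a south-to-north transect, now and under +3 deg C:
sites = [10.0, 8.0, 6.0, 4.0, 2.0]
now = [suitable(t, env) for t in sites]
future = [suitable(t + 3.0, env) for t in sites]
# now:    [False, True, True, True, False]
# future: [False, False, True, True, True]  -> suitability shifts poleward
```

The sketch reproduces the qualitative prediction discussed in the text (ranges shifting poleward under warming) while exposing the paradigm’s weakness: nothing in the envelope says whether the species can actually disperse into the newly suitable sites.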
Strategic Cyclical Scaling. This led us, therefore, to describe a third, less formalized paradigm, “Strategic Cyclical Scaling” (SCS). That is, macro and micro approaches are cyclically applied in a strategic design that addresses a practical problem: in our original context, the ecological consequences of global climatic change. The paradigm can be applied to many other aspects of integrated assessment as well. Large-scale associations are used to focus small-scale investigations, in order to develop valid causal mechanisms generating the large-scale relationships. Such mechanisms then become the systems-scale “laws” that allow more credible forecasts of the consequences of global change disturbances. “Although it is well understood that correlations are no substitute for mechanistic understanding of relationships,” Levin [16] observed, “correlations can play an invaluable role in suggesting candidate mechanisms for (small-scale) investigation.” SCS, however, is intended not merely as a two-step process, but rather as a continuous cycling between large- and small-scale studies, with each successive investigation building on previous insights from all scales. In other words, SCS involves the continuous refinement of predictive models by cycling between strategically designed large- and small-scale studies, each building on previous work at large and small scales, repeatedly tested by data at both large and small scales to the extent they are available. This paradigm is designed to enhance the credibility of the overall assessment process, including policy analyses, which is why it is labeled “strategic.” We believe that SCS is a more scientifically viable and cost-effective means of improving the credibility of integrated assessment than the isolated pursuit of either the scale-up or the scale-down method.
Knowing when the process has converged is a very difficult aspect of applying SCS, for it requires extensive testing against applicable data that describe important aspects of the system being modeled. When a model is asked to project the future state of the socio-environmental system, however, there are no empirical data, only analogies from past data to use for testing. Therefore, assessing “convergence” will require judgments as well as empirical determinations.
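The iterative structure of SCS can be caricatured in code. The “mechanistic model” and “calibration” below are trivial stand-ins (a one-parameter linear response and invented data, not anything from the chapter); what the sketch shows is the cycle itself: bottom-up prediction, top-down refinement against large-scale data, and an explicit convergence test.

```python
# Schematic SCS loop: alternate a bottom-up mechanistic prediction with
# a top-down recalibration against large-scale data until successive
# predictions converge. Models and data are invented stand-ins.

def scs_cycle(data, mech_model, calibrate, tol=1e-3, max_cycles=50):
    param = 1.0                  # initial guess for the mechanism's free parameter
    prev_pred = None
    for cycle in range(max_cycles):
        # Bottom-up step: predict the large-scale pattern from the mechanism.
        pred = [mech_model(x, param) for x, _ in data]
        if prev_pred is not None and max(
                abs(a - b) for a, b in zip(pred, prev_pred)) < tol:
            return param, cycle  # converged: mechanism reproduces the pattern
        # Top-down step: refine the mechanism against large-scale data.
        param = calibrate(data, param)
        prev_pred = pred
    raise RuntimeError("no convergence - revisit the model structure")

# Large-scale observations: (driver, response) pairs, here response = 2 * driver.
obs = [(1.0, 2.0), (2.0, 4.0), (4.0, 8.0)]

def mech(x, k):                  # toy mechanism: response = k * driver
    return k * x

def refine(data, k):             # move k halfway toward the best fit
    best = sum(y / x for x, y in data) / len(data)
    return k + 0.5 * (best - k)

k, n_cycles = scs_cycle(obs, mech, refine)
# k converges toward 2.0 within a dozen or so cycles
```

The explicit `tol` test is the part the text flags as hard in practice: for real socio-environmental projections there is no future data series against which `pred` can be checked, so the stopping criterion must rest partly on judgment.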
Ecological Responses to Climate Changes as Scaling Examples

Bringing climatic forecasts down to ecological applications at local and regional scales is one way to bridge the scale gap across ecological and climatological studies. Ecologists, however, have also analyzed data and constructed models that apply over large scales, including the size of climatic model grids. A long tradition in ecology has associated the occurrence of vegetation types or the range limits of different species with physical factors such as temperature, soil moisture, land-sea boundaries, or elevation (e.g., Andrewartha and Birch [17]). Biogeography is the field that deals with such associations, and its results have been applied to estimate the large-scale ecological response to climate change.
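Such biogeographic associations can be caricatured as a rule-based classifier mapping two large-scale variables, annual temperature and precipitation, to a biome. The thresholds below are invented for illustration and are far cruder than any published life-zone scheme.

```python
# Toy rule-based biome assignment from two large-scale climate variables.
# Thresholds are invented; real life-zone schemes use biotemperature,
# evapotranspiration ratios, and far finer classes.

def classify_biome(temp_c, precip_mm):
    """Assign a biome from mean annual temperature and precipitation."""
    if temp_c < 0:
        return "tundra"
    if precip_mm < 250:
        return "desert"
    if precip_mm < 750:
        return "grassland"
    return "tropical moist forest" if temp_c >= 20 else "temperate forest"

# A static model of this kind responds to climate change instantaneously:
# the same site simply switches class the moment its inputs change.
site_now = classify_biome(17.0, 900.0)            # "temperate forest"
site_warmed = classify_biome(17.0 + 4.0, 900.0)   # "tropical moist forest"
```

The instantaneous class switch in the last two lines is exactly the static-model behavior criticized in the vegetation-modeling literature: real vegetation transitions unfold through succession over decades to centuries.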
Predicting Vegetation Responses to Climate Change. The Holdridge [18] life-zone classification assigns biomes (for example, tundra, grassland, desert, or tropical moist forest) according to two measurable variables, temperature and precipitation. Other, more complicated large-scale formulas have been developed to predict vegetation patterns from a combination of large-scale predictors (for example, temperature, soil moisture, or solar radiation); the vegetation modeled includes individual species [19], limited groups of vegetation types [20], or biomes [21, 22, 23]. These kinds of models predict vegetation patterns that represent the gross features of actual vegetation patterns, which is an incentive to use them to predict vegetation change with changing climate. As we explore in more detail later, such models have limitations. One criticism of such large-scale approaches is that, although the climate or other large-scale environmental factors may be favorable to some biome, that biome is not always actually present; these approaches thus often predict vegetation to occur where it is absent – so-called commission errors. Other criticisms are aimed at the static nature of such models, which often predict vegetation changes to appear instantaneously at the moment the climate changes, neglecting transient dynamics that often cause a sequence or succession of vegetation types to emerge over decades to centuries following some disturbance (for example, fire), even in an unchanging climate. More recently, dynamic global vegetation models (DGVMs) have been developed to attempt to account for the transitional dynamics of plant ecosystems (e.g., Foley et al. [24], Prentice et al. [25]).

Predicting Animal Responses to Climate Change. Birds. Scientists of the U.S. Geological Survey, in cooperation with Canadian scientists, conduct the annual North American Breeding Bird Survey, which provides distribution and abundance information for birds across the United States and Canada.
From these data, collected by volunteers under strict guidance from the U.S. Geological Survey, shifts in bird ranges and abundances can be examined. Because these censuses were begun in the 1960s, these data provide a wealth of baseline information. Price [26] has used these data to examine the birds that breed in the Great Plains. By using the present-day ranges and abundances for each of the species, Price derived large-scale, empirical-statistical models based on various climate variables (for example, maximum temperature in the hottest month and total precipitation in the wettest month) that provided estimates of the current bird ranges. Then, by using a general circulation model to forecast how a doubling of CO2 would affect the climate variables in the models, he applied the statistical models to predict the possible shape and location of the birds’ ranges. Significant changes were found for nearly all birds examined. The ranges of most species moved north, up mountain slopes, or both. The empirical models assume that these species are capable of moving into these more northerly areas, that is, if habitat is available and no major barriers exist. Such shifting of ranges could cause local extinctions in the more southern portions of the birds’ ranges, and, if movement to the north is impossible, extinctions of entire
species could occur. We must bear in mind, however, that this empirical-statistical technique, which associates large-scale patterns of bird ranges with large-scale patterns of climate, does not explicitly represent the detailed physical and biological mechanisms that could lead to changes in birds’ ranges. Therefore, the detailed maps should be viewed only as illustrative of the potential for very significant shifts under different possible doubled-CO2 climate change scenarios. More refined techniques that also attempt to include actual mechanisms for ecological changes are discussed later. Herpetofauna. Reptiles and amphibians, which together are called herpetofauna (herps for short), differ from birds in many ways that are important to our discussion. First, because herps are ectotherms – meaning their body temperatures adjust to the ambient temperature and radiation of the environment – they must avoid environments where temperatures are too cold or too hot. Second, amphibians must live near water, not only because the reproductive part of their life cycle depends on water, but also because they must keep their skin moist, since they breathe through their skin as well as their lungs. Third, herps cannot disperse as easily as birds because they must crawl rather than fly, and the habitat through which they crawl must not be too dry or otherwise impassable (for example, high mountains or superhighways). As the climate changes, the character of extreme weather events, such as cold snaps and droughts, will also change [27], necessitating relatively rapid habitat changes for most animals. Rapid movements by birds are possible since they can fly, but for herps such movements are much more difficult. For example, Burke (personal communication) noted that during the 1988 drought in Michigan, many more turtles than usual were found dead on the roads.
He assumed they were trying to move from their usual water holes to others that had not yet dried up or that were cooler (for example, deeper). For such species, moving across roads usually means high mortality. In the long term, most birds can readily colonize new habitat as climatic regimes shift, but herp dispersal (colonization) rates are slow. Indeed, some reptile and amphibian species may still be expanding their ranges north even now, thousands of years after the last glacial retreat. Burke and Root (personal communication) began analyzing North American herp ranges in an attempt to determine which, if any, are associated with climatic factors such as temperature, vegetation-greening duration, solar radiation, and so forth. Their preliminary evidence indicates that the northern boundaries of some species’ ranges are associated with these factors, implying that climatic change could have a dramatic impact on the occurrence of herp species. It could also alter the population genetics within species, since there can be genetic differences among populations with respect to climate tolerance. Many more extinctions are possible in herps than in birds, because the forecasted human-induced climatic changes could occur rapidly when compared with the rate of natural climatic changes, and because the dispersal ability of most herps
is painfully slow, even without considering the additional difficulties associated with human land-use changes disturbing their migration paths. The point of these examples in the context of our scaling discussion is that large-scale biogeographic associations may well be able to predict where herps would prefer to live if climate changes, but the detailed dynamics of their adjustments may lead to outcomes very different from those expected if they could simply be transplanted to the new, more appropriate climate space. Transient dynamics and detailed small-scale studies are needed to be more confident that the large-scale associations will turn out to be predictive. Several reptile species could exhibit vulnerability to climatic change because of an unusual characteristic: their sex is determined by the temperature experienced as they develop inside the egg. Such temperature-dependent sex determination makes these animals uniquely sensitive to temperature change, meaning that climatic change could potentially cause dramatic range contractions due to biases in the sex ratios. For example, the European pond turtle, a species whose sex is determined by temperature, colonized England [28] and Denmark [29] during a warm period in the late Ice Age. With the return of colder temperatures, these populations rapidly disappeared. Holman (personal communication) suggested that a combination of shorter summers, which reduced available incubation time, and biased sex ratios, which were due to cooler summers, could easily have caused the swift retreat of this turtle to a more southern range. Most North American turtles are subject to temperature-dependent sex determination [30, 31]; their populations can vary over the years from 100% males to 100% females [32, 33].
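The knife-edge character of temperature-dependent sex determination can be seen in a minimal sketch of the common “warm-female” pattern, in which the proportion of female hatchlings rises steeply across a narrow band around a pivotal incubation temperature (the pivotal temperature and steepness below are illustrative, not species-specific values):

```python
import math

def proportion_female(incubation_temp_c, pivotal_c=29.0, steepness=1.5):
    """Logistic sketch of 'warm-female' TSD: fraction of a clutch developing
    as female versus nest temperature. Parameter values are illustrative."""
    return 1.0 / (1.0 + math.exp(-steepness * (incubation_temp_c - pivotal_c)))
```

A nest at the pivotal temperature yields a balanced clutch, but a 2 °C shift either way pushes the ratio past 95:5, which is how modest regional warming or cooling can translate into heavily biased sex ratios.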
Janzen [33] found that sex ratios were closely linked to mean July temperature, and he demonstrated that under conditions predicted by climate change models, populations of turtles would regularly produce only females within 50 years. In general, the animals most likely to be affected earliest by climatic change are those whose populations are fairly small and limited to isolated habitat islands. As a result of human-generated landscape changes, many reptiles now fall into this category, as do many other animals. Indeed, temperature-dependent sex-determined species are especially likely to suffer from extreme sex-ratio biases, and therefore their sensitivity to rapid climate change appears potentially more severe than that of most other animals. The latter assertion, of course, is a bottom-up projection based on mechanistic understanding of temperature-sex linkages, but this conjecture has yet to be tested at the large scales where climatic changes are taking place – a step that would complete the first cycle of an SCS-oriented analysis. Other Taxa. There are estimates that a number of small mammals living near isolated mountaintops (which are essentially habitat islands) in the Great Basin would become extinct given typical global change scenarios [34]. Recent studies of small mammals in Yellowstone National Park show that statistically significant changes in both abundances and physical sizes of
some species occurred with historical climate variations (which were much smaller than most projected climate changes for the next century), but there appear to have been no simultaneous genetic changes [35]. Therefore, it is likely that climate change in the twenty-first century could cause substantial alteration to biotic communities, even in protected habitats such as Yellowstone National Park. In addition, the biomass of macro-zooplankton in waters off southern California has decreased dramatically as surface waters warmed [36]. Similarly, a statistical study suggests that the range of Edith’s checkerspot butterfly in western North America has shifted northward and upward in association with long-term regional warming trends [37, 38]. Meta-analysis of roughly a thousand species suggests that temperature trends over the last few decades of the 20th century were sufficient to create a discernible impact on the traits of plants and animals widely scattered around the globe [39, 40]. These associations at large scales were established by predicting how each species should have reacted to warming based on micro-scale studies of physiological ecology. The meta-analysis then showed that the vast majority of those species that exhibited changes did so in the direction expected from micro-scale understanding of mechanisms. That disproportion at the large scale allowed the “discernible” statement of IPCC 2001 [39] to be scientifically credible. This has been, so far, only one cycle of SCS, but already it has allowed a confident conclusion in the assessment of climatic impacts on plants and animals.
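Statistically, this disproportion argument is a sign test: under the null hypothesis that warming has no systematic effect, a species showing change is equally likely to shift with or against the predicted direction. A sketch using only the standard library (the counts are hypothetical, not the figures behind IPCC 2001):

```python
from math import comb

def sign_test_p(n_consistent, n_total):
    """One-sided binomial tail P(X >= n_consistent) under H0: p = 0.5."""
    return sum(comb(n_total, k)
               for k in range(n_consistent, n_total + 1)) / 2.0 ** n_total

# Hypothetical tally: 80 of 100 changed species shifted in the direction
# that micro-scale physiology predicted.
p_value = sign_test_p(80, 100)
```

With 80 of 100 species consistent, the tail probability falls below one in a hundred million, so chance becomes untenable as an explanation of the pattern; that is the logic that licenses a “discernible impact” statement.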
Scaling Analysis of Ecological Responses Top-Down Approaches. The biogeographic approach summarized above is an example of a top-down technique (for example, Holdridge’s [18] life-zone classification), in which data on abundances or range limits of vegetation types or biomes are overlain on data of large-scale environmental factors such as temperature or precipitation. When associations among large-scale biological and climatic patterns are revealed, biogeographic rules expressing these correlations graphically or mathematically can be used to forecast changes in vegetation driven by given climate changes. Price’s [26] maps of the changes in bird ranges are also an example of such a top-down approach. As noted earlier, though, such top-down approaches are not necessarily capturing the important mechanisms responsible for the association. Scientists therefore strive to look at smaller scales for processes that account for the causes of biogeographic associations, in the belief that the laws discovered at smaller scales will apply at large scales as well. Bottom-Up Approaches. Small-scale ecological studies have been undertaken at the scale of a plant or even a single leaf [41] to understand how, for example, increased atmospheric CO2 concentrations might directly enhance photosynthesis, net primary production, or water-use efficiency. Most of these
studies indicate increases in all these factors, increases that some researchers have extrapolated to predict global change impacts on ecosystems [42, 43]. To what extent can we reasonably project from experiments that use single leaves or single plants to more complex and larger environmental systems, such as an entire tundra [44] or forest ecosystem [45, 46, 47]? Forest ecosystem models driven only by climate change scenarios in which CO2 was doubled in a general circulation model typically project dramatic alteration to the current geographic patterns of global biomes [21, 23, 48]. But when such forest prediction models are modified to explicitly account for some of the possible physiological changes resulting from doubled CO2, such as change in water-use efficiency, they use the empirical results from small-scale studies to extrapolate to whole forests. This bottom-up method dramatically reduces the percentage of land area predicted to experience biome change for any given climate change scenario [49]. Not all modelers have chosen to scale up from small-scale experiments. Prentice et al. [21], for example, building on the work of McNaughton and Jarvis [50], excluded extrapolations of direct CO2/water-use efficiency effects from their model. At the scale of a forest covering a watershed, the relative humidity within the canopy, which significantly influences the evapotranspiration rate, is itself partly regulated by the forest. In other words, if an increase in water-use efficiency from direct CO2 effects decreased the transpiration from each tree, the aggregate forest effect would be to lower relative humidity at the watershed scale. This, in turn, would increase transpiration, thereby offsetting some of the direct CO2/water-use efficiency improvements observed experimentally at the scale of a single leaf or plant.
Moreover, leaves that have reduced evapotranspiration will be warmer, and if a forest full of them is heated by the sun it can increase the surface-layer temperature, driving the planetary boundary layer higher and thereby increasing the volume that boundary-layer water vapor can occupy. This too lowers the relative humidity at leaf level, which in turn increases evapotranspiration rates – another negative feedback on water-use efficiency at the forest watershed scale that would not be perceived by experiments conducted in isolated chambers or even at the scale of a few tens of meters in actual forests. Regardless of the extent to which these forest-scale negative feedback effects will offset inferences made from bottom-up studies of isolated plants or small-scale field experiments, the following general conclusion emerges: the bottom-up methods may be appropriate for some processes at some scales in environmental science, but they cannot be considered credible without some sort of testing at the scale of the system under study. Schneider [51] has made the same point for climate models, as do several authors in the edited volume by Ehleringer and Field [52] for vegetation modeling. Harte et al. [53] used actual field experiments with heaters to simulate global warming and to demonstrate top-down/bottom-up connections.
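The canopy-humidity feedback can be made concrete with a toy fixed-point model in which transpiration moistens the canopy air and the resulting humidity suppresses transpiration (all functional forms and parameter values here are invented for illustration, not drawn from the cited studies):

```python
def canopy_transpiration(stomatal_factor, e_max=10.0, rh_background=0.4,
                         moistening=0.15):
    """Equilibrium canopy transpiration E (arbitrary units) when the canopy
    partly sets its own relative humidity:
        RH = rh_background + moistening * E
        E  = e_max * stomatal_factor * (1 - RH)
    Solving the pair gives the closed form below. Illustrative only."""
    s = e_max * stomatal_factor
    return s * (1.0 - rh_background) / (1.0 + s * moistening)

e_reference = canopy_transpiration(1.0)    # current stomatal behavior
e_doubled_co2 = canopy_transpiration(0.5)  # leaf chambers: conductance halved
```

Halving leaf-level conductance cuts canopy transpiration by only about 29% in this toy model, not 50%: the drier canopy air claws back part of the leaf-level water saving, which is the negative feedback invisible to chamber experiments.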
Combined Top-Down and Bottom-Up Approaches. To help resolve the deficiencies of the top-down biome forest models mentioned previously, more process-based, bottom-up approaches such as forest-gap models have been developed [48, 54, 55]. These models include individual species and can calculate vegetation dynamics driven by time-evolving climatic change scenarios. Such models typically assume a random distribution of seed germination in which juvenile trees of various species appear. Whether these trees grow well or just barely survive depends on whether they are shaded by existing trees or grow in relatively well-lit gaps, what soil nutrients are available, and other environmental factors such as solar radiation, soil moisture, and temperature. Under ideal conditions, individual tree species are assigned a sigmoid (S-shaped) curve for growth in trunk diameter. So far, this approach may appear to be the desired process-based, bottom-up technique, an impression reinforced by the spatial scale usually assumed, about 0.1 hectares. But the actual growth rate calculated in the model for each species has usually been determined by multiplying the ideal growth-rate curve by a series of growth-modifying functions that attempt to account for the limiting effects of nutrient availability, temperature stress, and so forth. The growth-modifying function for temperature is usually determined empirically at a large scale by fitting an upside-down U-shaped curve, whose maximum is at the temperature midway between the average temperature at the species’ northern range limit and the average temperature at its southern range limit. In this scheme, temperature is expressed as growing degree-days (the sum, over the growing season, of each day’s degrees above some threshold temperature). In essence, this technique combines large-scale, top-down empirical pattern correlations into an otherwise mechanistic bottom-up modeling approach.
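In skeletal form, the gap-model growth calculation just described multiplies an ideal diameter-growth curve by an empirically fitted, inverted-U temperature modifier expressed in growing degree-days (the functional forms and numbers below are generic stand-ins, not taken from any particular gap model):

```python
def growing_degree_days(daily_mean_temps_c, base_c=5.0):
    """Sum of each day's degrees above a threshold over the growing season."""
    return sum(max(0.0, t - base_c) for t in daily_mean_temps_c)

def temperature_modifier(gdd, gdd_min, gdd_max):
    """Inverted-U growth modifier fitted top-down: zero at the degree-day
    totals of the species' northern (gdd_min) and southern (gdd_max)
    range limits, one midway between them."""
    if not gdd_min < gdd < gdd_max:
        return 0.0
    return 4.0 * (gdd - gdd_min) * (gdd_max - gdd) / (gdd_max - gdd_min) ** 2

def diameter_increment(d_cm, d_max_cm, rate, gdd, gdd_min, gdd_max):
    """Ideal sigmoid-style diameter growth scaled by the modifier."""
    ideal = rate * d_cm * (1.0 - d_cm / d_max_cm)
    return ideal * temperature_modifier(gdd, gdd_min, gdd_max)
```

The hybrid character is visible in the code itself: `diameter_increment` is mechanistic in spirit, but `temperature_modifier` is a large-scale empirical correlation smuggled in as a multiplier.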
Although this combined technique refines both approaches, it too has been criticized, because such large-scale, top-down inclusions are not based on the physiology of individual species and lead to confusion between the fundamental and realized ranges [56]. (The fundamental range is the geographic space in which a given species could theoretically survive – for example, if its competitors were absent – and the realized range is where it actually exists.) The question then is: what limits the realized range, particularly at the southern boundary? Further, more refined models should include factors such as seed dispersal, so that plant recruitment is related to the preexisting population and is not simply the result of a random number generator in the computer code. Studies using SCS Approaches. As noted, problems with the singular use of either top-down or bottom-up methods have led to well-known criticisms. A search of the literature [53, 57, 58] provides examples of a refined approach to analyzing across large and small scales – SCS. The need to combine scales in the context of a strategic assessment (i.e., global problem-solving) was succinctly stated by Vitousek [59: p173]: “... just as ecosystem ecology has advanced in large part through the use of ecosystem-level
measurements and experiments (i.e. scale-down), the science of global ecology is likely to develop most efficiently if it is driven by regional and global measurements designed to answer globally significant research questions.” Bird case study. The first example is gleaned from the work of one of us (TLR). One strategy for mitigating a warming of the globe by several °C by the year 2050 is for policy makers to implement an abatement policy. Such a policy, of course, could be economically damaging to some sectors. Before policy makers (or the general public, for that matter) would be willing to endorse a strong mitigation policy, they would like a sense of what the possible consequences of such warming might be. By analogy, a patient will be much more willing to take powerful drugs or make a dramatic change in lifestyle or eating habits if the physician explains that a severe heart attack is probable without such changes. Humans resist change, particularly major change, unless the actual (or perceived) cost of not changing is high enough (e.g., death from a heart attack). Hence, knowing the possible ecological “cost” of various warming scenarios would be very helpful for policy makers [60, 61, 62, 63]. With that strategic end and systems understanding both in mind, Root [64] examined the biogeographic patterns of all wintering North American birds. Large-scale abundance data require a veritable army of census takers, and the National Audubon Society has amassed such “armies” to facilitate the collection of the Christmas Bird Count data. Using these data, Root [65] determined that a large proportion of species have their average distribution and abundance patterns associated with various environmental factors (e.g., northern range limits and average minimum January temperature). The scaling question is: What mechanisms at small scales (e.g., competition, thermal stress, etc.) may have given rise to the large-scale associations?
Root [66] first tested the hypothesis that local physiological constraints may be causing the particular large-scale temperature/range-boundary associations. She used published small-scale studies on the wintering physiology of key species and determined that roughly half of the songbirds wintering in North America extend their winter ranges no farther north than regions where they can maintain their body temperature through the winter nights by raising their metabolic rates to no more than roughly 2.5 times their basal metabolic rate. The actual physiological mechanisms generating this “2.5 rule” [67] required further investigation at small scales. Field and laboratory studies of various physiological parameters (e.g., stored fat, fat-metabolizing enzymes, various hormones) are being conducted on a subset of those species that were found in the large-scale study to have northern range boundaries apparently constrained by physiological mechanisms in response to nighttime minimum temperature. Several intensive small-scale studies were executed along a longitudinal transect running from Michigan to Alabama in order to examine patterns on a geographic scale. Root [58] found that the amount of stored fat (depot fat) may be limiting, in that the estimated amount of available fat at dawn under extreme conditions
was much lower for those individuals near their northern range boundary than for those in the middle of their range. To determine the relative importance of colder temperatures versus longer nights (and thereby fewer hours of daylight available for foraging), Root [40] has embarked on a larger regional study. In addition to the one longitudinal transect, she incorporated another transect, which runs from Iowa to Louisiana. This larger-scale design was selected based on previous small-scale studies because it allows a decoupling of the effects of day length and temperature. The decoupling, in turn, is important to the strategic problem of determining whether or not scenarios of global warming might have a large effect (e.g., if temperature proves to be more important than day length). Preliminary results suggest that temperature, more than day length, is explanatory [40]. These results, in turn, suggest that global temperature changes would likely cause rapid range and abundance shifts by at least some bird species. Rapid changes in the large-scale patterns (e.g., ranges) of birds are possible. Indeed, Root’s [58] finding of significant annual shifts in species’ ranges led to yet another large-scale, top-down study, this time looking for associations in the year-to-year variations (rather than average range limits or abundances as before) between large-scale patterns of birds and climate variables. The first step has been to quantify the year-to-year variations of selected species. The next step is to perform time-series analyses of 30 years of wintering bird abundance data with key climate variables (e.g., number of days below X °C). Preliminary analysis for only one species at two sites shows that in warmer years more individuals winter farther north than in colder years [68].
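Root's “2.5 rule” described above can be phrased as a feasibility test: a site lies inside a species' potential winter range only if holding body temperature through the night there requires no more than about 2.5 times basal metabolic rate. A sketch assuming a standard Scholander-type linear increase in metabolic cost below a lower critical temperature (the conductance and critical-temperature values are invented for a hypothetical songbird):

```python
def required_metabolic_ratio(night_temp_c, lower_critical_c=25.0,
                             conductance=0.04):
    """Multiple of basal metabolic rate needed to hold body temperature,
    rising linearly below the lower critical temperature (Scholander-type
    model; parameter values hypothetical)."""
    if night_temp_c >= lower_critical_c:
        return 1.0
    return 1.0 + conductance * (lower_critical_c - night_temp_c)

def inside_winter_range(night_min_temp_c, max_ratio=2.5):
    """Root-style range test against the ~2.5 x BMR ceiling."""
    return required_metabolic_ratio(night_min_temp_c) <= max_ratio
```

For these numbers the ceiling is crossed at -12.5 °C, so the isotherm of that nighttime minimum would be the predicted northern boundary; warming moves the isotherm, and the predicted range with it.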
While no claim is offered at this point in the research for the generality of these preliminary results, which suggest strong quantitative links between year-to-year changes in bird abundances and climate variability, this example does permit a clear demonstration of the SCS paradigm. Moreover, extending this type of analysis to other taxa (reptiles in this case) may prove to be a fruitful approach. Additionally, combining such information from various taxa will allow a much better understanding of the possible ecological consequences of climatic change (e.g., see IPCC [39, 60] for an update and references to the recent literature). COHMAP case study. Our first example of the use of the strategic cyclical scaling type of approach dealt primarily with a single investigator. The second example is that of a team effort, which has the advantage of entraining dozens of diverse people and facilities from many institutions, but has the disadvantage of requiring coordination of all those researchers and facilities. The COHMAP study has been noteworthy because of its important findings with regard to “no-analog” vegetation communities during the transition from ice age to interglacial about 12,000 years ago (e.g., Overpeck et al. [69]). But this large team effort went well beyond the gathering of local field data at enough sites to
document the paleohistories of particular lakes or bogs – they compiled the local studies into large-scale maps. The COHMAP researchers strategically designed their field and lab work to complement large-scale climatic modeling studies using GCMs. Accepting the premise that climate changes from 20,000 years ago to the present were forced by changes in the Earth’s orbital geometry, greenhouse gas concentrations, and sea surface temperatures, and knowing that such changes can be applied as boundary forcing conditions for GCMs, the COHMAP team used a GCM to produce 3,000-year-apart maps of changing climate from these varying boundary conditions. They used regressions to associate pollen percentages from field data with climatic variables (January and July temperatures and annual precipitation). They drew large-scale maps of fossil pollen abundance every three thousand years from 18,000 years ago to the present. The top-down formulas relating pollen abundances to climate were then used to reconstruct how climate had changed. These paleoclimate maps were then compared to GCM maps to (a) help explain the causes of climatic and ecological changes, and (b) help validate the regional forecast skill of GCMs driven by specified large-scale external forcings. The latter is a practical problem of major policy significance, because the credibility of GCMs’ regional climatic anomaly forecasts is controversial in the context of global warming and its ecological consequences. Thus, this validation exercise is a clear, strategically focused attempt at model validation at the scale of the model’s resolution. The investigation did not end there, but cycled between previous large- and small-scale studies, which led to further predictions using GCMs.
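The transfer-function step – calibrate a regression between modern pollen percentages and modern climate, then apply it to fossil pollen to reconstruct past climate – can be sketched with ordinary least squares on synthetic data (COHMAP's actual response surfaces were multivariate and far richer than a single straight line):

```python
def fit_line(x, y):
    """Ordinary least-squares slope and intercept for y = slope * x + b."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x

# Synthetic modern calibration set: spruce pollen (%) vs observed July temp (C)
pollen_pct = [40.0, 30.0, 20.0, 10.0, 5.0]
july_temp_c = [12.0, 14.0, 16.0, 18.0, 19.0]
slope, intercept = fit_line(pollen_pct, july_temp_c)

def reconstruct_july_temp(fossil_pollen_pct):
    """Invert a fossil pollen percentage into a July temperature estimate."""
    return slope * fossil_pollen_pct + intercept
```

A fossil sample with 25% spruce pollen then reconstructs to 15 °C under these synthetic data; mapping such estimates at 3,000-year steps yields the paleoclimate maps compared against the GCM output.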
To enhance this validation exercise, Kutzbach and Street-Perrott [70] developed a regional-scale hydrological model to predict paleo-lake levels in Africa, and used these coupled models to compare lake levels over the past 18,000 years – computed from GCM climates driving the hydrology model – with paleo-lake shore changes inferred from fossil field data at micro scales. The comparisons between coupled GCM-hydrological models and paleo-lake data were broadly consistent, and when combined with the vegetation-change map comparisons between GCM-produced pollen abundances and field data on pollen abundances, these comparisons have provided a major boost to the credibility of GCM regional projections of forced climate changes. Webb et al. [71] reported on the multi-institutional, multi-scale, interdisciplinary COHMAP effort, with its strategic design, its cycling between scale-up and scale-down approaches, and its drawing on many disciplines. Not only do the participants deserve credit for experimenting with such a progressive, strategic research design that addresses earth systems problems across many scales and cycles between scale-up and scale-down methods, but credit should also go to the many institutions that cooperated and the foundations that funded this non-traditional, SCS-like effort. We believe that as long as most discipline-oriented research institutions and funding agencies remain organized in disciplinary sub-units, many more multi-institutional projects like COHMAP that implicitly or explicitly use the SCS-like paradigm as their
interdisciplinary research design will be needed to address the ecological implications of climate change. We also believe that fundamental, structural institutional changes to foster interdisciplinary, multi-institutional research are long overdue. The Webb et al. [71] results showed that during the most rapid transition from ice age to interglacial conditions, about 12,000 years ago, large tracts of “no-analog” habitats existed, in which communities of plants had no resemblance to communities found today. This suggests that future plant communities driven by anthropogenic climate changes would also contain many no-analog components. Strategic cyclical scaling, however, is not intended as only a two-step process, but as a continuous cycling between large- and small-scale studies, with each successive investigation building on previous insights from all scales and with testing at all scales as an integral step, in the hope of achieving some measure of convergence as further cycles are applied. This approach is designed to enhance the credibility – and thus policy utility – of the overall assessment process (see also Vitousek [59], Harte and Shaw [72]), which is why strategic is the first word in strategic cyclical scaling.
Integrated Assessment via Coupled Socio-Natural Systems Models Abrupt behavior as an emergent property of a coupled socio-natural system: an ocean model coupled to an optimizing energy-economy model. Paleoclimate reconstruction and model simulations suggest there are multiple equilibria for thermohaline circulation (THC) in the North Atlantic (also known as the “conveyor belt”), including complete collapse of this circulation, which is responsible for the equable climate of Europe. Switching between the equilibria can occur as a result of temperature or freshwater forcing. Thus, the pattern of THC that exists today could be modified by an infusion of fresh water at higher latitudes or through high-latitude warming. These changes may occur if climate change increases precipitation, causes glaciers to melt, or warms high latitudes more than low latitudes, as is often projected [10, 39]. Further research has incorporated this behavior into coupled climate-economic modeling, characterizing additional emergent properties of the coupled climate-economic system [73]. Again, this coupled multi-system behavior is not revealed by single-discipline sub-models alone – e.g., choices of model parameter values such as the discount rate determine whether emissions mitigation decisions made in the near term will prevent a future THC collapse or not – clearly a property not obtainable from an economic model per se. If warming reduces the ability of surface water to sink in high latitudes, this interferes with the inflow of warm water from the south. Such a slowdown will cause local cooling, re-energizing the local sinking and serving as a stabilizing negative feedback on the slowdown. On the other hand, the initial
slowdown of the strength of the Gulf Stream reduces the flow of salty subtropical water to the higher latitudes of the North Atlantic. This would act as a destabilizing positive feedback on the process by further decreasing the salinity of the North Atlantic surface water and reducing its density and thus further inhibiting local sinking. The rate at which the warming or freshwater forcing is applied to the coupled system could determine which of these opposing feedbacks dominates, and subsequently whether a THC collapse occurs (e.g., Schneider and Thompson [74]). Recent research efforts have connected this abrupt non-linearity to integrated assessment of climate change policy. William Nordhaus’ DICE model [75] is a simple optimal growth model. Given a set of explicit value judgments and assumptions, the model generates an “optimal” future forecast for a number of economic and environmental variables. It does this by maximizing discounted utility (satisfaction from consumption) by balancing the costs to the economy of greenhouse gas (GHG) emissions abatement (a loss in a portion of GDP caused by higher carbon energy prices) against the costs of the buildup of atmospheric GHG concentrations. This buildup affects the climate, which in turn causes “climate damage,” a reduction in GDP determined by the rise in globally averaged surface temperature due to GHG emissions. In some sectors and regions such climate damages could be negative – i.e. benefits – but DICE aggregates across all sectors and regions (see, for example, the discussions in Chapters 1 and 19 of IPCC [39]) and thus assumes that this aggregate measure of damage is always a positive cost. Mastrandrea and Schneider [73] have developed a modified version of Nordhaus’ DICE model called E-DICE, containing an enhanced damage function that reflects the higher likely damages that would result when abrupt climate changes occur. 
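The balancing act inside DICE can be caricatured in a single period: abatement removes a convex slice of gross output, climate damage removes another slice that grows with temperature, and the optimization trades one against the other along the whole time path. A one-period sketch with illustrative coefficients (DICE proper is a multi-decade optimal-growth model, and these are not Nordhaus' calibrated values):

```python
def net_output_fraction(control_rate, temp_rise_c,
                        abatement_coef=0.03, abatement_exp=2.8,
                        damage_coef=0.0045, damage_exp=2.0):
    """Fraction of gross world output remaining after abatement cost and
    climate damage, in the spirit of DICE's aggregate cost and damage
    functions (all coefficients illustrative).

    control_rate: fraction of emissions abated, 0..1
    temp_rise_c:  globally averaged surface warming (C)
    """
    abatement_cost = abatement_coef * control_rate ** abatement_exp
    damage_fraction = damage_coef * temp_rise_c ** damage_exp
    return (1.0 - abatement_cost) / (1.0 + damage_fraction)
```

Zero abatement and zero warming leave output untouched; full abatement here costs about 3% of output, while 3 °C of warming costs about 4%, and because damage is aggregated into a single positive cost, any sectoral benefits are netted out, exactly as the text notes.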
If climate changes are smooth and thus relatively predictable, then the foresight afforded increases the capacity of society to adapt; hence damages will be lower than for very rapid or unanticipated abrupt changes – “surprises” such as a THC collapse. It is likely that, even in a distant future society, the advent of abrupt climatic changes would reduce adaptability and thus increase damages relative to smoothly varying, more foreseeable changes. Since the processes that the models ignore through their high degree of aggregation require heroic parameterizations, the quantitative results are used only as a tool for insights into potential qualitative behaviors. Because of the abrupt non-linear behavior of the SCD (simple climate demonstrator) model [74], the E-DICE model produces a result that is also qualitatively different from DICE, which lacks internal abrupt non-linear dynamics. A THC collapse is obtained for rapid and large CO2 increases in the SCD model. An “optimal” solution of conventional DICE can produce an emissions profile that triggers such a collapse in the SCD model. However, this abrupt non-linear event can be prevented when the damage function in DICE is modified to account for enhanced damages
SCALING IN INTEGRATED ASSESSMENT 195
created by this THC collapse and THC behavior is incorporated into the coupled climate-economy model. The coupled system contains feedback mechanisms that allow the profile of carbon taxes to rise in response to the enhanced damages, lowering emissions sufficiently to prevent the THC collapse in an optimization run of E-DICE. The enhanced carbon tax actually “works” to lower emissions and thus avoid future damages. Keller et al. [76] support these results, finding that significantly reducing carbon dioxide emissions to prevent or delay potential damages from an uncertain and irreversible future climate change such as THC collapse may be cost-effective. But the amount of near-term mitigation the DICE model “recommends” to reduce future damages is critically dependent on the discount rate (e.g., see Fig. 1 from Mastrandrea and Schneider [73]). Figure 9.1 is a “cliff diagram” showing the equilibrium THC overturning for different combinations of climate sensitivity and pure rate of time preference (PRTP) values. As the PRTP decreases, “normal” circulation is preserved for disproportionately higher climate sensitivities, since the lower PRTP leads to larger emissions reductions in E-DICE and thus it takes a higher climate sensitivity to reach the “cliff.” Thus, for low discount rates (PRTP of less than 1.8% in one formulation – see Fig. 4 in Mastrandrea and Schneider [73]) the present value of future damages creates a sufficient carbon tax to keep emissions below the trigger level for the abrupt non-linear collapse of the THC a century later. But a higher discount rate sufficiently reduces the present value of even catastrophic long-term damages such that an abrupt non-linear THC collapse becomes an emergent property of the coupled socio-natural system – with the discount rate of the 21st century becoming the parameter that most influences the 22nd century behavior of the modeled climate.
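The leverage of the discount rate is visible in a back-of-the-envelope present-value calculation – a sketch with invented numbers, not the DICE utility machinery. A fixed damage felt 120 years from now shrinks by roughly a factor of 19 as a constant annual discount rate rises from 0.5% to 3%:

```python
def present_value(damage, years, rate):
    """Present value of a damage incurred `years` ahead under a constant
    annual discount rate (a stand-in for the PRTP, not the DICE formula)."""
    return damage / (1.0 + rate) ** years

# A hypothetical abrupt-change damage of 10 (arbitrary units), 120 years out.
for rate in (0.005, 0.015, 0.03):
    print(f"rate {rate:.1%}: PV = {present_value(10.0, 120, rate):.2f}")
```

The three present values come out near 5.50, 1.68 and 0.29 respectively: a catastrophic 22nd-century damage that dominates the optimization at a low PRTP becomes almost invisible at a 3% rate, which is the mechanism behind the “cliff.”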
Although these highly aggregated models are not intended to provide high-confidence quantitative projections of coupled socio-natural system behaviors, we believe that the bulk of integrated assessment models used to date for climate policy analysis – which do not include any such abrupt non-linear processes – will not be able to alert the policymaking community to the importance of abrupt non-linear behaviors. At the very least, the ranges of estimates of future climate damages should be expanded beyond those suggested by conventional analytic tools to account for such non-linear behaviors (e.g., Moss and Schneider [77]). Role of SCS in the coupled E-DICE/SCD integrated assessment model. The Mastrandrea and Schneider [73] example just presented has scale bridging – explicitly and implicitly – embedded in virtually every aspect. First of all, the DICE model uses a hypothetical economic “agent” to maximize utility given a number of assumed conditions. This is a major scale assumption – that individual behavior amounts only to maximizing utility as Nordhaus [75] defines it (the logarithm of consumption). Indeed, there is no SCS in this formulation, just an assumption that individual utility-consumption maximizing
196 STRATEGIC CYCLICAL SCALING
Figure 9.1: “Cliff diagram” of equilibrium THC overturning as PRTP and climate sensitivity are varied. Two states of the system – “normal” (20 Sv) and “collapsed” (0 Sv) THC – are seen here. The numbers are only for illustration, as several parameters relevant to the conditions under which the THC collapse occurs are not varied across their full range in this calculation, which is shown primarily to illustrate the emergent property of high sensitivity to discounting in a coupled socio-natural model (Source: Mastrandrea and Schneider [73]).
behaviors of some can be scaled up to a global agent that maximizes utility. An SCS approach could have been to modify the agent formulation (based on micro-scale studies of individual behavior) such that as people got richer they changed their fondness for material consumption and their preferences switched to other attributes – equity or nature protection, perhaps. Clearly, an integrated assessment model such as DICE has not yet begun to exploit the possibilities for alternative formulations via an SCS approach. Second, the DICE integrated assessment model assumes that people – that is, their agent – discount with a fixed social rate of time preference. Some empirical studies at micro levels suggest that people do not discount via standard exponential formulae, but rather use hyperbolic discounting (e.g., Heal [78]) – a very high initial discount rate, but a diminishing rate for far-distant events. This formulation would substantially increase the present value of catastrophic events like a THC collapse in the 22nd century, as is shown in one of the Mastrandrea and Schneider [73] cases. That, in turn, leads to much higher “optimal” carbon control rates and thus a reduced likelihood of a collapsed THC in the distant future. Again, this scale-up assumption for discounting in DICE is not treated via SCS in the current formulation, but could be if the modeling design were to explicitly account for how agents might behave given the broad set of preferences in different societies (e.g., see Van Asselt and Rotmans [79]) or for alternative future states of the simulation.
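The difference between the two discounting conventions is easy to make concrete. The snippet below compares standard exponential weights with a generalized hyperbolic schedule; the functional form and all parameter values are illustrative assumptions on our part, not those of Heal [78] or Mastrandrea and Schneider [73]:

```python
def exponential_weight(t, r=0.03):
    """Standard exponential discounting at a constant annual rate r."""
    return (1.0 + r) ** -t

def hyperbolic_weight(t, k=0.25, a=0.4):
    """Generalized hyperbolic discounting: a high implied rate in the near
    term that falls off for distant dates (parameters purely illustrative)."""
    return (1.0 + k * t) ** -a

for t in (1, 10, 150):
    print(t, round(exponential_weight(t), 3), round(hyperbolic_weight(t), 3))
```

With these illustrative parameters the hyperbolic schedule discounts the near term more steeply than 3% exponential, yet at 150 years it retains a weight of about 0.23 versus about 0.012 – roughly twenty times more present-value weight on a 22nd-century catastrophe, which is why the choice of convention matters so much for “optimal” abatement.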
Additionally, the ocean model is a reduced-form (scaled-up) representation of a micro law – salty and colder water is denser than warmer and fresher water. But SCS is not entirely absent from this example, since the parameters used in the THC overturning model (derived from micro laws like the oceanic density formula) were obtained by adjusting the simple model to reproduce the behaviors of much more comprehensive GCMs. These GCMs do cycle between large and small scales in determining their parametric representations of sub-grid scale phenomena, and thus their use to “tune” the SCD model by adjusting its free parameters to obtain behaviors similar to those of the more complex models does involve cycling across scales. Clearly, more refined formulations of coupled socio-natural macro models that include better micro representations of agency, discounting and definitions of utility extending beyond material consumption are badly needed in the next generation of integrated assessment models that attempt to include abrupt system changes (see e.g., Table 2 in Schneider [6]). Social dimensions, such as the scaling of understanding from the level of individual cognition to social class to institutional organizations, have only begun to be considered in integrated assessment modeling. Further refinements in the natural-system sub-models could include (a) better treatment of moisture transport into the North Atlantic region based on smaller-scale analyses or (b) micro damage functions built from the bottom up – for example, explicit representation of fisheries, forests or agriculture in a Europe cooled by THC collapse – rather than a simple top-down aggregated damage function in which GDP loss is proportional to the square of the warming (the DICE formulation).
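The micro law in question can be written as a linearized equation of state, ρ = ρ0[1 − α(T − T0) + β(S − S0)]. The sketch below uses typical textbook coefficient values – assumptions of ours, not the parameters actually tuned against GCMs – to show that both warming and meltwater freshening lighten high-latitude surface water, the precondition for a sinking slowdown:

```python
RHO0, T0, S0 = 1027.0, 10.0, 35.0   # reference density (kg/m^3), temperature (C), salinity (psu)
ALPHA, BETA = 1.7e-4, 7.6e-4        # thermal expansion and haline contraction coefficients

def density(temp_c, salinity_psu):
    """Linearized equation of state for seawater: density falls with
    temperature and rises with salinity (illustrative coefficients)."""
    return RHO0 * (1.0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0))

cold_salty = density(2.0, 35.0)       # typical high-latitude sinking water
warmed     = density(7.0, 35.0)       # ... after 5 C of warming
freshened  = density(2.0, 34.0)       # ... after 1 psu of meltwater freshening
print(cold_salty > warmed)      # True: warming makes the water lighter
print(cold_salty > freshened)   # True: freshening makes it lighter too
```

Either perturbation reduces the density contrast that drives the sinking, which is exactly how the temperature and freshwater forcings enter the box model discussed above.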
Further disaggregation into regional resolution for both the socio and natural sub-models would add another layer of cross-scale integration, and SCS would again be a technique to help design alternative formulations – as has already been attempted in regional integrated assessment models like IMAGE (e.g., Alcamo [80]) to study climate change – though in the context of smooth, rather than abrupt, modes of variation.

Conclusions

We have suggested that progress in bridging order-of-magnitude differences in scale may be aided by cycling across scales, in which micro information about processes and mechanisms is used to make predictions at larger scales, data at larger scales are then used to test those predictions, and further micro refinements are made in light of the testing at macro levels. We show that this process is easiest to apply when the distance across the disciplines being coupled is not too great – within ecology, or ecology coupled to climate, our prime examples developed above. We also suggest – and give an example – that this becomes more difficult in practice when natural and social scientific sub-models are coupled – at least until an interdisciplinary epistemic community emerges in which each sub-discipline learns
enough about the methods and traditions of the other sub-disciplines to communicate meaningfully. We also note that although convergence of cycling across scales may occur for some problems, where fundamental data to test predictions – at micro or macro scales – are lacking, or where functional relationships among variables are still highly uncertain, convergence may not be easily obtained. It is difficult to fashion a set of rules for applying SCS, but clearly the keys are to have (a) a reasonable idea of processes/mechanisms at smaller scales, (b) some relevant data sets at large scales to test the predictions of models built on the micro-level understanding, and (c) the development and fostering of interdisciplinary teams, and eventually interdisciplinary communities, capable of unbiased peer review of cross-scale, cross-disciplinary analyses in which the bulk of the originality is in the integrative aspects rather than in advances in the sub-disciplines being coupled. Several of the contributions in this volume are excellent examples of the progress that is being made in fostering the development of such an interdisciplinary community, progress that is essential to the growth and credibility of the integrated assessment of climate change.
References

1. Weyant, J., O. Davidson, H. Dowlatabadi, J. Edmonds, M. Grubb, E. A. Parson, R. Richels, J. Rotmans, P. R. Shukla, and R. S. J. Tol, 1996. Integrated assessment of climate change: An overview and comparison of approaches and results. In: J. P. Bruce, H. Lee, and E. F. Haites (eds.). Climate Change 1995. Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge: Cambridge University Press: 367–396.
2. Morgan, M. G., and H. Dowlatabadi, 1996. “Learning from integrated assessment of climate change.” Climatic Change, 34: 337–368.
3. Rotmans, J., and M. van Asselt, 1996. “Integrated assessment: a growing child on its way to maturity – an editorial.” Climatic Change, 34: 327–336.
4. Parson, E. A., 1996. “Three Dilemmas in the Integrated Assessment of Climate Change. An Editorial Comment.” Climatic Change, 34: 315–326.
5. Rothman, D. S., and J. B. Robinson, 1997. “Growing Pains: A Conceptual Framework for Considering Integrated Assessments.” Environmental Monitoring and Assessment, 46: 23–43.
6. Schneider, S. H., 1997. “Integrated Assessment Modeling of Global Climate Change: Transparent Rational Tool for Policy Making or Opaque Screen Hiding Value-laden Assumptions?” Environmental Modeling and Assessment, 2, No. 4: 229–248.
7. Harvey, L. D., 2000. “Upscaling in Global Change Research.” Climatic Change, 44: 223.
8. Easterling, W. E., L. O. Mearns, and C. Hays, 2001. “Comparison of agriculture impacts of climate change calculated from high and low resolution climate model scenarios. Part II: The effect of adaptations.” Climatic Change, 51: 173–197.
9. Study of Man’s Impact on Climate (SMIC), 1972. Cambridge, Massachusetts: MIT Press.
10. Intergovernmental Panel on Climate Change, 1996a. J. T. Houghton, L. G. Meira Filho, B. A. Callander, N. Harris, A. Kattenberg, and K. Maskell (eds.). Climate Change 1995 – The Science of Climate Change. The second assessment report of the IPCC: contribution of working group I. Cambridge, England: Cambridge University Press: 572 pp.
11. Trenberth, K. E. (ed.), 1992. Climate System Modeling. Cambridge, England: Cambridge University Press: 788 pp.
12. Root, T. L., and S. H. Schneider, 1993. “Can large-scale climatic models be linked with multi-scale ecological studies?” Conservation Biology, 7: 256–270.
13. Root, T. L., and S. H. Schneider, 1995. “Ecology and climate: research strategies and implications.” Science, 269: 334–341.
14. Dawson, T. E., and F. S. Chapin III, 1993. Grouping plants by their form-function characteristics as an avenue for simplification in scaling between leaves and landscapes. In: J. R. Ehleringer and C. B. Field (eds.). Scaling Physiological Processes: Leaf to Globe. New York: Academic Press: 313–319.
15. Jarvis, P. G., 1993. Prospects for bottom-up models. In: J. R. Ehleringer and C. B. Field (eds.). Scaling Physiological Processes: Leaf to Globe. New York: Academic Press.
16. Levin, S. A., 1993. Concepts of scale at the local level. In: J. R. Ehleringer and C. B. Field (eds.). Scaling Physiological Processes: Leaf to Globe. New York: Academic Press.
17. Andrewartha, H. G., and L. C. Birch, 1954. The Distribution and Abundance of Animals. Chicago, Illinois, USA: University of Chicago Press.
18. Holdridge, L. R., 1967. Life Zone Ecology. Tropical Science Center, San José, Costa Rica.
19. Davis, M. B., and C. Zabinski, 1992. Changes in geographical range resulting from greenhouse warming effects on biodiversity in forests. In: R. L. Peters and T. E. Lovejoy (eds.). Global Warming and Biological Diversity. New Haven, Connecticut: Yale University Press.
20. Box, E. O., 1981. Macroclimate and Plant Forms: An Introduction to Predictive Modeling in Phytogeography. The Hague: Dr W. Junk Publishers.
21. Prentice, I. C., 1992. Climate change and long-term vegetation dynamics. In: D. C. Glenn-Lewin, R. A. Peet, and T. Veblen (eds.). Plant Succession: Theory and Prediction. New York: Chapman & Hall.
22. Melillo, J. M., A. D. McGuire, D. W. Kicklighter, B. Moore III, C. J. Vorosmarty, and A. L. Schloss, 1993. “Global climate change and terrestrial net primary production.” Nature, 363: 234–240.
23. Neilson, R. P., 1993. “Transient ecotone response to climatic change: some conceptual and modelling approaches.” Ecological Applications, 3: 385–395.
24. Foley, J. A., S. Levis, I. C. Prentice, D. Pollard, and S. L. Thompson, 1998. “Coupling dynamic models of climate and vegetation.” Global Change Biology, 4: 561–579.
25. Prentice, I. C., W. Cramer, S. P. Harrison, R. Leemans, R. A. Monserud, and A. M. Solomon, 1992. “A global biome model based on plant physiology and dominance, soil properties and climate.” Journal of Biogeography, 19: 117–134.
26. Price, J., 1995. Potential Impacts of Global Climate Change on the Summer Distribution of Some North American Grassland Birds. Ph.D. dissertation, Wayne State University: Detroit, Michigan: 540 pp.
27. Karl, T. R., R. W. Knight, D. R. Easterling, and R. G. Quayle, 1995. “Trends in U.S. climate during the twentieth century.” Consequences, 1: 3–12.
28. Stuart, A. J., 1979. “Pleistocene occurrences of the European pond tortoise (Emys orbicularis L.) in Britain.” Boreas, 8: 359–371.
29. Degerbøl, M., and H. Krog, 1951. Den europæiske Sumpskildpadde (Emys orbicularis L.) i Danmark. København: C. A. Reitzels Forlag.
30. Ewert, M. A., and C. E. Nelson, 1991. “Sex determination in turtles: diverse patterns and some possible adaptive values.” Copeia, 1991: 50–69.
31. Ewert, M. A., D. R. Jackson, and C. E. Nelson, 1994. “Patterns of temperature dependent sex determination in turtles.” Journal of Experimental Zoology, 270: 3–15.
32. Mrosovsky, N., and J. Provancha, 1992. “Sex ratio of hatchling loggerhead sea turtles: data and estimates from a 5-year study.” Canadian Journal of Zoology, 70: 530–538.
33. Janzen, F. J., 1994. “Climate change and temperature-dependent sex determination in reptiles.” Proceedings of the National Academy of Sciences, U.S.A., 91: 7487–7490.
34. MacDonald, K. A., and J. H. Brown, 1992. “Using montane mammals to model extinctions due to global change.” Conservation Biology, 6: 409–425.
35. Hadley, E. A., 1997. “Evolutionary and ecological response of pocket gophers (Thomomys talpoides) to late-Holocene climate change.” Biological Journal of the Linnean Society, 60: 277–296.
36. Roemmich, D., and J. McGowan, 1995. “Climatic warming and the decline of zooplankton in the California Current.” Science, 267: 1324–1326.
37. Parmesan, C., 1996. “Climate and species’ range.” Nature, 382: 765–766.
38. Parmesan, C., T. L. Root, and M. R. Willig, 2000. “Impacts of extreme weather and climate on terrestrial biota.” Bulletin of the American Meteorological Society, 81: 443–450.
39. IPCC, 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Cambridge, UK: Cambridge University Press.
40. Root, T. L., and S. H. Schneider, 2002. Climate Change: Overview and Implications for Wildlife. In: S. H. Schneider and T. L. Root (eds.). Wildlife Responses to Climate Change: North American Case Studies. National Wildlife Federation, Washington D.C.: Island Press: 1–56.
41. Idso, S. B., and B. A. Kimball, 1993. “Tree growth in carbon dioxide enriched air and its implications for global carbon cycling and maximum levels of atmospheric CO2.” Global Biogeochemical Cycles, 7: 537–555.
42. Idso, S. B., and A. J. Brazel, 1984. “Rising atmospheric carbon dioxide concentrations may increase streamflow.” Nature, 312: 51–53.
43. Ellsaesser, H. W., 1990. “A different view of the climatic effect of CO2 – updated.” Atmósfera, 3: 3–29.
44. Oechel, W. C., S. Cowles, N. Grulke, S. J. Hastings, B. Lawrence, T. Prudhomme, G. Riechers, B. Strain, D. Tissue, and G. Vourlitis, 1994. “Transient nature of CO2 fertilization in Arctic tundra.” Nature, 371: 500–503.
45. Bazzaz, F. A., 1990. “The response of natural ecosystems to the rising global CO2 levels.” Annual Review of Ecology and Systematics, 21: 167–196.
46. Bazzaz, F. A., and E. D. Fajer, 1992. “Plant life in a CO2-rich world.” Scientific American, 266: 68–74.
47. DeLucia, E. H., J. G. Hamilton, S. L. Naidu, R. B. Thomas, J. A. Andrews, A. Finzi, M. Lavine, R. Matamala, J. E. Mohan, G. R. Hendrey, and W. H. Schlesinger, 1999. “Net primary production of a forest ecosystem with experimental CO2 enrichment.” Science, 284: 1177–1179.
48. Smith, T. M., H. H. Shugart, G. B. Bonan, and J. B. Smith, 1992. Modeling the potential response of vegetation to global climate change. In: F. I. Woodward (ed.). Advances in Ecological Research: The Ecological Consequences of Global Climate Change. New York: Academic Press: 93–116.
49. Vegetation/Ecosystem Modeling and Analysis Project, 1995. “Vegetation/Ecosystem Modeling and Analysis Project (VEMAP): comparing biogeography and biogeochemistry models in a continental-scale study of terrestrial ecosystem responses to climate change and CO2 doubling.” Global Biogeochemical Cycles, 9: 407–437.
50. McNaughton, K. G., and P. G. Jarvis, 1991. “Effects of spatial scale on stomatal control of transpiration.” Agricultural and Forest Meteorology, 54: 279–301.
51. Schneider, S. H., 1979. Verification of parameterization in climate modeling. In: W. L. Gates (ed.). Report of the JOC Study Conference on Climate Models: Performance, Intercomparison and Sensitivity Studies. World Meteorological Organization–International Council of Scientific Unions: 728–751.
52. Ehleringer, J. R., and C. B. Field (eds.), 1993. Scaling Physiological Processes: Leaf to Globe. New York: Academic Press: 388 pp.
53. Harte, J., M. Torn, F. R. Chang, B. Feiferek, A. Kinzig, R. Shaw, and K. Shen, 1995. “Global warming and soil microclimate: results from a meadow-warming experiment.” Ecological Applications, 5: 132–150.
54. Botkin, D. B., J. R. Janak, and J. R. Wallis, 1972. “Some ecological consequences of a computer model of forest growth.” Journal of Ecology, 60: 849–872.
55. Pastor, J., and W. M. Post, 1988. “Response of northern forests to CO2-induced climate change.” Nature, 334: 55–58.
56. Pacala, S. W., and G. C. Hurtt, 1993. Terrestrial vegetation and climate change: integrating models and experiments. In: P. Kareiva, J. Kingsolver, and R. Huey (eds.). Biotic Interactions and Global Change. Sunderland, Massachusetts: Sinauer Associates: 57–74.
57. Wright, H. E., J. E. Kutzbach, T. Webb III, W. F. Ruddiman, F. A. Street-Perrott, and P. J. Bartlein (eds.), 1993. Global Climates Since the Last Glacial Maximum. Minneapolis: University of Minnesota Press.
58. Root, T. L., 1994. “Scientific/philosophical challenges of global change research: a case study of climatic changes on birds.” Proceedings of the American Philosophical Society, 138: 377–384.
59. Vitousek, P. M., 1993. Global dynamics and ecosystem processes: scaling up or scaling down? In: J. R. Ehleringer and C. B. Field (eds.). Scaling Physiological Processes: Leaf to Globe. New York: Academic Press: 169–177.
60. Intergovernmental Panel on Climate Change, 1996b. Climate Change 1995 – Impacts, Adaptations and Mitigation of Climate Change: Scientific-Technical Analysis. The second assessment report of the IPCC: contribution of working group II. Cambridge: Cambridge University Press.
61. Intergovernmental Panel on Climate Change, 1996c. Climate Change 1995 – Economic and Social Dimensions of Climate Change. The second assessment report of the IPCC: contribution of working group III. Cambridge: Cambridge University Press.
62. Smith, J. B., and D. A. Tirpak (eds.), 1990. The Potential Effects of Global Climate Change on the United States. New York, NY: Hemisphere Publishing Corporation.
63. U.S. Congress, Office of Technology Assessment, 1993. Preparing for an Uncertain Climate – Volume II, OTA-O-568. Washington D.C.: U.S. Government Printing Office.
64. Root, T. L., 1988a. Atlas of Wintering North American Birds. Chicago, Ill.: University of Chicago Press: 312 pp.
65. Root, T. L., 1988b. “Environmental factors associated with avian distributional boundaries.” Journal of Biogeography, 15: 489–505.
66. Root, T. L., 1988c. “Energy constraints on avian distributions and abundances.” Ecology, 69: 330–339.
67. Diamond, J., 1989. “Species borders and metabolism.” Nature, 337: 692–693.
68. Schneider, S. H., and T. L. Root, 1998. Impacts of Climate Changes on Biological Resources. In: M. J. Mac, P. A. Opler, C. E. Puckett Haecker, and P. D. Doran (eds.). Status and Trends of the Nation’s Biological Resources, 2 vols. U.S. Department of the Interior, U.S. Geological Survey: Reston, VA. Vol. 1: 89–116.
69. Overpeck, J. T., R. S. Webb, and T. Webb III, 1992. “Mapping eastern North American vegetation change over the past 18,000 years: no analogs and the future.” Geology, 20: 1071–1074.
70. Kutzbach, J. E., and F. A. Street-Perrott, 1985. “Milankovitch forcing of fluctuations in the level of tropical lakes from 18 to 0 kyr BP.” Nature, 317: 130–134.
71. Webb, T. III, W. F. Ruddiman, F. A. Street-Perrott, V. Markgraf, J. E. Kutzbach, P. J. Bartlein, H. E. Wright, Jr., and W. L. Prell, 1993. Climatic changes during the past 18,000 years: regional syntheses, mechanisms, and causes. In: H. E. Wright, Jr., J. E. Kutzbach, T. Webb III, W. F. Ruddiman, F. A. Street-Perrott, and P. J. Bartlein (eds.). Global Climates Since the Last Glacial Maximum. Minneapolis: University of Minnesota Press: 514–535.
72. Harte, J., and R. Shaw, 1995. “Shifting dominance within a montane vegetation community: results of a climate-warming experiment.” Science, 267: 876–880.
73. Mastrandrea, M., and S. H. Schneider, 2001. “Integrated assessment of abrupt climatic changes.” Climate Policy, 1: 433–449.
74. Schneider, S. H., and S. L. Thompson, 2000. A simple climate model used in economic studies of global change. In: S. J. DeCanio, R. B. Howarth, A. H. Sanstad, S. H. Schneider, and S. L. Thompson (eds.). New Directions in the Economics and Integrated Assessment of Global Climate Change. Washington, DC: The Pew Center on Global Climate Change: 59–80.
75. Nordhaus, W. D., 1994. Managing the Global Commons: The Economics of Climate Change. Cambridge, MA: MIT Press.
76. Keller, K., B. M. Bolker, and D. F. Bradford, 2000. Paper presented at the Yale/NBER/IIASA workshop on potential catastrophic impacts of climate change, Snowmass, CO.
77. Moss, R. H., and S. H. Schneider, 2000. Uncertainties in the IPCC TAR: Recommendations to lead authors for more consistent assessment and reporting. In: R. Pachauri, T. Taniguchi, and K. Tanaka (eds.). Guidance Papers on the Cross Cutting Issues of the Third Assessment Report of the IPCC. Geneva: World Meteorological Organization: 33–51.
78. Heal, G., 1997. “Discounting and climate change.” Climatic Change, 37: 335–343.
79. Van Asselt, M. B. A., and J. Rotmans, 1995. Uncertainty in integrated assessment modeling: A cultural perspective-based approach. RIVM report no. 461502009, National Institute of Public Health and the Environment (RIVM), Bilthoven, the Netherlands.
80. Alcamo, J. (ed.), 1994. IMAGE 2.0: Integrated Modeling of Global Climate Change. Dordrecht, The Netherlands: Kluwer.
10 The Syndromes Approach to Scaling – Describing Global Change on an Intermediate Functional Scale H.-J. SCHELLNHUBER, M.K.B. LÜDEKE AND G. PETSCHEL-HELD Potsdam Institute for Climate Impact Research, Potsdam, Germany
Abstract A dynamic description of Global Change on an intermediate functional scale, on the basis of approximately independent sub-models, is elaborated. Sixteen of these sub-models are primarily identified as Hazardous Functional Patterns (HFPs) generating non-sustainable trajectories (Syndromes) of the civilisation/nature system. After an “idealistic deduction” of the main concepts, an iterative procedure – formally based on Qualitative Differential Equations – is introduced which allows the systematic generalisation of case-study-based knowledge to obtain consistent HFPs on a coarser functional scale. The method is illustrated with the Sahel HFP. Key Words: intermediate functional scale, qualitative modelling, Syndrome, Global Change
Dealing with Global Change – The Scaling Problem as One Crucial Aspect of Complexity and Uncertainty

Today, it seems obvious that Global Change (GC) research has to take care of the high level of complexity present in the interactions between civilisation and nature. Complexity – synonymous with the multitude and non-linearity of the interrelations between and within all the various facets of global environmental change and its socio-economic drivers and impacts on their respective spatial, temporal and functional scales – brings about a number of generic difficulties:

■ There is no such thing as “prediction” in a strong sense, i.e. even in principle it is not possible to give exact statements like “in 2010 the global – or national – CO2 emissions will be 42.3 Gt”. Usually, those modellers of Global Change who actually give these kinds of statements qualify them by adding: “Don’t trust the numbers, just trust the trends”. But why should we do that? Models per se – though probably the only way to reflect the world’s complexity at all – do not guarantee the correct reflection of complexity.

■ One important factor generating uncertainty is the scaling problem. The scientific knowledge about relevant processes of GC is usually on the level of their “natural scale” (defined by the scale of observation or the scale of the underlying “first principles”). The interactions of processes across different scales then require up- or downscaling procedures which mostly transcend the scope of the original scientific knowledge about these processes (e.g., see the examples in Root and Schneider [1]).

■ Any kind of political strategy against some non-sustainable development within Global Change brings about the risk of triggering a vast variety of extra effects, either wanted or unwanted. This situation is comparable to a patient attending a doctor. Prescribing a medication against one symptom might well induce another symptom, which itself has to be treated. This cycle repeats itself, unless one is able to identify the underlying disease – or Syndrome – in its totality and to prescribe a general therapy against the disease itself.
These considerations illustrate that it is necessary to take Earth System complexity into account in an appropriate manner when dealing with GC [2, 3]. There are further aspects of Global Change which compound the difficulties of modelling and analysing the current path of apparent non-sustainability. There is not only the “complexity-induced uncertainty” discussed above, but also the “modular” or “holistic/reductionistic” uncertainty about single issues and relations that are part of the highly interconnected global network of interrelations. Knowledge of many relations constituting the overall complexity is vague, incomplete or only qualitatively available. It is not necessarily the case that these uncertainties can be resolved over time: social, cultural or political issues are qualitative in nature, and one cannot expect that there will be, e.g., a quantification of the old Weberian relation between Protestantism and the ethics of capitalistic activity [4] – let alone a “proof” in the sense of mathematics or physics. At this point one might argue that complexity and non-quantifiability constitute natural constraints on any kind of modelling or formal analysis. The dilemma is that the mathematics and physics of complex systems tell us that we actually need some kind of formal analysis: due to the non-linearities in the system, counter-intuitive surprises can happen which can only be detected or anticipated by the use of advanced calculus. But it is not only this experience which tells us about the need for a formal, preferably model-based approach; there is also the tension between policy relevance and waiting for empirical evidence: anthropogenic climate change never would have become a subject for the public without modelling exercises!
Some people therefore start “to quantify the non-quantifiable”, sometimes by methods like willingness-to-pay, i.e. approaches which are intrinsically consistent but remain ethically doubtful (e.g., when monetising human life). From our point of view this approach is also questionable because it neglects the advantages of qualitative research. To sum up, we state that the complexity of the Earth System requires a modelling approach [5] capable of incorporating this complexity in an appropriate manner by allowing incomplete, vague or qualitative knowledge to be integrated into its formal framework. In this paper we will elaborate on the Syndromes approach, which is an attempt to meet these aims.

Idealistic Deduction versus Realistic Induction

To explain the general systems-theoretical idea behind the Syndrome concept, we will start with a (necessarily) hypothetical situation which would allow Syndrome identification in a deductive way. Let us assume a system of ordinary differential equations (ODEs) which represents the whole global dynamical system, including all relevant aspects of the natural, social, economic and cultural spheres and their complex interactions. Here the spatial aspects are included by discretisation, which means that the interaction between different scales is formulated explicitly. Thus this hypothetical system of ODEs includes the correct methods of upscaling (e.g., as a simple case, the summing up of CO2 fluxes from all the heterogeneous sources) and downscaling (e.g., the regionalisation of climate change to calculate its feedback on carbon sources). For a more complete review of scaling issues in the anthropospheric part of the Earth System see, e.g., Gibson et al. [6], while for the natural-science side Root and Schneider [1] give further examples.
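The asymmetry between the two directions is easy to make concrete: summing fluxes upward is exact, while distributing a total downward requires an extra assumption. A minimal sketch (all figures, source names, and the proportional-shares rule are invented for illustration):

```python
# Hypothetical per-source CO2 fluxes (GtC/yr); the numbers are invented.
sources = {"fossil_energy": 6.0, "land_use_change": 1.5, "cement": 0.3}

# Upscaling here is exact: the global flux is just the sum of the sources.
global_flux = sum(sources.values())

# Downscaling is not: distributing a changed global total back to the
# sources needs an extra assumption - here, fixed proportional shares.
target_total = 5.0
shares = {k: v / global_flux for k, v in sources.items()}
downscaled = {k: share * target_total for k, share in shares.items()}

print(global_flux)                             # 7.8
print(round(downscaled["fossil_energy"], 2))   # 3.85
```

The proportional-shares rule is exactly the kind of assumption that transcends the original scientific knowledge: nothing in the summed fluxes tells us that mitigation would scale all sources equally.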
Using any set of variables, we would now expect a very large number of these variables and most of the equations of this system to be closely interlinked, leaving us with an intractable problem. One option for tackling such a complex system is to decompose its dynamics into several components that are approximately independent. One way to achieve this is to transform the variables of the system in such a way that it decomposes into several only weakly interacting sub-systems. As an example from physics, consider the well-known two-body problem, which separates completely into one sub-model for the relative motion of the masses and one (simple) sub-model for the dynamics of the centre of mass. By introducing a small third mass the two sub-systems become weakly coupled [7]. In general, decoupling can be achieved by a canonical transformation. The resulting sub-models are denoted in our terminology as functional cause-effect patterns. However, since we are mainly interested in non-sustainable behaviours, we concentrate in the Syndrome concept on those functional patterns that exhibit at least one non-sustainable trajectory, the so-called Syndrome-prone or Hazardous Functional Patterns (HFPs). The class of non-sustainable trajectories resulting
208 THE SYNDROMES APPROACH TO SCALING
from one of these functional cause-effect patterns is finally called a “Syndrome”, and we will see that it represents a sub- or Detailed Functional Pattern (DFP). As described above, these patterns are constructed so that their solutions decouple as much as possible. Complete decoupling is generally not feasible, so some degree of inter-pattern interaction remains, which will often be found on the macro-scale (e.g., macro-economic relations, climate change caused by greenhouse gas emissions, etc). In the above (hypothetical) formulation of the Global System, spatial interactions (and therefore also spatial interactions across different scales) are not distinguished from other, more functional forms of relations. This illustrates the equivalence of scaling and functional aspects in formulating the sub-systems according to the given criteria. The basic variables for the formulation of the functional patterns are called, again in analogy to medicine, the “Symptoms of Global Change” [8]. Their number should be much lower than the number of variables one expects for the hypothetical complete world model, so one important way to go from the hypothetical fundamental variables to the Symptoms is aggregation, which has to be done in such a way that the interactions of the aggregate Symptoms are (almost) sound aggregates of the underlying basic interactions. This aggregation rule, leading to a coarser functional resolution in accordance with lower-scale processes, can be denoted as “functional scaling up”.
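The decomposition into (almost) decoupled sub-systems invoked above can be illustrated numerically with the two-body example mentioned earlier. The sketch below is not from the chapter: it replaces gravity with a linear spring for simplicity and checks that the centre-of-mass sub-system is fully decoupled, i.e. its momentum stays constant while the relative coordinate oscillates.

```python
# Two bodies coupled by a mutual force (a linear spring here, for simplicity).
# In the original variables (x1, x2) the equations are coupled; in terms of the
# centre of mass X and the relative coordinate r = x1 - x2 they decouple:
# X'' = 0 (free motion) and r'' = -k (1/m1 + 1/m2) r (harmonic oscillation).
m1, m2, k = 1.0, 3.0, 2.0
x1, x2 = 0.0, 1.0
v1, v2 = 0.5, -0.1
dt = 1e-4

for _ in range(100_000):
    f = -k * (x1 - x2)       # force on body 1; body 2 feels the opposite force
    v1 += f / m1 * dt
    v2 += -f / m2 * dt
    x1 += v1 * dt
    x2 += v2 * dt

# The centre-of-mass momentum is conserved: this sub-system is decoupled from
# the oscillating relative motion.
p_com = m1 * v1 + m2 * v2
print(p_com)                 # stays at its initial value 1.0*0.5 + 3.0*(-0.1)
```

Adding a weak third body would couple the two sub-models only weakly, which is exactly the situation the functional cause-effect patterns are meant to approximate.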
Another criterion refers to time scale: variables which are slow compared to the relevant time scales of GC (the latter being decades to centuries) can be interpreted approximately as constant boundary conditions and can therefore be omitted as dynamic variables or Symptoms, while very fast processes can be described by their equilibrium states as determined by the variables of the relevant time scale, which reduces the number of dynamic relations (“adiabatic technique”, see e.g., Haken [9]). As mentioned above, a further important criterion for the selection of Symptoms is the goal of having neither too many nor too strong interlinkages between the Symptoms involved in one Hazardous Functional Pattern and those of another pattern. Most Symptoms are spatially resolved with a “natural” spatial scale as determined by the patterns of interactions they are involved in, which also makes Hazardous Functional Patterns and Syndromes local or regional entities. The whole process is summarised in Figure 10.1. Now there are several well-accepted reasons why the starting point of the above argumentation, the system of ODEs which represents the whole global dynamical system, is necessarily hypothetical: our knowledge of the relevant functional relationships is:
■ uncertain
■ incomplete, including the lack of up- and down-scaling rules
■ partly of an irreducibly qualitative nature
■ partly controversial.
Therefore, the strictly deductive way of identification of Hazardous Functional Patterns (HFPs) and their Syndromes is intractable but illustrates the general
Figure 10.1: Hypothesised process of the deduction of Hazardous Functional Patterns producing Syndromes as classes of non-sustainable time behaviours
idea and the concepts which can be maintained in a more inductive process. Syndrome identification has to start from:
■ the limited but presently available knowledge of quantitative or qualitative functional relationships with respect to Global Change
■ the conditions of the validity of these interactions
■ the knowledge of problematic environmental and socio-economic developments.
This knowledge is exemplified by the Bretherton diagram for the natural-science part of Global Change research [10] and a diagram for socio-economic drivers and consequences of land use changes at the top of Figure 10.2 [11]. Besides this (often large-scale) functional knowledge, detailed, small-scale knowledge from case studies (e.g., Kasperson et al. [12]) is available (bottom of Fig. 10.2). The functional resolution of HFPs and therefore of the Syndromes (centre of Fig. 10.2) lies in between these two extreme scales (“intermediate functional scale”). Thus one avoids getting lost in the details of an immense number of different case studies and, on the other hand, being too general to meet the necessary minimal differentiation (e.g., for at least weak forms of prognosis), especially at the civilisation-nature interface. While we describe in the remaining part of this section a more inductive approach to tackle this problem, a more formal iterative method based on qualitative differential equations is introduced in the next section (see also Petschel-Held and Lüdeke [13]). Now, given the information base of functional knowledge, the first step is to define variables describing Global Change (“Symptoms”) according to the criteria defined above in the hypothetical deduction of the Syndrome concept from a complete Earth System model: they must help to decompose the complex global system into almost independent sub-systems while the important interactions between the original variables must remain discernible. This implies choices about aggregation and “functional resolution”. A first list of about 80 of these variables or “Symptoms” was suggested by the WBGU [14] and developed further by the QUESTIONS project [15]. Then, the second step is to group the huge number of interactions between the Symptoms into functional patterns producing syndromatic behaviours (and possibly others).
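The second step just described, grouping interactions into candidate functional patterns, can be caricatured as clustering an interaction graph whose nodes are Symptoms and whose edges are known interactions. The sketch below is only illustrative: the Symptom names and edges are invented, and real pattern identification applies further spatial and functional conditions beyond connectivity.

```python
# Toy sketch of the grouping step: Symptoms are nodes, known interactions are
# edges, and candidate functional patterns are the connected clusters of the
# interaction graph.  Names and edges are invented for illustration.
from collections import defaultdict

interactions = [
    ("globalisation of markets", "agricultural intensification"),
    ("agricultural intensification", "loss of biodiversity"),
    ("urban growth", "air pollution"),
]

adj = defaultdict(set)
for a, b in interactions:
    adj[a].add(b)
    adj[b].add(a)

def clusters(adj):
    """Depth-first search for connected components of the symptom graph."""
    seen, out = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        out.append(comp)
    return out

patterns = clusters(adj)
print(len(patterns))   # two candidate patterns in this toy example
```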
Here the spatial and functional conditions of the validity of interactions play an important role: a necessary condition for two particular interactions which have one Symptom in common (e.g., globalisation of markets causing agricultural intensification, and agricultural intensification leading to loss of biodiversity) to belong to one sub-model is spatial coincidence. But this is not sufficient, because further functional conditions may assign the interactions/symptoms to, e.g., different economic sectors or groups of actors, which may coexist at one location assuming a realistic spatial resolution (e.g., poverty of different social groups in a city will have different effects on, e.g., migration). A list of 16 Syndromes, suggested by the WBGU [14] and developed further by the QUESTIONS project [15], is given in Table 10.1. The short descriptions given in the table reflect important aspects of the respective Hazardous Functional Pattern (HFP). Due to the limited knowledge base used
Figure 10.2: Approaches to identification of Hazardous Functional Patterns (“intermediate functional scale”): general functional knowledge and problematic global developments (top-down) versus generalisation of detailed case studies (bottom-up).
for the identification of the functional patterns and the Syndromes, they must be interpreted as educated first guesses which have to be corroborated in the usual process of verification/falsification/modification. Because Hazardous Functional Patterns are very abstract and deep causal concepts, they cannot be checked directly. Instead, results deduced from them (syndromatic or non-syndromatic ones) have to be compared with observed phenomena. Figure 10.3 gives one example of a Hazardous Functional Pattern. This pattern generates, as one class of its possible behaviours, the SAHEL SYNDROME [15].
Table 10.1: Syndromes of global change

a) Utilisation Syndromes
SAHEL SYNDROME: Overcultivation of marginal land
OVEREXPLOITATION SYNDROME: Overexploitation of natural ecosystems
RURAL EXODUS SYNDROME: Environmental degradation through abandonment of traditional agricultural practices
DUST BOWL SYNDROME: Non-sustainable agro-industrial use of soils and bodies of water
KATANGA SYNDROME: Environmental degradation through depletion of non-renewable resources
MASS TOURISM SYNDROME: Development and destruction of nature for recreational ends
SCORCHED EARTH SYNDROME: Environmental destruction through war and military action

b) Development Syndromes
ARAL SEA SYNDROME: Environmental damage of natural landscapes as a result of large-scale projects
GREEN REVOLUTION SYNDROME: Environmental degradation through the introduction of inappropriate farming methods
ASIAN TIGERS SYNDROME: Disregard for environmental standards in the course of rapid economic growth
FAVELA SYNDROME: Environmental degradation through uncontrolled urban growth
URBAN SPRAWL SYNDROME: Destruction of landscapes through planned expansion of urban infrastructures
DISASTER SYNDROME: Singular anthropogenic environmental disasters with long-term impacts

c) Sink Syndromes
SMOKESTACK SYNDROME: Environmental degradation through large-scale diffusion of long-lived substances
WASTE DUMPING SYNDROME: Environmental degradation through controlled and uncontrolled disposal of waste
CONTAMINATED LAND SYNDROME: Local contamination of environmental assets at industrial locations
One first step of validation is the data-based Syndrome diagnosis. Here we calculate from the structure of any Hazardous Functional Pattern the so-called Disposition towards the Syndrome, i.e. the degree to which the most important mechanisms and interactions could potentially become active in a specific region. One important aspect in the definition of this concept is time scale. Disposition usually depends on natural and socio-economic characteristics which are assumed to change slowly in time compared with the typical time scales of the Syndrome. In general it will be necessary to describe the complex conditions for the potential validity of the main interactions by a relatively large set of hierarchically ordered indicators, which can be illustrated by a decision tree showing the different hierarchical levels together with the logical relations between the basic indicators. An appropriate way of formalising this decision tree has to reflect the mostly qualitative nature of the description of the Syndrome mechanism, which implies the use of qualitative knowledge in the identification of Syndrome-prone regions too. Up to now the Fuzzy Logic concept [16] has appeared to be most fruitful in this context [17].
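A hierarchical indicator tree evaluated with fuzzy connectives can be sketched in a few lines. Everything in the sketch is an illustrative assumption, not the algorithm actually used in [17]: the indicator names, the piecewise-linear membership functions, the thresholds, and the choice of min/max as fuzzy AND/OR (one common convention).

```python
# Minimal fuzzy decision-tree sketch for a Syndrome disposition.  Indicator
# names, membership functions, thresholds and the min/max connectives are
# illustrative assumptions only.

def ramp(x, lo, hi):
    """Membership index rising linearly from 0 (at lo) to 1 (at hi)."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def fuzzy_and(*m):      # min as AND: one common fuzzy connective
    return min(m)

def fuzzy_or(*m):       # max as OR
    return max(m)

def disposition(npp, slope_deg, subsistence_share, primary_sector_share):
    # "natural dimension": fragile growth conditions OR high erosion risk
    natural = fuzzy_or(1.0 - ramp(npp, 100.0, 600.0),   # low productivity
                       ramp(slope_deg, 5.0, 25.0))       # steep terrain
    # "socio-economic dimension": subsistence farming AND primary-sector economy
    socio = fuzzy_and(ramp(subsistence_share, 0.1, 0.6),
                      ramp(primary_sector_share, 0.2, 0.7))
    # the disposition requires both dimensions to apply
    return fuzzy_and(natural, socio)

# fragile region with a subsistence economy -> high disposition
print(round(disposition(150.0, 20.0, 0.7, 0.8), 2))   # 0.9
# equally fragile region without the socio-economic conditions -> zero
print(round(disposition(150.0, 20.0, 0.05, 0.1), 2))  # 0.0
```

The second call mirrors the observation made below for fragile regions in industrialised countries: without the socio-economic dimension the overall membership collapses to zero.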
Figure 10.3: Network of interrelations for the Sahel-Syndrome-generating functional pattern (Sahel HFP).
As an example, the disposition towards the SAHEL SYNDROME will be discussed here. In this case one has to identify conditions for the following central interactions: (a) poverty-driven low-capital intensification and expansion of agriculture causes soil degradation, and (b) yield decline forces the poor rural population into further land use changes due to the absence of economic alternatives. In the case of this Syndrome the most important interactions (the “Syndrome kernel”) operate on the same spatial scale, so no up- or downscaling problem is involved. Later we will discuss an extension of the model which allows us to study the spatial interaction of dynamical structures resulting from the respective Hazardous Functional Pattern. Interaction (a) becomes probable if the considered region is fragile with respect to its natural conditions for agriculture (“natural dimension”), while interaction (b) becomes probable if there is a high proportion of subsistence farming in a primary-sector-oriented economy (“socio-economic dimension”). Here it is assumed that the temporal change in the natural as well as in the socio-economic dimension is slow compared with the time scale of the degradation-impoverishment spiral. This seems generally valid for the natural component (orography, climate, natural soil fertility, etc), while for the socio-economic conditions (e.g., the sectoral structure of the economy) change could in principle occur on time scales comparable with that of the Sahel-HFP dynamics – but the situation in almost all developing countries shows a remarkable constancy in the dependence on smallholder agriculture, including subsistence farming, for significant parts of the population. Therefore the combination of a fragile resource basis and the lack of alternatives for livelihood is the fatal background for this Syndrome’s dynamics.
Figure 10.4: Structure of the algorithm for calculating the disposition towards the SAHEL Syndrome, using elements of qualitative and quantitative modelling.
Figure 10.4 shows how these conditions are estimated on the basis of available global data sets and models. The latter include, e.g., for the natural dimension, the net primary productivity of natural vegetation (NPP) as a basic input for general growth conditions (here as a modelled value considering the present climate), and the orography as an indicator of erosion risk. For the socio-economic dimension, data on the importance of the primary sector and market statistics for food products were used [18]. In the sense of a Fuzzy Logic formalisation, all linguistic categories indicated by rectangles in Figure 10.4 are characterised by membership indices between 0 (the category does not apply to the region at all) and 1 (the category applies definitely to the region). Accordingly, the circles depict the appropriate fuzzy connections. The global result (at half-degree spatial resolution) of the algorithm described above is shown in Figure 10.5, presented as the membership index with respect to a high SAHEL SYNDROME Disposition. It can be seen that even very fragile regions in industrialised countries (e.g., the Western USA) are not prone to the Syndrome because the socio-economic conditions are missing, while, e.g., in the Sahel region, in other parts of West Africa, the North East of Brazil, the West coast of South America, Mongolia and the West of the Indian sub-continent, both the social and the natural dimension apply, which results in a high disposition. In those regions the Hazardous Functional Pattern could be active, so they are either endangered by an outbreak of the Syndrome or the Syndrome is already realised. To give an example of how these results of Syndrome diagnosis can be used in “classical” climate impact research, we show here the result of a sensitivity
Figure 10.5: Disposition towards the SAHEL SYNDROME under the present climate (truth value for “disposition is high”).
Figure 10.6: Climate sensitivity of the disposition towards the SAHEL SYNDROME.
study with respect to climate change [18]. In Figure 10.6 this sensitivity, calculated as the absolute value of the gradient of the SAHEL SYNDROME Disposition with respect to climate, is presented. Here one can identify which regions are in danger of becoming disposed towards the SAHEL SYNDROME under climate change. This calculation becomes possible because agricultural plant productivity, one important indicator contributing to the SAHEL SYNDROME Disposition, is based on climate-driven models of water availability for irrigation and plant productivity (see Fig. 10.4). The next step in Syndrome diagnosis is the determination of the so-called Intensity. Here we identify in which regions of the world a particular Syndrome is presently active. The method – strict deduction from the qualitative Hazardous Functional Pattern – is closely related to the question of bottom-up
identification of functional patterns (see the lower part in Fig. 10.2) and prognosis of Syndrome development. These methods are discussed in detail in the next section.
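The climate-sensitivity calculation mentioned above, i.e. the magnitude of the gradient of the Disposition with respect to a climate variable, can be sketched with a central finite difference. The disposition function below is a hypothetical stand-in, not the chapter’s fuzzy algorithm.

```python
# Finite-difference sketch of climate sensitivity as the magnitude of the
# gradient of a disposition with respect to a climate variable.  The
# disposition function is an invented stand-in for illustration.

def disposition(precip_mm):
    # toy assumption: disposition falls linearly as precipitation rises,
    # clipped to the membership range [0, 1]
    return max(0.0, min(1.0, 1.0 - precip_mm / 800.0))

def climate_sensitivity(precip_mm, h=1.0):
    """|d Disposition / d climate| via a central difference."""
    return abs(disposition(precip_mm + h) - disposition(precip_mm - h)) / (2 * h)

print(climate_sensitivity(400.0))   # on the ramp: 1/800 = 0.00125
print(climate_sensitivity(1200.0))  # disposition saturated at 0: sensitivity 0
```

Regions with a high value of such a gradient are exactly those at risk of becoming disposed under climate change, even if their present disposition is low.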
Syndromes II: Modelling, Spatial and Functional Scale of Validity

The approach to formulating a Syndrome presented in the previous section was purely intuitive. Though the assessment of the Disposition relates the basic features of a proposed Syndrome to data sets or models, it is as yet unclear whether the “over-cultivation of marginal land” (SAHEL-SYNDROME) actually is a solution of a functional pattern of the Earth System – and, if so, what this functional pattern actually looks like. Now there is ample literature relating observations of environmental degradation to this type of resource use, sometimes referred to as the Impoverishment-Degradation Spiral [19, 20]. There are reports on the general behaviour itself as well as on purely social or natural aspects. It is this multitude of observations which suggests that there is a common functional pattern bringing about the observed types of degradation. The following questions have to be answered, though:
■ How can we specify a Hazardous Functional Pattern in more detail, i.e. in terms of the actual relations involved? (This addresses the question of functional “resolution” or scale.)
■ How can we determine the geographical locations of the occurrence of both the pattern and the Syndrome?
■ How can we verify that this pattern brings about a Syndrome?
The most we can expect from any scheme of “validation” is a non-falsification in the Popperian sense, due to the fact that we can only specify necessary conditions for a Syndrome’s activity. The scheme to be used in this process has to take account of the high level of uncertainty about the Earth System’s processes. In the next section, we want to illustrate how these essential questions can be tackled by the use of a new qualitative or semi-quantitative modelling approach. The major procedural features of this approach and its differences from conventional, quantitative approaches will be discussed in the subsequent section. There we use the SAHEL-SYNDROME again as a prototypical example of how to apply this concept and what can be learned from the results.

Qualitative differential equations – formalising coarse functional scales

In this section we want to describe the general features of the mathematical tool underlying our methodology. We will use a simple example instead of giving detailed mathematical information, which can be found in the respective literature [21]. The example we are going to use is taken from the field of theoretical ecology, in particular population dynamics [22], extended by a simple management component.
In quantitative terms, logistic growth of a population P is usually described by a differential equation of the form:

G = dP/dt = α P (Pm − P),    (1)

with a climax population Pm and a maximal growth rate Gm = α Pm²/4, attained at the population P0 = Pm/2. The growth rate is an inverted-U-shaped function of the population P, shown as the black line in Figure 10.7. If we started with a small population P1, the growth law in Equation (1) would finally lead to the climax state Pm, following the typical S-shaped logistic growth over time. This is a stable equilibrium, i.e. the system stays there forever. In a second step we introduce an external perturbation to the system in the form of a constant withdrawal E, so that the new growth rate is G′ = G − E. In Figure 10.7, the resulting growth rate is shown for three different values of E:
■ E < Gm (dashed line): the stable equilibrium is shifted towards values of P smaller than Pm, i.e. Pm′ < Pm.
■ E = Gm (dotted line): the equilibrium shifts to exactly P0.
■ E > Gm (dot-dashed line): now the withdrawal is too large. No equilibrium exists, i.e. the species will become extinct.
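These behaviour classes can be checked numerically under assumed parameter values; the values of α, Pm and the time step below are purely illustrative.

```python
# Numerical check of the behaviour classes of Equation (1) under a constant
# withdrawal E, via simple Euler integration.  Parameter values are
# illustrative only.
def simulate(E, alpha=0.01, Pm=100.0, P=100.0, dt=0.01, steps=200_000):
    for _ in range(steps):
        P += (alpha * P * (Pm - P) - E) * dt
        if P <= 0.0:
            return 0.0           # population collapsed
    return P

Gm = 0.01 * 100.0**2 / 4         # maximal growth rate: alpha * Pm^2 / 4 = 25
print(simulate(E=0.5 * Gm))      # E < Gm: stabilises between P0 and Pm
print(simulate(E=1.5 * Gm))      # E > Gm: no equilibrium, population vanishes
```

For E = 0.5 Gm the run settles at the upper root of α P (Pm − P) = E, consistent with the dashed curve in Figure 10.7; for E = 1.5 Gm the growth deficit is at least E − Gm everywhere, so the population must vanish.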
The dynamical behaviour of the system depends on the actual values of the parameters α and Pm, but the structure of three different behaviour classes appears to be a general property of logistic growth. These properties should therefore be obtainable from a purely qualitative description as well, which would actually prove that the existence of three types of solutions is a general feature. The concept of qualitative differential equations, as implemented in the QSIM package developed at the University of Texas at Austin, allows the logistic growth to be represented in a rather general way. In the first step, the relevant variables are represented by so-called landmark values, i.e. values where some kind of qualitative change in the relations between these specific variables and other system elements is assumed to take place. Taking the variable population from the example above, these values are 0, P0 and Pm with 0 < P0 < Pm. It is important to stress that for a qualitative differential equation analysis it is not necessary to know the actual values of these landmark values, but only their existence and relative order. For the growth rate G the landmark values are 0 and Gm > 0.
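This ordinal information alone, landmark values with their relative order plus the knowledge that Gm is the maximum of G, already suffices to recover the extinction class identified above. The sketch below is a drastic simplification of what QSIM does by symbolic state enumeration, written only to make the style of reasoning concrete.

```python
# Sign of dP/dt = G(P) - E derived from ordinal knowledge of G only:
# G(0) = 0, G(Pm) = 0, and G attains its maximum Gm at the landmark P0.
# A drastic simplification of QSIM's symbolic analysis, for illustration.

REGIONS = ["(0,P0)", "P0", "(P0,Pm)", "Pm"]

def qualitative_sign(region, E_class):
    if E_class == "E>Gm":
        return "-"                             # G(P) <= Gm < E for every P
    if E_class == "E=Gm":
        return "0" if region == "P0" else "-"  # G < Gm away from the vertex
    # E < Gm: where G(P) crosses E cannot be decided from ordinal
    # information alone, so both signs remain possible and the behaviour
    # tree branches at this point
    return "branches"

# For E > Gm the population decreases in every region, so extinction is the
# only qualitative behaviour in this class.
print([qualitative_sign(r, "E>Gm") for r in REGIONS])
```

No numbers enter the derivation, only the order of the landmarks, which is exactly the point of the QDE formalism.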
Figure 10.7: Basic relation for the didactic model to explain the qualitative modelling approach. If subject to a constant withdrawal E, the U-shaped relation between the population P and the growth rate G gives rise to three different types of behaviour. Starting from the climax state P = Pm, the population either stabilises at a level beyond P0, if the withdrawal is less than Gm (dashed line), or right at P0 if E = Gm (dotted line). In case of E > Gm, the population finally vanishes (dot-dashed line).
Its magnitude and its direction of change constitute the qualitative values of a variable. The magnitude is given either by a landmark value or by an open interval between two adjacent landmark values. The direction of change is specified as either positive (encoded by ↑), steady (°) or negative (↓). In this way, a decreasing population between P0 and Pm would be written as ((P0, Pm), ↓). A specific qualitative state is then given by the combination of the qualitative values of all variables. Within the second step of formulating the qualitative model, the relations between the variables are specified in terms of constraints. In the case of logistic growth one can make use of the so-called U-constraint:

((U- P G (P0 Gm)) (0 0) (Pm 0)).    (2)

This means: for populations below P0 the growth rate G is a monotonically increasing function of P; for values of P above P0 it is a monotonically decreasing function. At P = P0 the value of G is equal to Gm. Furthermore, for P = 0 and P = Pm the growth rate is zero. This corresponds to a general formulation of the inverted-U-shaped relation sketched in Figure 10.7. The syntax used in (2) is the one implemented in the QSIM software package. By specifying all the relations in this way, one can easily use the package to obtain all the solutions compatible with these constraints, i.e. the usage and application of the QDE concept is rather straightforward and does not require much programming skill. It is important to note that the algorithm does not use any numbers, but works by purely symbolic manipulation. A graphical representation of the results is given in Figure 10.8, which demonstrates that there are three different dynamics compatible with the qualitative constraints on the relations between the system’s elements. This is
Figure 10.8: Qualitative behaviours of the simple didactic model for a general logistic growth of the population dynamics. Each rectangle describes one qualitative state of the system. The black arrows point to possible successor states. Branches of the behaviour tree end either in stable equilibrium states or in “transition states” where the trajectory leaves the definition space of the model. For detailed explanation and discussion see text.
in complete agreement with the expectations and the results of the quantitative exercise outlined above. As in the quantitative exercise, there is one case where the population collapses and there are two stable states. However, this result is much more general than the previous information about the quantitative system, since much less information about the shape of the functions is used. Table 10.2 summarises the properties of the qualitative modelling approach by QDEs in comparison with conventional modelling by ordinary differential equations. With respect to the relation between QDEs and the respective classes of ODEs (including members which produce complex dynamics), it is possible to prove that all solutions of the ODEs are represented in the qualitative behaviour tree generated by the QDE algorithm [21]. Complex ergodic systems result in arbitrary sequences of qualitative states, as was proven by Dordant [23]. What do we learn from this kind of qualitative modelling exercise? First of all, we learn that any specific inverted-U-shaped function relating the population P and its growth rate G brings about one of the three identified behaviours. It might thus be concluded that the observation of one behaviour in Region 1 and of another behaviour in Region 2 may well be due to the same qualitative properties of the mechanisms behind the observations. This addresses the issue of patterns of interactions and of regional similarities in terms of functional properties. Secondly, we learn from the structure that the event at time T1 (third column of states in Fig. 10.8) uniquely determines
Table 10.2: Comparison of important features of conventional modelling with ordinary differential equations and qualitative modelling using QDEs
■ Values: numbers on the real axis, versus landmark values specifying distinct values where the relations to other variables change qualitatively (e.g. P0); a variable takes either a landmark or an interval between adjacent landmarks, together with its direction of change (↑, ↓, or °).
■ Relations: real-valued functions modelling the interrelations between the different variables, versus qualitative features only, e.g. A is monotonically increasing in B, or A is “U-shaped” in B with B0 as turning point.
■ Model: a system of differential equations, versus a corresponding number of qualitative “constraints” relating the state variables and their changes.
■ Solution: a single solution, explicit in time, versus the entire tree of all possible solutions compatible with the constraints, with time as a qualitative variable specified in terms of events of qualitative system changes.
the final outcome. For example, if at P = P0 the population is still decreasing, it is going to vanish in any case – assuming that the structure does not change and no external action is taken. If this dynamical property described a real system, it might be called “non-sustainable dynamics” by virtue of the rather general property of irreversible system destruction. In such cases of specific systemic properties, the normative aspect of the identification of non-sustainable trajectories is less important than in situations where a purely external valuation is applied. So far, purely qualitative modelling has been described. However, if some quantitative information is also available, it seems sensible to make use of it. Quantitative information can come in two different ways:
■ quantitative upper and lower limits for some or all of the landmarks might be available;
■ some quantitative information about the functions appearing in the QDE, e.g., in the form of upper and lower envelopes, might be at hand.
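Both uses of such quantitative information, excluding behaviours and refining bounds, can be sketched with elementary interval arithmetic. The numerical intervals below are illustrative only.

```python
# Interval sketch: quantitative bounds on the withdrawal E and the maximal
# growth rate Gm can exclude qualitative behaviours, and, conversely, an
# observed behaviour can refine the bounds.  Numbers are illustrative only.

def collapse_possible(E, Gm):
    """Collapse needs E >= Gm for some compatible values, i.e. max(E) >= min(Gm)."""
    return E[1] >= Gm[0]

print(collapse_possible((12.0, 14.0), (15.0, 30.0)))   # False: ruled out
print(collapse_possible((12.0, 40.0), (15.0, 30.0)))   # True: cannot be excluded

def refine_E_given_collapse(E, Gm):
    """If collapse was observed, the lower bound of E rises to at least min(Gm)."""
    return (max(E[0], Gm[0]), E[1])

print(refine_E_given_collapse((12.0, 40.0), (15.0, 30.0)))  # (15.0, 40.0)
```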
In our example, one possibility would be that intervals for the amount of harvesting and for the maximum growth rate are known. If, e.g., for all values compatible with those intervals the maximum growth rate Gm is larger than the amount of harvesting E (e.g., E = (12,14), Gm = (15,30)), then the population collapse behaviour can be ruled out. In addition to this possibility of excluding some behaviours, interval analysis also works the other way round: depending on the qualitative behaviour, the intervals can be refined. If, for instance, the population collapse behaviour is
possible, then under the condition that this behaviour occurred, one knows that the lower bound of E must be greater than or equal to the lower bound of Gm. The numerical difficulty increases, of course, drastically if we move from topologic time to metric time, i.e. if one wants to know something about the quantitative meaning of the stages in the dynamics. Apart from very direct interval-arithmetic approximations that can yield crude estimates based on the mean value theorem, there are several different methods to tackle this problem, which is very similar to the deduction of the reachable set of a differential inclusion ([24]; for applications within Global Change research see Tóth et al. [25], Petschel-Held et al. [26] and Bruckner [27]). This is an area of intensive current research, in which we are testing a Hamilton-Jacobi-type method [28] and a level-set approach.

General hazardous functional patterns and detailed local case studies

In its original version, the SAHEL-SYNDROME was designed to describe the situation of pure subsistence agriculture on marginal sites [8, 15, 17, 18]. The smallholder agriculturalists described by the mechanism (Fig. 10.9) do not have any alternative means of income and are thus forced to use, and finally overuse, the marginal natural resources of their environment. This includes pasturing, farming, the collection of firewood, etc. Due to the lack of alternatives, the smallholders intensify their agricultural activity in the case of a reduced agricultural yield, i.e. increased poverty (line 2 in Fig. 10.9). However, these statements do not describe mechanisms, but solely outline observed developments over time. From our point of view, mechanisms are represented by more general statements on relations between variables. In the case of poverty and intensification such a relation might have the form: the higher the poverty, the higher the intensification.1
The reason why one would like to do so is obvious: if we use the generalised mechanism, we not only have information on what will happen if poverty is increasing, but also on what occurs if it is decreasing! This will play an important role when assessing the different dynamical behaviours within a functional pattern. The important point here is that we do not know exactly what this relation between poverty and the intensification of agriculture looks like; indeed, we do not even claim that this relation is quantitatively the same in different regions. Of course, such a potential difference also holds for the other relations, e.g., the increased loss of soil quality due to increased agricultural activities (on marginal sites). In the latter case, the idea of regional “difference in similarity” can be illustrated as follows. The “geographer’s argument” states that any two regions differ in their specific form of human-nature interactions. Does this statement actually
1 Here we neglect the fact that too high levels of poverty are actually related to a decrease of intensification due to a loss of labour force and capital, e.g., seeds, stock, etc.
222 THE SYNDROMES APPROACH TO SCALING
Figure 10.9: Core mechanism of the original version of the SAHEL-SYNDROME. The symbols attached to the connecting lines uniquely encode qualitative relations as used within the concept of qualitative differential equations (QDEs). Their meaning is explained in the Appendix.
mean that no two regions share any common features? Certainly not, as otherwise any attempt to understand human use of natural resources would have to start all over again for each newly investigated region, and rather general theoretical claims concerning, e.g., the relation between the length of the fallow period and soil fertility would not be applicable. We thus assume that the geographer’s argument might well be true if applied to “quantities”, but that it does not necessarily apply to qualities. In other words, the relationship between “fallow period” and “loss of fertility”, measured, say, in nitrogen loss in kg/year, might be quadratic in one region and logarithmic in another. Yet it is monotonically increasing in both! In this sense both regions belong to the same class: they exhibit a monotonically increasing relation between “fallow period” and “loss of fertility”. This idea of class identification can be extended by abstracting the rather specific variable “fallow period” to the more general notion of intensity of agriculture. This generalised, abstracted variable comprises not only the issue of fallowing, but also, e.g., livestock density, fertiliser input, ploughing, etc. Analogously, one can use the notion of soil degradation as an abstraction of “loss of fertility”. These abstractions have to be order preserving, i.e. two regions with a certain order of fertility loss, say region 1 has a higher loss than region 2, retain this order within the abstracted variable². Thus all the regions belonging to the class with a monotonic increase between fallow period and loss of fertility also belong to the class with the same type of relation between intensity of agriculture and rate of soil degradation. Yet this
² If several processes contribute to soil degradation, one might use soil degradation as a (weighted) aggregate of the various aspects. The mapping from the weighted aggregate to the abstracted variable then has to preserve the order.
SCALING IN INTEGRATED ASSESSMENT 223
class also contains regions where a monotonically increasing relation between, say, goat stocking density and soil compaction is valid. Line 1 in Figure 10.9 encodes exactly this type of relation, which can formally be treated in this generality within the concept of qualitative differential equations (QDEs).
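The class argument above, two regions whose quantitative relations differ but share the sign of the relation, can be sketched as follows. This is a minimal illustration only: the quadratic and logarithmic forms and all numbers are hypothetical, and only the monotonicity check reflects the qualitative concept.

```python
import math

# Hypothetical regional relations between fallow-period shortening and
# annual fertility loss; the quantitative forms differ per region.
def loss_region_1(x):          # e.g., quadratic in region 1
    return 0.5 * x ** 2

def loss_region_2(x):          # e.g., logarithmic in region 2
    return 3.0 * math.log(1.0 + x)

def is_monotone_increasing(f, grid):
    """Check the qualitative property shared by both regions."""
    values = [f(x) for x in grid]
    return all(a < b for a, b in zip(values, values[1:]))

grid = [i * 0.1 for i in range(1, 50)]
# Both regions fall into the same qualitative class (M+), although the
# concrete quantitative relations are different.
same_class = (is_monotone_increasing(loss_region_1, grid)
              and is_monotone_increasing(loss_region_2, grid))
print(same_class)  # True
```

The class is thus defined purely by the sign of the relation, never by its functional form.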
Figure 10.10: Scheme of generalisation used to formulate a class of civilisation-nature interactions. Case studies might be used to specify the regionally valid relations between relevant variables. Abstractions of these variables into more general concepts, e.g., Intensity or Soil Degradation, are then used to specify qualitative relations between the abstracted variables. In the example given, the qualitative relations, indicated by the symbols U- and M+ in the cause-effect scheme on the lower right-hand side, contain the (hypothetical) situation in Chad as well as in Malaysia and possibly Peru.
This idea of generalisation and class formation, which is summarised in Figure 10.10, lies behind the formalisation of a Hazardous Functional Pattern. In contrast to previous interpretations [29], the network of qualitative, general relations depicted in Figure 10.9 does not directly represent a Syndrome³, but rather a model of a Hazardous Functional Pattern, which might bring about a non-sustainable development. As such, this specification is completely legitimate. The question is whether one can formulate a set of qualitative models, and thus classes, which are:

■ detailed enough to include important details of the processes involved, but which are
■ general enough to incorporate all the important aspects of sustainable development into a limited set of models.

³ Formally, a qualitative differential equation represents a class of ordinary differential equations.
Both questions are related to each other by the issue of validation: can we find enough regions in the world which belong to this class? The direct proof – we know all the mechanisms of a region in sufficient detail to conclude whether it is a member of the class or not – will be exceptional. Therefore an indirect approach is chosen, whose scheme is sketched in Figure 10.11. On the one hand, the formal analysis within the QDE concept allows one to specify all qualitative time behaviours of the variables which are compatible with the functional pattern (step 4 in Fig. 10.11). On the other hand, there are countless observations – quantitative and qualitative. The latter might comprise statements like “the landslide frequency has increased since the 1950s, but declined in recent years”. Thus, if an observation is reconstructed by at least one of the model behaviours (step 5 in Fig. 10.11), the actual mechanisms in the region considered are free of contradiction with the pattern described by the model. We might say that the applicability of the pattern and its mechanisms for this region is not invalidated. If this can be shown for enough regions, we might well claim that the pattern of mechanisms is globally relevant.
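This non-invalidation step can be sketched as a subsequence check, with an invented behaviour tree and an observation encoded as sequences of qualitative directions. All names and sequences below are hypothetical; the real evaluation is performed by QSIM on the full qualitative state, not on single-variable strings.

```python
# Hypothetical behaviour tree of a qualitative model: each path is a
# sequence of qualitative directions for one observed variable.
model_behaviours = [
    ("inc", "inc", "std"),   # monotone rise, then levelling off
    ("inc", "std", "dec"),   # rise, plateau, decline
    ("dec", "dec", "dec"),   # monotone decline
]

# Qualitative observation: "landslide frequency increased since the
# 1950s, but declined in recent years".
observation = ("inc", "dec")

def is_subsequence(obs, path):
    """Is obs reproduced (in order) somewhere along this behaviour path?"""
    it = iter(path)
    return all(o in it for o in obs)

# Step 5: the pattern is not invalidated if at least one model behaviour
# reconstructs the observation.
not_invalidated = any(is_subsequence(observation, p) for p in model_behaviours)
print(not_invalidated)  # True
```

Here the second path reconstructs the observation, so the region does not contradict the pattern.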
Figure 10.11: General scheme of case study integration into a common class of causes and effects.
A validation in this sense was performed in Petschel-Held et al. [30], where we could show that a functional pattern similar to the Sahel HFP was able to reproduce the main qualitative observations of (almost) all case studies from the DFG-Programme “Environmental Perception and Coping Strategies in Endangered Ecosystems of the Developing World”. This iterative procedure of formulation and validation of functional patterns is structurally very similar to the concept of “strategic cyclical scaling” (SCS) as formulated by Root and Schneider [1] (see also chapter 9 of this volume). They propose a continuous cycling between large-scale studies (dealing with correlations of macro-variables) and small-scale studies (dealing with the investigation of mechanisms) to obtain a macro-theory based on sound causal relationships instead of statistical coincidence (which is the condition for any prognostic ability). In our procedure of case study integration, the functional large-scale or macro-level is the general Hazardous Functional Pattern, consisting of aggregated state variables (Symptoms) and their very generally characterised interactions. Switching iteratively between “large-scale studies” (i.e. the construction and mathematical evaluation of the actual HFP hypothesis) and “small-scale studies” (i.e. the systematic interpretation of different aspects of local case studies, resulting in corrections of the large-scale hypothesis) yields, at least for the given scientific knowledge, a consistent functional “macro-pattern”. The formulation of an HFP hypothesis by carefully interpreting detailed case studies is sometimes also referred to as process tracing, which is now of increasing relevance within the political sciences [31, 32]. Note that we do not identify the derived functional pattern as a Syndrome per se.
The difference is that a Syndrome is understood as a clinical picture of civilisation-nature interaction, whereas the qualitatively defined patterns of interactions are assumed to have a more general validity. We now demonstrate how this more general formulation of patterns can bring about a Syndrome. This is strongly related to the question of how a Syndrome is actually engendered.

Example: Time Behaviour of the Local Sahel HFP. The cause-effect scheme of Figure 10.9 already contains most of the information needed by QSIM for a formal analysis. The qualitative multiplication between agricultural intensity and quality of soils to obtain the yield simply states:

■ If one of the two factors is zero then the qualitative product (yield) is equal to zero, and
■ the directions of change are analysed according to the product rule of differential calculus, i.e. (uv)′ = u′v + uv′.
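These two rules can be sketched in sign algebra over {-1, 0, +1}. This is an illustration only, not the QSIM implementation, which propagates ambiguity as sets of qualitative values rather than collapsing it as the naive sign sum below does.

```python
# Qualitative values as signs: -1 (negative/decreasing), 0, +1.
def sign(x):
    return (x > 0) - (x < 0)

def qmult_mag(qmag_b, qmag_c):
    """Qualitative magnitude of A = B * C: zero if either factor is zero."""
    return qmag_b * qmag_c

def qmult_dir(qmag_b, qdir_b, qmag_c, qdir_c):
    """Direction of change of A = B * C via the product rule:
    sign(A') = sign(B*C' + C*B')."""
    terms = qmag_b * qdir_c + qmag_c * qdir_b
    # With pure signs the two terms can cancel; a full QSIM treatment
    # would return the set {-1, 0, +1} in that ambiguous case.
    return sign(terms)

# Yield = intensity * soil quality: both positive, intensity rising,
# soil quality falling -> magnitude is positive, direction ambiguous.
print(qmult_mag(+1, +1))            # +1: yield is positive
print(qmult_dir(+1, +1, +1, -1))    # 0: the naive sign sum cancels
```

The cancellation in the last line is exactly where QSIM branches into several possible successor states.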
Yet some more information is added to the scheme. In particular, we specify two landmark values, “maximal sustainable” (ms) and “existential” (ex), for the intensity of agriculture and poverty, respectively. For the intensity we assume that for values less than ms soils can regenerate, whereas for values larger than ms soil degradation takes place. Similarly for poverty: if it is below ex, no intensification of agriculture is performed; intensification takes place only when poverty exceeds the existential level. Figure 10.12 depicts the qualitative time behaviours of the relevant variables within the core mechanism of the Sahel HFP as displayed in Figure 10.9. We have chosen as the initial condition an environmentally positive situation (increasing soil quality) that is nevertheless socially stressed (existential poverty). This stress has not yet led to a massive increase in agricultural intensity, i.e. intensity is below its maximal sustainable level. This situation corresponds to the case where a change in the terms of trade, population growth, social marginalisation, etc. have induced high levels of poverty.
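The landmark rules can be illustrated as simple qualitative update rules. The numbers below are placeholders only: the qualitative model assumes merely the existence and ordering of the landmarks, never their values.

```python
# Qualitative directions encoded as strings for readability.
INC, STD, DEC = "inc", "std", "dec"

def soil_quality_direction(intensity, ms):
    """Soils regenerate below the 'maximal sustainable' landmark ms
    and degrade above it (steady exactly at the landmark)."""
    if intensity < ms:
        return INC
    if intensity > ms:
        return DEC
    return STD

def intensity_direction(poverty, ex):
    """Intensification is driven only by poverty above the
    'existential' landmark ex."""
    return INC if poverty > ex else STD

# Hypothetical numbers purely for illustration; the qualitative model
# uses only the ordering of the variables relative to the landmarks.
ms, ex = 1.0, 1.0
print(soil_quality_direction(0.4, ms))   # 'inc': below ms, soils recover
print(intensity_direction(1.5, ex))      # 'inc': existential poverty
```

The initial condition described in the text corresponds exactly to this pair: soil quality increasing, intensity below ms, poverty above ex.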
Figure 10.12: “Behaviour tree” for the original Sahel HFP of causes and effects. Boxes and arrows indicate the qualitative states as explained by the legend on the right-hand side. Time runs from left to right. Note that in some cases more than one successor is possible; e.g., for the initial state QSIM identifies seven possible successor states from the model.
The behaviour tree in Figure 10.12 represents a restricted projection of possible evolutions within the Sahel HFP of interaction mechanisms between humankind and nature. It can be seen that there exist basically four classes of possible outcomes of the time evolution of this functional pattern. An outcome is defined as a final state in the model and is realised either as a fixed point (or quiescent state, as it is called within QSIM) or as a transition state, where one or more variables leave the domain for which the model is valid. The latter is, e.g., true for the states indicated as Resource Focused in Figure 10.12, where the quality of soil reaches its “natural” level and is still increasing: the model does not give any specification of what is going to happen afterwards. These outcomes can only be expected in case of rather productive soils with a rapid regeneration rate. The other two types of transition states are described as Acceptable and Catastrophic. In the first case agriculture is on a low level or abandoned, and soils can regenerate because the income is still large enough, i.e. poverty remains below the existential level. Again this is due to productive places, but it might also be realised using highly efficient and soil-preserving agricultural techniques. Formally this corresponds to a value of ms high enough not to be reached within the simulation⁴. The outcome characterised as catastrophic, and the dynamic behaviour leading to it, actually represent what is understood as the SAHEL-SYNDROME: existential poverty leads to a lasting intensification which strongly damages the natural resources. Due to this damage there is no chance to increase the income, i.e. reduce the poverty, sufficiently. The cycle starts all over again … Taking the catastrophic outcomes as the Syndrome as such, we can assess the question of how it is engendered. If we look at the two intermediate states (shaded within the tree), we observe that the Syndrome evolves from the (neutral) initial state if the intensity of agriculture reaches the landmark value ms before poverty is reduced below its existential level.
Though this can happen purely due to the increase of the intensity, it might well be enforced by natural events like droughts or floods, which lower the actual value of ms: agricultural activities that were sustainable before the drought might damage the natural resources during the extreme event. This rather detailed discussion of the model results of the simple Sahel HFP should illustrate the type of results produced by a qualitative model as well as its applicability.

The extended Sahel HFP and spatial distribution of time behaviours

With the introduction of the HFP concept, some new forms of spatial aspects have to be considered compared to traditional modelling. Let us first assume that the HFP consists only of local interactions between the contributing trends (as in the example given in the preceding paragraph) and that it is separated from further HFPs (which need not be the case: the Favela HFP may, e.g., interact closely with the Sahel HFP via migration). Now there may be a large region where the general HFP is valid, but in different sub-regions different trajectories (branches of the behaviour tree) may be realized. We
⁴ This is a qualitative argument, though. The simulation is purely symbolic and does not assume any numbers, neither for the variables nor for the landmarks. It just assumes their existence and constancy.
will call the subclass of the quantitative differential equations (with respect to the QDE) which produce a particular qualitative behaviour a “Detailed Functional Pattern” (DFP). It is important to note that a DFP is only specified by its behaviour and cannot be described in terms of a more detailed qualitative cause-effect scheme (at least for the time being; methods are progressing). This heterogeneous time behaviour will occur when the general mechanism is valid all over the region but the detailed realizations of the symptoms and interactions differ significantly (due to different natural conditions, cultural or technological particularities, etc.). This may, e.g., lead to different outcomes of the race between yield enhancement and soil degradation by intensification measures. Therefore, the appropriate spatial scale of observation is that of a single DFP; otherwise syndromatic trajectories may be masked by adjacent regions which follow acceptable branches of the same HFP. In a preliminary study we investigated the behaviour of the simple Sahel HFP when a non-local interaction is introduced. Here we chose the land-use/regional-climate interaction via changes in albedo and evapotranspiration – an effect which is discussed controversially in the literature as a reason for the acceleration of desertification processes (e.g., Voortman [33] and Le Houérou [34]). To include this hypothesis in the Sahel HFP, we assume that the state of resource degradation in all sub-regions determines the change of the common regional climate, which then influences the yield via decreased rainfall etc. Here we can use the adiabatic approximation, due to the fast adaptation of the regional climate to the surface properties compared with the long time scale of, e.g., soil degradation processes. This results in a coupling by a simple qualitative function relating resource degradation to regional climate.
The coupling is formulated for two regions by introducing an additional “macro”-variable, “Regional Climate”, which is an increasing function of the resource quality (here represented by “Quality of Soils”) in both regions. This macro-variable feeds back on both local yields which now depend on the local resource quality, the local agricultural activity and the regional climate influence. The extended functional pattern is displayed in Figure 10.13 (for the explicit mathematical definition of the different relations see the Appendix).
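A hedged numerical sketch of this coupling follows. The functional forms and all numbers are invented placeholders; the qualitative model assumes only the monotone (M+) relations, namely that the climate macro-variable increases with soil quality in both sub-regions and that each local yield increases with its local factors and the shared climate.

```python
# "Regional Climate" as a macro-variable: an increasing function of the
# soil quality in both sub-regions (the qualitative model only assumes
# this monotonicity, not the averaging form used here).
def regional_climate(soil_q1, soil_q2):
    return 0.5 * (soil_q1 + soil_q2)

# Local yield depends on local intensity, local soil quality, and the
# shared regional climate (again: the product form is illustrative).
def local_yield(intensity, soil_q, climate):
    return intensity * soil_q * (1.0 + climate)

clim = regional_climate(0.8, 0.3)
y1 = local_yield(1.2, 0.8, clim)   # sub-region 1
y2 = local_yield(1.2, 0.3, clim)   # sub-region 2: poorer soils, but the
                                   # same shared climate influence
print(y1 > y2)  # True
```

The essential structural point survives any choice of monotone forms: degradation in one sub-region depresses the shared climate variable and hence the yield of the other sub-region as well.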
Figure 10.13: Enhancement of the simple Sahel-HFP introducing two sub-regions coupled by a non-local landcover-climate interaction (for the definition of relations see Appendix).
We want to remark on one particularity of the introduced non-local interaction: in case of two similar sub-regions (identical DFPs with respect to the local model discussed in the previous section), identical initial conditions and a symmetrical non-local interaction, the resulting time behaviour for both regions is identical and can be described with the local model. This is possible because in this case the climate interaction can be assumed to be integrated in the monotonic relation between resource quality and yield (see Fig. 10.13). Therefore the proposed enhancement of the pattern by a non-local interaction is structurally the most moderate step. This is in contrast to enhancements implying new relations between the variables which are not already represented in the local dynamics. Evaluating the functional pattern in Figure 10.13 using the QSIM algorithm yields the behaviour tree for one of the two sub-regions (assuming appropriate behaviour of the second region) as displayed in Figure 10.14. Due to the qualitative structure of the non-local interaction, the result for one sub-region would not change for an arbitrary number of coupled sub-regions. To keep
Figure 10.14: Resulting behavior tree for the enhanced Sahel HFP (two sub-regions coupled by a non-local landcover-climate interaction) for one of the sub-regions. For a detailed legend see Figure 10.12.
the result more transparent, we omitted the “resource focused” states (see Fig. 10.12), which are somewhat unrealistic, and observe that:
■ all trajectories which are possible in the case of only local interactions (see Fig. 10.12) can still be realized in each region,
■ some new (cyclic) behaviours can occur, and therefore
■ “secure” states in the case of the local HFP (i.e. non-bifurcating trajectories ending in an acceptable outcome) may become insecure (shifting to the non-sustainable paths) in the case of strong resource degradation in the adjacent region.
So we can conclude that, in case of increasing evidence for the land-use/regional-climate interaction as a relevant aspect of the degradation-impoverishment mechanisms, the enhancement of the simple local Sahel HFP with respect to non-local effects is necessary, because otherwise:

■ the identification of a region as governed by the pattern will fail in several cases, and
■ misleading conclusions about further possible qualitative developments might be drawn, resulting in wrong policy advice.
The model enhancement explained above gives an example of one further iteration in the development of sound HFPs as elements for understanding Global Change on an intermediate functional scale.
Concluding Remarks

In section 2 a hypothetical, “idealistic deduction” of the concepts of the Syndrome approach was performed (Fig. 10.1) in order to illustrate how the aspects of spatial and temporal scale are closely related to the detail of functional description (functional scale) of Global Change. The decomposition of the complex Earth System into Hazardous Functional Patterns takes these aspects into account from the beginning (formal methods such as canonical transformation and adiabatic approximation were mentioned). In a next step a more inductive method to obtain HFPs on an intermediate functional scale was introduced, and a set of 16 Syndromes (Table 10.1) as non-sustainable time developments of the HFPs was given. As an example of the application of the adiabatic approximation to extract the dynamic properties relevant for GC, the Disposition concept was explained and applied to the SAHEL SYNDROME (Figs. 10.4 & 10.5). In section 3 we gave a more formal method by which these HFPs can be obtained from a large number of detailed case studies (Fig. 10.11). An iterative procedure of case study generalisation which is structurally similar to the Strategic Cyclical Scaling approach [1] was introduced. A central concept in this systematised procedure to obtain HFPs is that of qualitative differential equations (QDEs): they allow one to subsume different forms of interactions, as observed in different case studies, under classes of relations (characterised by general properties like monotonicity). As a “didactic” example of the application
of this concept, a simple population dynamics under constant yield was discussed (Figs. 10.7 & 10.8) and a systematic comparison between usual modelling concepts and the QDE concept was given – including the “costs” of modelling on a coarser functional scale in terms of loss of detail in prognosis (Table 10.2). Then two versions of the Sahel HFP were elaborated:

■ A first version based on local interactions only (Fig. 10.9), illustrating in detail the aspect of functional scaling in the Syndrome concept (Fig. 10.10). On the level of the obtained intermediate functional scale, sustainable and non-sustainable qualitative trajectories could be identified (Fig. 10.12; for a detailed discussion of policy option development based on these results see Petschel-Held et al. [15, 30]). Additionally, we could formulate rules for the identification of appropriate scales of observation, which are defined by the spatial extent of Detailed Functional Patterns (DFPs), which characterise the conditions for the validity of a single trajectory of the HFP.
■ An enhanced version of (i) which considers an additional non-local interaction. Here we included an interaction between land cover and regional climate (Fig. 10.13), which is discussed controversially as one reason for desertification processes. As a result we obtained that all trajectories which were possible in the case of only local interactions could still be realised in each sub-region, but that some new (cyclic) behaviours could occur. Therefore “secure” states in the case of the local HFP could become insecure. This enhancement is one iteration in the general scheme of integration displayed in Figure 10.11 – new evidence from local and regional case studies (step 1) suggests the relevance of the land-cover/regional-climate interaction based on observations of relevant variables (step 2). The HFP is modified according to the new functional hypothesis (step 3) and evaluated with respect to all compatible dynamic behaviours (step 4), which then have to be compared with the available observations from all case studies (step 5).
The results of the enhanced model suggest a further facet of the Syndromes Approach to Scaling besides the functional scaling/generalisation aspect already discussed: the approach might be used to determine a hierarchy of non-local interactions with respect to their influence on the dynamics of the local sub-models:

■ the preservation of the “local” trajectories without new behaviours (as in the given example for particular conditions),
■ the preservation of the “local” trajectories and the appearance of new behaviours (as in the given example in general),
■ the generation of a completely new behaviour tree.
In the first case the added non-local interaction produces no new dynamic behaviours – the local analysis is sufficient. In the second case two different versions have to be considered: the new behaviours may mix up former local sustainable and non-sustainable trajectories (as in the example), or not. In the latter case parts of the local analysis can be maintained; otherwise the application of the local analysis leads to severe misinterpretations with respect to sustainability questions. In the third case a totally new analysis is necessary. This classification of the additional complexity induced by non-local interactions shows that the concept of qualitative differential equations in the framework of HFP generation may also contribute to this particular question of scaling.
Appendix A: Important Terms of the Syndrome Concept

Symptoms: spatial/functional aggregates of detailed variables describing Global Change which allow for a systematization of their relations.
Hazardous Functional Patterns (HFPs): sub-systems (Symptoms and their functional relations) of the global system producing non-sustainable trajectories (among others).
Syndromes: typical non-sustainable trajectories/development paths of sub-systems of the global system (HFPs).
Detailed Functional Patterns (DFPs): concretizations of an HFP, producing exclusively a class of non-sustainable trajectories (a Syndrome).
Disposition towards a Syndrome: degree to which the conditions for the syndrome’s most important mechanisms and interactions are fulfilled.
Functional Scale: detail of functional description (degree of consideration of detailed mechanisms and related variables) – related to spatio-temporal scale, but not identical.
Appendix B: Symbols used for the Graphical Representation of Qualitative Models

In order to have an intuitively understandable way to describe qualitative models, we introduce a few special symbols to denote the functional relationships in a qualitative model. Here, qdir stands for the qualitative direction of a variable, i.e. increasing/steady/decreasing/unknown, and qmag denotes its qualitative magnitude, i.e. its state relative to qualitatively important landmark values (e.g., 0).
This encodes a qualitative addition of B and C to yield A. A qualitative addition is specified, e.g., by the following properties:

■ The directions of change are added, i.e. if qdir(C) > 0 and qdir(B) > 0 then so is qdir(A); yet if qdir(C) > 0 and qdir(B) < 0 then qdir(A) can be either positive, negative, or zero.
■ If qmag(B) = 0 and qmag(C) = 0, then qmag(A) = 0.
■ If qmag(B) = 0 and qmag(C) ≠ 0, then qmag(A) = qmag(C).

There is no qualitative subtraction. Subtraction, i.e. A = B − C, is expressed as a qualitative addition, i.e. A + C = B.
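As an illustration (again sign algebra over {-1, 0, +1}, not the actual QSIM code), the ambiguity in the first property can be made explicit by returning sets of possible signs:

```python
# Qualitative values as signs: -1, 0, +1; ambiguity is a set of signs.
def qadd(q_b, q_c):
    """Qualitative sum of two signs (applies to qdir as well as qmag).
    Returns the set of possible signs of b + c."""
    if q_b == 0:
        return {q_c}
    if q_c == 0:
        return {q_b}
    if q_b == q_c:
        return {q_b}
    # Opposite signs: the sum may be positive, negative, or zero.
    return {-1, 0, +1}

print(qadd(+1, +1))   # {1}: both increasing, so the sum increases
print(qadd(+1, -1))   # {-1, 0, 1}: ambiguous
print(qadd(0, -1))    # {-1}: a zero operand passes the other through
```

The ambiguous case is what forces a qualitative simulator to branch rather than commit to a single successor.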
This encodes a qualitative multiplication of B and C to yield A, i.e.:

■ The directions of change combine according to the product rule of differential calculus, i.e. qdir(A) = qmag(B) · qdir(C) + qmag(C) · qdir(B).
■ If qmag(B) = 0 or qmag(C) = 0, then qmag(A) = 0.
B is a monotonic function of A, i.e., among other things, if A increases then B also increases. This corresponds to the condition ∂B/∂A > 0. In case of a bulleted connection line instead of the arrow, the partial derivative has a negative sign.
This is a multivariate constraint corresponding to the relations ∂A/∂B > 0 and ∂A/∂C < 0.
Here, a bullet always indicates a negative partial derivative, whereas the arrow-like symbol encodes a positive partial derivative.
The constraint does not relate to the state variable A itself, but rather to its rate of change, i.e. dA/dt.
References

1. Root, T. L., and S. H. Schneider, 1995. Ecology and Climate: Research Strategies and Implications. Science 269: 334–341.
2. Kates, R. W., and W. C. Clark (eds.), 1999. Our Common Journey. Board on Sustainable Development – Policy Division – National Research Council. Washington, D.C.: National Academy Press.
3. Rotmans, J., 1998. Methods for IA: The challenges and opportunities ahead. Environmental Modeling and Assessment 3: 155–179.
4. Weber, M., 1904/1930. The Protestant Ethic and the Spirit of Capitalism. Translated by Talcott Parsons. New York: Charles Scribner’s Sons.
5. Schellnhuber, H. J., 1999. ‘Earth system’ analysis and the second Copernican revolution. Nature 402: C19–C23.
6. Gibson, C., E. Ostrom, and T-K. Ahn, 1998. Scaling issues in the social sciences. IHDP Working Paper No. 1, 85 pp.
7. Berry, M. V., 1978. Regular and irregular motion. In: S. Jorna (ed.). Topics in nonlinear dynamics. AIP Conference Proceedings 46, La Jolla.
8. Schellnhuber, H. J., A. Block, M. Cassel-Gintz, J. Kropp, G. Lammel, W. Lass, R. Lienenkamp, C. Loose, M. K. B. Lüdeke, O. Moldenhauer, G. Petschel-Held, M. Plöchl, and F. Reusswig, 1997. Syndromes of Global Change. GAIA 6, Nr. 1.
9. Haken, H., 1977. Synergetics. Berlin, Heidelberg, New York: Springer-Verlag.
10. CIESIN, 1992. Pathways of Understanding: The Interactions of Humanity and Global Environmental Change. Consortium for International Earth Science Information Network (CIESIN).
11. LUCC, 1995. Land-Use and Land-Cover Change. Science Plan. IGBP Report No. 35, Stockholm.
12. Kasperson, J. X., R. E. Kasperson, and B. L. Turner II (eds.), 1995. Regions at Risk. Tokyo: United Nations University Press.
13. Petschel-Held, G., and M. K. B. Lüdeke, 2001. Integration of Case Studies on Global Change by Means of Artificial Intelligence. Integrated Assessment 2: 123–138.
14. WBGU – German Advisory Council on Global Change, 1997. World in Transition: The Research Challenge. Berlin: Springer.
15. Petschel-Held, G., A. Block, M. Cassel-Gintz, J. Kropp, M. K. B. Lüdeke, O. Moldenhauer, F. Reusswig, and H.-J. Schellnhuber, 1999a. Syndromes of Global Change: A qualitative modelling approach to assist global environmental management. Environmental Modeling and Assessment 4, Nr. 4: 315–326.
16. Zimmermann, H. J., 1991. Fuzzy set theory and its applications. 2nd revised edition. Boston: Kluwer Academic Publishers.
17. Cassel-Gintz, M. A., M. K. B. Lüdeke, G. Petschel-Held, F. Reusswig, M. Plöchl, and G. Lammel, 1997. “Fuzzy-Logic Based Global Assessment on the Marginality of Agricultural Land Use.” Climate Research 8: 135–150.
18. Lüdeke, M. K. B., O. Moldenhauer, and G. Petschel-Held, 1999. “Rural poverty driven soil degradation under climate change: the sensitivity of disposition towards the SAHEL SYNDROME with respect to climate.” Environmental Modeling and Assessment 4, Nr. 4: 295–314.
19. Kates, R. W., and V. Haarman, 1992. “Where the Poor Live: Are the Assumptions Correct?” Environment 34: 4–11, 25–28.
20. Blaikie, P., and H. Brookfield, 1987. Land Degradation and Society. London, New York: Methuen.
21. Kuipers, B., 1994. Qualitative Reasoning: Modeling and Simulation with Incomplete Knowledge. Cambridge: MIT Press.
22. Wissel, C., 1989. Theoretische Ökologie. Berlin, Heidelberg, New York: Springer-Verlag.
23. Dordan, O., 1992. “Mathematical problems arising in qualitative simulation of a differential equation.” Artificial Intelligence 55: 61–86.
24. Aubin, J.-P., and A. Cellina, 1984. Differential Inclusions. Berlin: Springer.
25. Tóth, F. L., G. Petschel-Held, and Th. Bruckner, 1998. Climate Change and Integrated Assessment: The Tolerable Windows Approach. In: J. Hacker (ed.). Proceedings of the EU Advanced Study Course on Goals and Instruments for the Achievement of Global Warming Mitigation in Europe. Dordrecht: Kluwer: 55–77.
26. Petschel-Held, G., H.-J. Schellnhuber, Th. Bruckner, K. Hasselmann, and F. L. Tóth, 1999b. “The Tolerable Windows Approach: Theoretical and Methodological Foundations.” Climatic Change 41: 303–331.
27. Bruckner, Th., G. Petschel-Held, F. L. Tóth, H.-M. Füssel, C. Helm, and M. Leimbach, 1999. “Climate Change Decision-Support and the Tolerable Windows Approach.” Environmental Modelling and Assessment 4: 217–234.
28. Moldenhauer, O., Th. Bruckner, and G. Petschel-Held, 1999. The use of semi-qualitative reasoning and probability distributions in assessing possible behaviours of a socio-economic system. In: M. Mohammadian (ed.). Conference Proceedings of Computational Intelligence for Modelling, Control and Automation (CIMCA) 99. London: IOS Press: 410–416.
29. WBGU – German Advisory Council on Global Change, 1995. World in Transition: The Threat to Soils. Bonn: Economica Verlag GmbH.
30. Petschel-Held, G., M. K. B. Lüdeke, and F. Reusswig, 1999c. Actors, Structures and Environment: A Comparative and Transdisciplinary View on Regional Case Studies of Global Environmental Change. In: B. Lohnert and H. Geist (eds.). Coping with Changing Environments. London: Ashgate: 255–291.
31. Homer-Dixon, Th., 1999. Environment, Scarcity and Violence. Princeton: Princeton University Press.
32. George, A., and A. Bennett, 2000. Case Studies and Theory Development. Boston: MIT Press.
33. Voortman, R. L., 1998. “Recent Historical Climate Change and its Effect on Land Use in the Eastern Part of West Africa.” Physics and Chemistry of the Earth 23(4): 385–391.
34. Le Houérou, H. N., 1996. “Review: Climate change, drought and desertification.” Journal of Arid Environments 34: 133–185.
11 Polycentric Integrated Assessment

CLAUDIA PAHL-WOSTL
Interdisciplinary Institute for Environmental Systems Science, University of Osnabrück, Germany
Abstract
Transitions towards sustainability will require major changes in today's socio-economic systems. Such changes cannot be brought about by conventional policy measures. We advocate a new, polycentric understanding of policy making that invokes instances of social learning at different levels of societal organization. The notion of polycentric involves the integration of different levels of human choice and geographical domains. The spatial component involves the sequence local – regional – national – global; in addition, different types of human choice at different levels of societal organization (e.g., legal regulations, taxes, subsidies, local initiatives) must be combined. Understanding the impact of dealing with diverse "global change phenomena" at diverse levels of organization will require new approaches to human agency. Agent-based modeling, applied in participatory settings, is a promising novel approach to such choice problems [1]. The importance of these scaling issues is explored for the problems of climate change and water resource management. Whereas water issues have primarily been approached from a regional, even local perspective, the climate problem has been addressed in the first place at the global scale, with a global scientific and policy process (IPCC, Kyoto Protocol). Regarding climate change, the importance of addressing the topic at the regional scale is increasingly recognized: most choices will be made at the regional scale and will involve short-term decisions that are not directly related to climate change. Regarding water resource management, patterns of regional water scarcity may be compensated by complementary patterns of food trade, leading to major transfers of virtual water at the global scale. In both cases the coupling of different scales in space, organization and time poses major challenges for integrated assessment.
238 POLYCENTRIC INTEGRATED ASSESSMENT
Introduction

Integrated assessment (IA) may be defined as the scientific discipline that integrates knowledge and makes it available for decision processes. IA gained considerable international visibility through its activities in the field of climate change. Early approaches relied more or less on models as the means of integration. The decision process was perceived as the utility-maximizing choice of a single decision maker, and the measures taken into consideration were mainly of the centralized kind, like taxes. However, IA has made considerable progress in recent years. The issues tackled have broadened to encompass environmental problems and global change at large, and new methodological challenges have emerged (e.g., Rotmans [2], Rotmans and Dowlatabadi [3]). Given the fact that most global change phenomena result from the added effect of numerous activities at regional scales (e.g., Morgan and Dowlatabadi [4]), IA faces major challenges that will be summarized here under the notion of polycentric integrated assessment. Polycentric refers, on the one hand, to the need to consider different levels of societal organization and different types of social groups and measures. A modern understanding of governance can be based on the idea of actor and policy networks that are located between state, hierarchy and market (e.g., Pappi [5], Bressers et al. [6]). Less organized groups such as citizens also play an increasingly important role that has to be taken into account in designing IA processes. On the other hand, polycentric refers to the fact that IA has to take into account a range of scales in space and time. As we will see, the isolation of a single scale in space and time is hardly meaningful when dealing with complex environmental problems. This paper discusses some conceptual and methodological issues related to implementing a polycentric approach to integrated assessment and illustrates them in a number of problem domains.
The Decision Perspective

By its definition, IA has to build on some well-defined perception of decision processes. It makes a major difference whether decision making is perceived as maximizing a single objective goal function or as a process of negotiation among a set of actors with diverging subjective interests. Many IA analyses were based on the concept of the rational actor and the institution of a market. The rational actor paradigm is explained here very briefly; more detailed explanations can be found in any textbook on micro-economics or decision theory (e.g., Kreps [7]). A rational actor is an omniscient individual who has total knowledge about all his possible actions, their outcomes and their utility given different states of the world. Hence, he can always make the optimal decision maximizing his individual utility. His life happens in a market environment.
SCALING IN INTEGRATED ASSESSMENT 239
However, regarding environmental issues, three major problems arise in market economies due to their limited scope in space and time:

■ The environment is outside the boundaries of the free market, since most environmental services have no price and are thus not visible to the market.
■ Regarding time scales, the needs of future generations are outside the scope of market economies. The presence of a positive discount rate limits the time horizon of market economies to about one or two decades. This means the institution of a market is not responsive to environmental degradation: decisions are based on short time scales determined by the discount rate, thus excluding considerations extending further into the future.
■ The absence of a process-based understanding in economics renders attempts to define the spatial and temporal boundaries of an environmental problem quite futile – from an economic perspective. This point will be dealt with in more detail in later sections.
Attempts to correct market failures arising from economic activities with respect to the environment are generally based on the internalization of external effects. External effects are defined as effects where the welfare of economic units is affected by the economic activities of other units in ways other than through markets. Since markets exchange information only via the price, external effects may be internalized by introducing, for example, a tax. The most common approach to reaching decisions in this kind of framework is cost benefit analysis (CBA). CBA is used for the appraisal of public sector investment projects and other aspects of public policy. The total social benefits from a project are compared with the social costs, and a decision is taken on the project by the use of the decision rule: invest if the present value of benefits exceeds the costs. CBA was also applied to mitigation options to prevent climate change (e.g., Munasinghe et al. [8]). The costs are defined as welfare forgone due to investments in measures of abatement; the benefits are defined as damage from climate change that is prevented by these measures. Quantitative estimates may be derived from a global welfare function with a single global utility-maximizing decision maker (e.g., Nordhaus [9]). The problem with using a positive discount rate is obvious: due to the time scales involved, one has to deal with issues of intergenerational justice and equity that are neglected if potential future damage is discounted at the current market rate. However, the major problem of CBA is related to how it shapes the decision perspective based on traditional economic thinking, which has several severe shortcomings:

■ The system understanding is based on the assumption of an efficient equilibrium state, implying that any measures beneficial for the environment cause costs that can only be justified by prevented damage.
■ Preferences and utility functions are aggregated over a large number of actors to yield a representative agent.
■ Evolutionary dynamics of socio-economic systems are neglected.
■ There is little diversity in agents and their characteristics.
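The CBA decision rule and the time-horizon problem of a positive discount rate can be made concrete with a small numeric sketch. All figures here are hypothetical, chosen only to illustrate how discounting makes climate damages a century away nearly invisible to the decision rule:

```python
# Hedged sketch: how a positive discount rate shrinks far-future benefits in CBA.
# All monetary figures and rates are invented for illustration only.

def present_value(amount, years, rate):
    """Discount a future amount back to today at a constant annual rate."""
    return amount / (1.0 + rate) ** years

abatement_cost = 100.0   # hypothetical cost of mitigation today (arbitrary units)
avoided_damage = 1000.0  # hypothetical climate damage avoided at a future date

for horizon in (10, 50, 100):
    for rate in (0.01, 0.05):
        pv = present_value(avoided_damage, horizon, rate)
        decision = "invest" if pv > abatement_cost else "do not invest"
        print(f"horizon {horizon:3d} a, rate {rate:.0%}: PV = {pv:7.1f} -> {decision}")
```

At a 5% market rate the rule rejects the investment for any horizon beyond a few decades, even though the undiscounted avoided damage is ten times the cost; at 1% it accepts it. This is the sense in which the discount rate alone fixes the effective time horizon.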
As a consequence, choice problems involving non-marginal changes in the structure of today's economies, e.g., societal transitions involving institutional change, are largely outside the realm of traditional economic approaches. We have to seek a new understanding of individual and collective action, and correspondingly of institutional settings, and evaluate their relevance for integrated assessment.

Novel approaches to decision making

Figure 11.1 represents the structure of reasoning relevant for decision making. The pathways of reasoning for the rational actor are depicted by bold arrows. More complex models account for the processes of change in preferences and perceptions.
Figure 11.1: Representation of the pathways relevant for decision making. The pathways of reasoning characterizing the rational actor are depicted by bold arrows; more complex models account for the processes of change in preferences and perceptions. Further explanation in the text.
The rational actor paradigm assumes an agent with infinite computational capabilities. Based on his subjective probabilities, an agent is able to derive the optimal decision, optimizing his utility function over the decision space covering all possible choices and all possible states of the world. At the same time, agents endowed with perfect foresight live in an extremely simple social world: expectations about others are unambiguous, forecasts are perfect and choices optimal. The individual with infinite computational capacity thus inhabits a simple equilibrium world. The imperfect social individual, by contrast, lives in the complex world of real human beings, where expectations are contingent and path-dependent and where different perspectives and mental models exist. The
perception of reality, subjective judgements and probabilities are to a large extent socially constructed. Subjective probabilities as narrowly defined in the micro-economic perception of bounded rationality depend only on the state of information [7, 10]: two actors with the same state of knowledge should, by definition, have the same subjective probabilities (e.g., the same subjective assessment of the market potential of a new product). In a more advanced perspective one has to acknowledge that the processing of information is inherently subjective. The thin arrows in Figure 11.1 indicate that attitudes, affects and motives influence subjective beliefs and preferences. The input and the processing of information are time- and space-dependent. Personal values, previous experience and the embedding in a social network define what one may call a cognitive and time-dependent filter for the acquisition and processing of information. To account for such processes in the representation of agents in models, combinations of approaches from logic and probability theory are promising (e.g., Wooldridge [11]). Knowledge may be represented with so-called belief networks, which make it possible to represent uncertain, probabilistic knowledge and its dependence structure – e.g., causal and diagnostic reasoning. Figure 11.2 gives an overview of the processes involved in knowledge acquisition and decision making. A cognitive filter is responsible for information processing and for developing a subjective representation of beliefs about the world. As such, the representation of knowledge does not yet imply the use of any particular cognitive theory. It provides a coherent framework for description that allows one to derive a "taxonomy" of human behaviors: e.g., the goal-directed, planning engineer with rule-governed behavior [12], the profit-maximizing investor, the need-satisfying and habit-driven consumer [13].
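The distinction between causal and diagnostic reasoning in a belief network can be sketched with a minimal two-node example. The nodes (Drought → Scarcity) and all probabilities are invented for illustration; real belief networks are larger and usually handled with dedicated inference libraries:

```python
# Hedged sketch of a minimal belief network with two nodes: Drought -> Scarcity.
# All probabilities are hypothetical, chosen only to illustrate the two
# directions of reasoning the text mentions.

p_drought = 0.3                              # prior belief P(Drought)
p_scarcity_given = {True: 0.8, False: 0.1}   # conditional table P(Scarcity | Drought)

# Causal reasoning (cause -> effect): marginal probability of water scarcity.
p_scarcity = (p_drought * p_scarcity_given[True]
              + (1 - p_drought) * p_scarcity_given[False])

# Diagnostic reasoning (effect -> cause): Bayes' rule updates the belief in
# drought after scarcity has been observed.
p_drought_given_scarcity = p_drought * p_scarcity_given[True] / p_scarcity

print(f"P(scarcity) = {p_scarcity:.2f}")                          # causal: 0.31
print(f"P(drought | scarcity) = {p_drought_given_scarcity:.2f}")  # diagnostic: 0.77
```

The same conditional table supports both directions of inference; that is what makes belief networks attractive for representing an agent's uncertain knowledge and its dependence structure.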
Figure 11.2 also indicates that such an approach allows accounting for social interactions and dependence relations among agents. For an understanding of human-environment systems, the interactions between individual agency and the institutions governing social interactions are of major importance. Institutions are now a focus of research in different areas of the social sciences. New institutional economics focuses on the importance of institutions in determining transaction costs and thus the competitiveness of different economic systems (e.g., Furubotn and Richter [14]). Institutional analysis in social science emphasizes the importance of institutions for patterns of societal communication, public choice processes, and the relationship between self and society (e.g., Bakker [15], Ostrom [16]). Studies in institutional analysis may provide one framework for integrating approaches from economics and other social sciences, an integration of major importance for integrated assessment in general and integrated assessment modeling in particular. An institution may be defined very broadly as shared rules of human conduct (e.g., Crawford and Ostrom [17]). Rules enable individuals to form expectations concerning the actions of others. For example, if one is driving on a road one expects other drivers to respect the red light and stop.
Figure 11.2: A generic approach to representing decision making, using belief networks for the filtering and processing of information and probabilistic reasoning. Based on such an approach, a "taxonomy" of different behaviors may be derived (e.g., rule based, habit driven, utility maximizing). As indicated, the embedding of an individual in a social network will influence information processing by shaping the cognitive filter, and decision making by shaping social rules and habits.
Without such shared rules of conduct, life in a society would be impossible. Some institutions (laws) are enforced by legislation (e.g., traffic regulations). Others (customs) are shared by the members of a society and evolve and change in a social setting (e.g., shaking hands in welcome). Rules may focus on individual decision making (e.g., risk assessment) or operate at the level of society (e.g., policy networks). They may encompass regional, national or even international scales. The notion of scale, and the integration of different scales of analysis, is central to this approach. Understanding the impact of dealing with diverse "global change phenomena" at diverse levels of organization is currently one of the central tasks of institutional theorists studying global change processes [18, 19]. Cash and Moser [20] emphasized the need to develop adaptive assessment and management processes to link local and global scales. The implementation of institutional resource regimes for sustainable resource management often has to deal with the problem that the scale of established institutional settings does not match the appropriate scale for managing the environmental system (e.g., a river basin).
How can one design assessment processes, tools and methods that allow bridging different scales of analysis? How can the importance of different institutional settings be taken into account? Minsch et al. [21] point out the importance of institutional innovations and of a polycentric understanding of policy making for sustainable development. Current institutions were designed for stabilizing a fast-growing economy, not for managing change in a saturated economy. Therefore, major emphasis should be given to the analysis of institutions and patterns of change. A polycentric understanding of policy is based on the idea that decision making involves processes of social learning and the shaping of expectations. In a recent study for the Enquete Commission of the German Parliament, Minsch et al. [21] emphasized the need for a shift from "What" to "How" in the sustainability debate. Such a shift can be interpreted as a shift from goal-based to process-based decision making, and from hard to soft systems approaches in analyzing decision situations (introductions to the notion of soft systems analysis can be found in Checkland [22], Checkland and Scholes [23], and Flood and Romm [24]). A soft systems approach implies the analysis of subjective perceptions of an ill-defined problem situation. In questions related to sustainability, problems are often ill-defined, and perceived costs and benefits vary largely among the stakeholders involved. Arguments about refining goals may be quite futile if the uncertainties associated with the path to get there are very high; the costs may, for example, be path- and scale-dependent [25]. In these cases collective decision making is of major importance. Decisions may be guided by rules that are shared by a whole collective of agents. The shaping of collective expectations may be crucial to overcome lock-in situations – an issue that will be discussed in subsequent sections of the paper.
The Importance of Scales

Environment-human interactions across scales

The first choice of a system analyst is the appropriate level of analysis in space and time, given the problem under consideration. This is not trivial for the integrated assessment of environmental problems. The notion of a complex adaptive hierarchical system is used to exemplify how the appropriate level of analysis may be defined. Figure 11.3 shows the three-level approach generally taken into consideration. The level of analysis can be distinguished by the "typical" time scale of the processes under consideration (e.g., a day), the overall scale of analysis (e.g., a decade) and the grain of resolution (e.g., an hour). This applies equally to other dimensions, such as space or categories (e.g., individual, group, population of a state). Boundary conditions are defined as slowly varying external variables (e.g., climate changing on a time scale of decades). Underlying processes are defined as processes that are very fast and may thus be considered at an aggregated, parameterized level (e.g., processes on a time scale of minutes).
Figure 11.3: Different levels in a hierarchical system. The level of analysis is characterized by a "typical" time scale τ, together with a scale of investigation and a grain of resolution; boundary conditions vary on a slow time scale >> τ, underlying processes on a fast time scale << τ.
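The separation of time scales sketched in Figure 11.3 can be illustrated with a toy two-variable simulation. The dynamics, parameters and time scales below are invented; the point is only that a process much faster than the level of analysis (τ_fast << τ) stays close to its quasi-steady value and can therefore be treated as parameterized:

```python
# Hedged illustration of time-scale separation in a hierarchical system.
# A fast variable f relaxes toward an equilibrium g(s) set by a slow variable s.
# Because tau_fast << tau_slow, f effectively tracks g(s), so at the level of
# analysis f can be replaced by the parameterization f ~ g(s).
# All dynamics and numbers are invented for illustration.

tau_slow, tau_fast, dt = 100.0, 1.0, 0.1

def g(s):
    """Hypothetical equilibrium of the fast process, given the slow variable."""
    return 2.0 * s

s, f = 1.0, 0.0
max_gap = 0.0
for step in range(20000):                   # simulate 2000 time units (Euler)
    t = step * dt
    s += dt * (-(s - 2.0) / tau_slow)       # slow relaxation toward 2.0
    f += dt * (-(f - g(s)) / tau_fast)      # fast relaxation toward g(s)
    if t > 10 * tau_fast:                   # ignore the initial fast transient
        max_gap = max(max_gap, abs(f - g(s)))

print(f"max |f - g(s)| after transient: {max_gap:.4f}")  # small: f tracks g(s)
```

Conversely, boundary conditions (time scale >> τ) can be held constant over a simulation at the level of analysis, which is how the three levels of Figure 11.3 decouple.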
A hierarchical approach is also of major importance when one deals with the notions of stability and variability. Any statement about stability must always be made within the scales of the problem specification and with respect to a specific stability concept. The sloppy use of the notions of stability and equilibrium caused many concerns in ecosystem research. Stability was claimed to be a desirable property of "healthy" ecosystems; this view has been replaced by a more balanced perspective taking into account the importance of scale and change in any natural system (e.g., Pahl-Wostl [26, 27]). Similar concerns arise in the description of human-environment systems, in the choice of the appropriate reference state of the system for analyzing, for example, environmental change, adaptive responses or management strategies. Depending on the variable under consideration, the concepts of stability and equilibrium employed, and the scale of analysis, a human-environment system may be perceived as stable or not. Further, it has to be emphasized that the concept of equilibrium in economics differs greatly from the equilibrium concept for natural systems. Whereas the latter is derived from the description of fundamental processes in the system, an economic equilibrium is perceived as an optimal, efficient state. This failure to take dynamics into account leads to the puzzling situation that it appears irrelevant how fast a market equilibrium may be attained, or what the appropriate scale of analysis is.
The fact that little attention has been paid to scaling issues in economics may be attributed to the fact that linking space and time requires a profound understanding of the underlying processes; the virtual absence of process-based thinking in economics may therefore be a reason for this lack of focus. A notable exception is the more recent research in spatial economics (e.g., Fujita et al. [28]). The link between spatial and temporal scales can be expressed as follows:

■ a process (social, biological) may be characterized by a certain time scale τ;
■ a corresponding transport process may then be characterized by a certain spatial scale σ, namely the spatial distance covered during the time period τ.
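For a diffusive transport process, the spatial scale corresponding to a time scale τ grows as σ ~ √(D·τ). The following sketch uses an invented eddy diffusivity and invented process time scales purely to show how slower processes "see" larger spatial domains:

```python
# Hedged sketch: spatial scale sigma reached by a diffusive transport process
# during the typical time scale tau of a process (sigma ~ sqrt(D * tau)).
# The diffusivity D and the two time scales are hypothetical.

from math import sqrt

D = 0.1  # horizontal eddy diffusivity [m^2/s], invented for illustration

def patch_scale(tau_seconds):
    """Spatial scale sigma ~ sqrt(D * tau) covered by diffusion during tau."""
    return sqrt(D * tau_seconds)

day = 86400.0
for label, tau in [("growth process, tau ~ 1 day", day),
                   ("succession, tau ~ 100 days", 100 * day)]:
    print(f"{label}: sigma ~ {patch_scale(tau):.0f} m")
```

A process a hundred times slower is coupled to a spatial domain only ten times larger; this square-root relationship is why the pairing of temporal and spatial scales depends on the transport process involved.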
A typical example is the relationship between growth processes and diffusion in aquatic environments (e.g., Pahl-Wostl [26, 27]): the size of a patch depends on the spatial distance corresponding to the typical time scale of the growth process. Correspondingly, the spatial scale of the consequences of human action depends on the relationship between the time scale of the dynamic process and the spatial scale of the corresponding transport phenomena. Human activities have induced a speeding up of processes and a shift in the relationship between spatial and temporal scales. In particular, in human-environment systems major transfers relate to the exchange of non-material goods, where space no longer matters. Global change proceeds at an unprecedented pace. This causes a problem, since the speed of global change (e.g., the diffusion of lifestyles and technology) exceeds the adaptation potential of natural systems. It also offers certain opportunities, since fast change allows faster management responses to human-induced activities. In light of the considerations made above about the hierarchical nature of complex adaptive systems, this implies that the separation of scales and the distinction of a level of analysis cannot easily be accomplished: global decision processes and financial and material exchanges are not necessarily much slower than decision processes and financial and material exchanges at the regional scale. The scaling perspective is now analyzed in more detail for two different problem domains:

■ climate change and the excess production of carbon dioxide;
■ sustainable water resource management.
Climate change and water resource management from a scaling perspective in time and space

If one attempts to compare the typical scales of a phenomenon in space and time, one needs to take into consideration that transport processes link regions over space and time. Two aspects are thus of importance: the size of the spatial domain that is covered by the transport process, and the typical time scale of spatial transfers.
Figure 11.4: Comparison of the PSI sequence (pressure – state – impact) of the problem domains of (A) climate change and (B) water resource management in a space-time perspective (space: local – regional – national – global; time in years). Further explanation in the text.
Let us now compare climate change and carbon dioxide production with the issue of integrated water resource management by considering the scaling relationships along the PSIR sequence pressure – state – impact – response. Figure 11.4 shows a comparison of the PSI sequence of the two problem domains from a space-time perspective. The notions are defined as follows: pressure refers to human actions that cause a measurable change in the state of the environment; this change leads to impacts, defined as long-term changes.

■ Climate change and CO2 production. Pressure is inherently regional, given by energy consumption and fossil fuel burning in different countries. Due to the fast global diffusion of technologies and lifestyles, similarly high levels of energy consumption are adopted in most countries of the world. Due to the short time scale of mixing in the global atmospheric reservoir, regional carbon dioxide production is integrated at the global scale within a few years. Hence, the decisive change in the state of the environmental system is global. The impacts of "global" climate change are long-term and will be experienced at regional scales, caused by the regional manifestations of climate change. The most serious impacts are expected in developing countries that have contributed little to the overall problem of climate change. This raises serious concerns of equity and justice.
■ Quantity and quality of water resources. Pressure is inherently local and short-term (e.g., high water consumption, fertilizer use in agriculture). The state of the available water resource in terms of quantity and quality may be affected very fast. The environmental reservoirs of importance are regional aquifers; large rivers are responsible for directed transport processes across wider spatial distances. Impacts such as the depletion of an aquifer or groundwater pollution are thus experienced mid-term at regional scales. Natural processes cause the uneven distribution of precipitation and water availability. Regional water scarcity problems are not counteracted by any transport phenomena related to the global hydrological cycle. Transboundary transport processes may lead to pollution effects (e.g., acid rain), but in general the whole problem domain is much more localized.
The design of integrated assessment processes and the development of management strategies have to take into consideration the importance of scale and the importance of a polycentric approach:

■ An assessment of climate change requires linking regional response options with global policy processes.
■ An assessment of regional water scarcity requires searching for a global process to cope with regional problems of water shortage, a process that directs water flow to balance the natural pattern of a very uneven distribution of water resources.
■ The fast global diffusion of technologies and life-styles poses major challenges for both the assessment of climate change and the management of water resources.
■ An assessment of climate change has to take into consideration that, due to threshold effects and irreversible non-linear responses of the climate system, precautionary action is a necessity. Adopting a reactive mode may be associated with high risks.
■ An assessment of water scarcity has to take into consideration that the temporal scales and the recovery times regarding the dynamics of water resources are highly uncertain. Here too, the adoption of a reactive mode is associated with high risks.
Climate Change and Beyond

Given the nature of human-induced climate change, it is quite evident that the problem itself can only be tackled at the global scale. However, it is by now equally evident that early work in integrated assessment, typically represented by the DICE model with a global welfare function and a single decision maker, does not provide an appropriate decision perspective. Morgan and Dowlatabadi [4] summarized insights from many years of integrated assessment on global climate change. In particular, they emphasized that many decisions will be made through the individual choices of millions of organizations and citizens, and that these will be driven by local interests and conditions. The climate decision makers are diffuse groups spread all over the globe, who will make a number of sequential climate-related decisions that are primarily driven by local non-climate considerations. Dealing with the climate change decision problem thus requires a complex process of decision making, bridging scales in time, space and institutional settings. Cash and Moser [20] emphasized the need to develop adaptive assessment and management strategies. This is in line with a shift from a goal-based optimization framework to a process-based, multi-scale approach. It implies identifying a long-term target/vision (e.g., a low-energy society) and short-term options that trigger movement in the direction of the target, in a fashion of sequential decision making. It might be useful to clarify the difference between a goal and a process-based approach guided by a target/vision as used here. A vision refers to a moving target guiding the self-organizing, innovative forces of a society, forces that would otherwise remain diffuse. It differs from a goal in that it is a tangible image of a future society without being subject to the fierce arguments about exact definitions that characterize the operationalization of goals.
An example of fierce discussion about goals is provided by the arguments over the targets (e.g., a 5% reduction) for CO2 emissions. Given the huge uncertainties surrounding the costs of the different implementation strategies, some of these discussions have to be judged futile. A vision is comprehensive and synthesizes different goals and aspirations. That implies an embedding of climate policy in a wider range of societal concerns. Such an embedding is particularly important for considering mitigation options at the regional scale, where the most common approach is to consider adaptation options only. Isolated regional action reducing carbon dioxide emissions cannot prevent regional damage from climate change, which is caused by the global carbon dioxide budget. Hence, traditional cost-benefit arguments lead to the conclusion that the costs of investments in mitigation options at a regional scale cannot be justified, since climate is a common good and the benefits will be global [29]. However, regional action will be decisive for action at the global scale. Inherently, climate policy will not be a top-down process in which global agreements enforce a cascade of corresponding policies at national and regional scales. Nor will it be a bottom-up process in which, in a type of "world movement", a
new life style will emerge and spread over the globe as a whole. It will rather be an iterative process in which top-down and bottom-up forces mutually reinforce, or equally block, each other. Obviously, an assessment process should aim at fostering reinforcement and preventing blocking. How can one bridge scales from the individual citizen, making choices in his/her individual life style, to the global policy process and the Kyoto Protocol? Such questions were addressed within the CLEAR project (CLimate and Environment in Alpine Regions), a participatory integrated assessment from a regional perspective [25]. The research focussed on the individual in his/her role as a consumer adopting new products and making life-style choices, and as a citizen participating in democratic processes. Direct democracy in Switzerland, with its specific participation of citizens in decision processes, provides an excellent environment to study such approaches. The participatory integrated assessment used the method of citizen focus groups. A specific model and information platform on climate impacts and options was developed to inform the assessment process. Two models were developed that addressed mitigation options: an individual energy demand calculator emphasized options at the level of life-style choices, and the options model was designed to reflect a polycentric understanding of policy making by addressing a whole range of political measures at different levels of societal organization and different institutional settings – public policy, and measures that aim at changing informal rules and social norms. Figure 11.5 shows the list of options that were addressed in discussions with participants of citizen focus groups. We noted two major limitations:

■ Bridging scales from the individual to a global policy process is not trivial. The empowerment of citizens requires that they are able to identify options in their individual area of decision making and to realize their important role as individuals in a larger context, without feeling responsible for the problem as a whole [30].
■ Accounting for the combined effect of different types of measures that bridge scales and institutional settings (e.g., the combination of a tax with measures for public education), and for the uncertainties in such projections, is currently impossible given the analytical frameworks available. The discussion about the combination of different categories of measures thus had to remain largely at a qualitative level.
Changes in rules, norms, and shared habits are difficult to address. Given the current limitations of analytical frameworks in dealing with such options, the following needs are identified:
■ a modeling approach that can account for cognitive and social aspects of human behavior and for different modes of communication and interaction;
■ participatory model building to deal with uncertainties and with the aspects of a socially constructed reality that have to be addressed.
250 POLYCENTRIC INTEGRATED ASSESSMENT
Figure 11.5: Overview of the measures catalogue included in OPTIONS, a module of the CLEAR information platform developed for the participatory integrated assessment with citizen focus groups (see Pahl-Wostl et al. [25]).
A socially constructed reality refers here to the mutual shaping of expectations. Expectations may trigger and stabilize a certain behavioral pattern and a development trajectory of a system as a whole. In climate change one is interested in exploring development trajectories that decouple economic growth from the degradation of the environment. Evolutionary systems dynamics uses the metaphor of walking on a fitness landscape to describe the state space of a dynamic system (e.g., Pahl-Wostl [26], Kauffman [31]). The fitness landscape may be rugged or smooth. It may change its shape over time due to the consequences of walking on it! If socio-economic development is perceived as a local search process on such a fitness landscape rather than as a global optimization process, the search path is highly dependent on past development and contingent on the perceptions of the stakeholder groups involved.
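The difference between a local search process and global optimization on such a landscape can be sketched in a few lines of code (an illustrative sketch only; the landscape values and the greedy search rule are our own assumptions, not taken from the models cited):

```python
def rugged_fitness(x):
    """A toy rugged fitness landscape over states 0..19 (values are invented)."""
    peaks = [3, 5, 2, 8, 4, 1, 6, 9, 3, 1, 2, 5, 10, 4, 6, 2, 8, 3, 5, 1]
    return peaks[x]

def local_search(start, steps=50):
    """Greedy hill-climbing: move to a better neighbour, else stay put."""
    x = start
    for _ in range(steps):
        neighbours = [n for n in (x - 1, x + 1) if 0 <= n < 20]
        best = max(neighbours, key=rugged_fitness)
        if rugged_fitness(best) > rugged_fitness(x):
            x = best
        else:
            break  # stuck on a local peak: the search cannot see further
    return x

# The end point depends entirely on the starting position (path dependence):
print(local_search(0))   # stops at the local peak x = 1 (fitness 5)
print(local_search(10))  # reaches the global peak x = 12 (fitness 10)
```

A global optimizer would always report x = 12; the local searcher's outcome is contingent on where it starts, which is the point made above about development trajectories being contingent on the perceptions of the stakeholder groups involved.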
SCALING IN INTEGRATED ASSESSMENT 251
Figure 11.6: Lock-in effect preventing the spread of an innovation (e.g., new technologies in mobility). The cost curve (costs vs. spread of innovation) shows a threshold ∆c1 that must be overcome and a net cost reduction ∆c2 once the innovation has spread; its shape is of major importance for the transition from one regime to the next.
One may encounter lock-in effects where the search is constrained within the boundaries of the current attractor of system behavior. Figure 11.6 shows a typical cost curve characterizing a lock-in effect that prevents the spread of an innovation. The shape of the cost curve is of major importance for any transition to be accomplished. ∆c1 refers to the height of the cost threshold to be overcome. ∆c2 refers to the potential decrease in overall costs once an innovation has spread over the whole system of interest. Costs refer here to an aggregate for an economic system, a community as a whole. They may comprise the scale-dependent price of a new technology, the costs associated with learning new skills for manufacturing and handling, etc. For an integrated assessment it is crucial to take into account that:
■ Costs are path dependent and depend on the expectations and the patterns of choices made by the different individuals and stakeholder groups involved [25].
■ The shape of the aggregated cost curve and the cost curves for individual groups may be scale dependent (see also Cash and Moser [20]).
■ Individual decisions are not based on aggregated costs but on the costs perceived by different stakeholder groups.
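The roles of ∆c1 and ∆c2 can be made concrete with a stylized cost curve (a sketch under our own assumptions; the functional form and parameter values are invented for illustration, not drawn from the chapter's figure):

```python
def aggregate_cost(s, c0=100.0, dc1=20.0, dc2=30.0):
    """
    Stylized aggregate cost as an innovation spreads through a system.
    s   : fraction of the system that has adopted the innovation (0..1)
    c0  : cost level of the incumbent regime
    dc1 : cost threshold (hump) to be overcome mid-transition
    dc2 : net cost decrease once the innovation has fully spread
    """
    hump = dc1 * 4 * s * (1 - s)   # transition hump, peaking at s = 0.5
    trend = -dc2 * s               # costs fall as adoption completes
    return c0 + hump + trend

print(aggregate_cost(0.0))  # 100.0 : incumbent regime
print(aggregate_cost(1.0))  # 70.0  : cheaper once fully adopted
# At intermediate spread (e.g. s = 0.3) cost lies ABOVE c0: this hump is
# the lock-in threshold that stops myopic cost-minimizers from starting.
```

The sketch shows why an innovation that is cheaper in aggregate once fully adopted can nevertheless fail to spread: every early step raises costs before the saving ∆c2 is reached.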
The most obvious example of such a lock-in effect is given by patterns of energy consumption in general, and mobility behavior in particular. A comparison among different OECD countries is already quite informative. A US citizen
has an average energy consumption of about 10,000 watts1. In most European countries the average per capita energy consumption is at a level of about 6,000 watts. Energy efficiency, defined as GDP per watt, is much higher in European countries. One reason is that US society largely developed during the time when the automobile was already available. This fostered the adoption of a certain type of infrastructure and a highly energy-intensive pattern of mobility. Breaking out of such a lock-in situation requires concerted action comprising national action in legislation and investment strategies, regional demographic planning, and habit breaking by consumers who are entirely adapted to a certain mobility behavior. Regarding habit breaking it is interesting to note that in Switzerland an increasing number of people live in car-free households. However, giving up the car is hardly ever a conscious decision. Due to changes in personal circumstances consumers may be forced to explore a new type of mobility behavior and discover its positive benefits. Hence their previous behavior did not reflect their optimal choice. This is an example of a lock-in effect at the local scale. Habits reduce the costs associated with seeking and processing information and with making conscious decisions [13]. Habits are stabilized by social acceptance and support their adopters' identification with a social group. Novel modes of behavior emerge slowly. The dynamics of change in consumer preferences is an important but largely unexplored area of research. It involves processes of individual and collective learning. If one accounts for the richness of cognitive behavior at the level of the individual, scaling up is not easily accomplished [1, 13]. This poses challenges for modeling at different scales.
In any system, the diversity in the characteristics of individuals and the complex patterns of social interactions render aggregation difficult (e.g., Pahl-Wostl [26]). Can behavior at the level of an aggregated consumer group be described by the same approach chosen at the level of the individual? This is the current practice of the "representative agent device" used in many CGE models, a practice that is increasingly criticized (reviewed in Leitner et al. [33]). Analytical approaches to aggregation already reach their limits in the much simpler situation of consumers obeying the rational actor paradigm. A promising route towards deriving descriptions across a range of scales is comprehensive and rigorous simulation with agent-based modeling frameworks [1].
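Why the representative agent device can fail is easy to show with a minimal threshold model of adoption, in the spirit of agent-based approaches (an illustrative sketch; the threshold values and update rule are our own assumptions): replacing a heterogeneous population with a single "mean" agent changes the aggregate outcome qualitatively.

```python
def simulate_adoption(thresholds, rounds=20):
    """
    Threshold dynamics: an agent adopts once the adoption share
    among all agents reaches its personal threshold.
    Returns the final adoption share (0..1).
    """
    n = len(thresholds)
    adopted = [t <= 0.0 for t in thresholds]
    for _ in range(rounds):
        share = sum(adopted) / n
        adopted = [a or t <= share for a, t in zip(adopted, thresholds)]
    return sum(adopted) / n

# Heterogeneous population: thresholds 0.0, 0.1, ..., 0.9
hetero = [i / 10 for i in range(10)]
# "Representative agent": everyone carries the mean threshold, 0.45
homo = [sum(hetero) / len(hetero)] * 10

print(simulate_adoption(hetero))  # 1.0 : a cascade to full adoption
print(simulate_adoption(homo))    # 0.0 : nobody ever moves first
```

The heterogeneous population cascades to full adoption because the zero-threshold pioneer pulls the next agent along, and so on; the homogeneous "average" population never starts. The mean of the parts does not predict the behavior of the whole.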
1 Energy consumption is expressed here in units of power, as suggested by Imboden and Jaeger [32].

Sustainable Management of Water Resources
In contrast to the issue of climate change, where the demand for action is largely determined by concerns about future damage, water resource management has to deal with severe current problems of shortage and pollution in
different parts of the world. The situation will worsen in the future if current practices continue (e.g., Cosgrove and Rijsberman [34]). Traditionally, water-related problems have been approached from a rather narrow and fragmented perspective. The EU water policy, with its numerous directives targeted at single issues in isolation, is a prime example of a fragmented water policy [35]. With the advent of the new European Water Framework Directive the situation changes drastically: European water policy adopts a more polycentric approach. Of particular relevance for our considerations are the integration of the previously fragmented European water policy, the participation of stakeholders in adopting the management plans, and the introduction of the river basin as the primary management unit. The idea of integrated management at the basin scale has increasingly gained importance. River basins are the natural context for water resource management. They are defined by the watershed limits of a system of waters flowing into a common destination. It is not always trivial to clearly define river basins – in particular in the area of estuaries. Further, man-made flows (e.g., water supply, canals) may induce transfers from one river basin to another. Despite these difficulties in clearly delineating system boundaries from an environmental perspective, river basins are important management units that are increasingly being adopted in countries around the world. This is a major advance over previous approaches. However, the basin scale hardly ever coincides with the boundaries of institutional settings. Only in a small number of countries are river basin management schemes in operation. Stakeholders are in general not organized at the basin scale, since it is not a scale of social organization. The need for institutional change and innovation is obvious.
In the following, two different topics are addressed to show the importance of scales and of a polycentric approach to policy making for water resource management:
■ Transformation processes and lock-in effects as a function of scale.
■ Market-based institutions from the local to the global scale.
Societal transitions and lock-in effects
The interaction between a socio-economic system and water resources depends largely on the technologies and institutional resource regimes at the interface. Technologies are broadly defined to comprise not only a specific technique but the whole pattern of institutional settings. Lock-in effects, already discussed in the context of climate change, arise here as well. An example is given by urban water management. Figure 11.7 shows the different interdependent components stabilizing the current system of water supply, in particular, and urban water management, in general, in many OECD countries. A lock-in situation arises due to the long lifetimes and high fixed costs of infrastructure. Rules of good practice in the engineering
Figure 11.7: Determinants for lock-in effects in urban water management.
community, consumer habits, and institutional inertia are further impediments to change. Such a situation has been explored for the city of Zürich, where the planning of supply capacities does not reflect the need to improve flexibility for responding to change [12, 36]. Risks are prevented by high investments in technology and the establishment of several layers of security. Efficiency in both economic and ecological terms is unsatisfactory. Here too, the adoption of new technologies and new institutional arrangements is a process that can only be accomplished through concerted action involving different stakeholder groups. In particular, citizens will play a more important role – as consumers making technological choices and as citizens making political choices. However, the adoption of innovations is also prevented by the high degree of fragmentation of the water sector. Figure 11.8 shows an overview of important parts of the coupled urban-rural system. Agriculture may pollute the groundwater resources. The nitrate pollution of groundwater resources, and thus of drinking water supplies, is nowadays a pressing problem in many regions with intensive agricultural activities. Once the nitrate concentration in a well exceeds the limit, it may take a decade or more until a reduction in fertilizer application has an effect. We encounter here a mismatch between the time scales of pollution and effect. The soil reservoir retains nitrate for several years before the polluting effect can be detected in the well. The system thus retains a "memory" over years, even when the response options at the level of the original source, agricultural practices, are drastic and immediate. In general, the immediate response strategy is either the closing down of wells for drinking water supply or the purification of drinking water with highly sophisticated and expensive technologies.
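The mismatch between the time scales of fertilizer input and well response can be illustrated with a minimal two-reservoir model (a sketch under our own assumptions; the rate constants are purely illustrative and not calibrated to any real aquifer):

```python
def simulate_nitrate(years=30, cut_year=10, k_soil=0.15, k_gw=0.3):
    """
    Two linear reservoirs in series (soil store -> groundwater/well).
    Fertilizer input is constant, then cut to zero at `cut_year`.
    Returns the yearly well-concentration trajectory (arbitrary units).
    """
    soil, well = 0.0, 0.0
    trajectory = []
    for t in range(years):
        inflow = 1.0 if t < cut_year else 0.0
        leach = k_soil * soil            # slow release from the soil store
        soil += inflow - leach
        well += leach - k_gw * well      # the well lags behind the soil
        trajectory.append(well)
    return trajectory

traj = simulate_nitrate()
# Even after fertilizer use stops in year 10, the soil store keeps
# feeding nitrate into the well for years: the system "remembers".
```

With these settings the well concentration peaks only after the fertilizer cut and is still above half its peak value five years later, mimicking the decade-scale memory described above.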
Further, waste water treatment plants release additional nutrients into the environment. Each of these fields of activity is dealt with in isolation. A more comprehensive approach would take the perspective of the system as a whole and attempt, for example, to close nutrient cycles. However, costs and benefits vary as a function of scale. Hence one may pose the question: what is the appropriate scale at which to introduce an innovation?
Figure 11.8: Some urban-rural couplings in urban water management (water supply and supply management, agriculture and artificial fertilizer, consumers, groundwater pollution, the technology market, water treatment, and waste water treatment). A problem for innovation arises since the current institutional setting does not support developing a common strategy for an urban-rural system, and lock-in effects prevent change.
Further, the uncertainties in the assessment are huge. Whereas uncertainties in the dynamics of the natural resource have received a lot of attention in groundwater modeling, uncertainties in the dynamics of human behavior have largely been neglected. Currently prevailing attitudes and technological systems spread over the whole world at an unprecedented pace. Big water companies transfer western technologies to all large cities of the world. Given what is known about the emergence of lock-in situations, this can hardly be perceived as a desirable development.

Market-based institutions from local to global scales
The market is in general perceived as the single best institution for the efficient allocation of a scarce resource. This is one reason why regulatory reform [37], and in particular market-based approaches to water resource management, are receiving increasing attention [38, 39]. A multi-scale approach seems warranted that takes additional processes into account, such as rule-governed behavior, complex patterns of interactions, imbalances between supply and demand, and negotiations. Let us briefly address this issue by looking at different scales of the problem of water scarcity:
■ The local scale of the community → allocation of water among different groups within a local community. The members may have different access to water. Allocation and communication patterns may be governed by cultural perspectives and informal rules within a community. The empowerment of the different social groups needs careful consideration.
■ The regional scale of a province → allocation of water among different user groups, e.g., domestic and industrial demand, and agricultural use for irrigation. In such settings the spontaneous emergence of informal water markets has proven to be quite likely. This is the scale at which new technologies may be adopted – given that their market potential, and thus their price, have been judged from a wider perspective.
■ The scale of a large transnational river basin → allocation of water among different areas within the basin with different water availability, e.g., upstream and downstream areas. Transnational management schemes and formal arrangements for trading water rights need to establish efficient institutional settings that prevent transaction costs from becoming too high.
■ The global scale → allocation of water among different regions and nations of the world with different levels of water availability and water scarcity. The appropriate commodity for trading is food, and thus virtual water. Local production of food would minimize transaction costs. However, institutional arrangements and coordinated investment strategies are largely missing. This is an area of major interest in current research.
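To see what a purely price-based allocation mechanism captures – and what it leaves out – consider a stylized merit-order sketch (our own illustration; the group names, quantities, and unit values are invented):

```python
def allocate_water(supply, demands):
    """
    Merit-order allocation of a scarce supply among user groups.
    demands: list of (group, quantity, value_per_unit) tuples.
    Returns the allocation and the marginal (market-clearing) value.
    Access rights, kinship, and informal norms are deliberately absent.
    """
    remaining = supply
    allocation, marginal_value = {}, 0.0
    for group, qty, value in sorted(demands, key=lambda d: -d[2]):
        served = min(qty, remaining)   # serve the highest-value use first
        allocation[group] = served
        remaining -= served
        if served > 0:
            marginal_value = value     # value of the last unit allocated
    return allocation, marginal_value

demands = [("domestic", 30, 10.0), ("industry", 40, 6.0), ("irrigation", 80, 2.0)]
alloc, price = allocate_water(100, demands)
print(alloc)   # {'domestic': 30, 'industry': 40, 'irrigation': 30}
print(price)   # 2.0: rationed irrigation sets the marginal value
```

Such a sketch makes the limitation discussed below tangible: everything that matters here is a quantity and a unit value, whereas questions of kinship, access, and power have no place in the data structure at all.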
An important field of action lies at the global scale: breaking one global tendency and fostering the emergence of another. There is a need to break the global diffusion of uniform technologies and to foster the development of technological solutions and institutional settings adapted to the characteristics of a region. The global patterns of food trade should be exploited to equilibrate differences in water supply across regions through indirect imports of water as virtual water in food (e.g., Cosgrove and Rijsberman [34], Zehnder [40]). Linking food and water policy may also help to overcome the fragmentation of institutions in the water sector. However, it is also evident that the management of water resources from the regional to the global scale requires a multi-scale perspective and modeling approaches that allow a nested treatment of the problem and of institutional interactions across scales. Informal markets may emerge spontaneously as spot markets with water as the commodity. More formally, one may introduce water rights and, in the next step, water markets with tradable water rights. However, limiting the perception of market-based institutions to a narrow microeconomic perspective and to price-based allocation mechanisms is insufficient. Questions of kinship, access to water, and power relationships need to be taken into account (e.g., Bruns and Meinzen-Dick [41]). They operate at different levels of societal organization and cannot easily be accounted for in the traditional market approach. Informal social norms at the local level may be much more important than (formal) property rights and public policies for explaining (but also for modifying!) the behavior of water users. There is an analytical/empirical
need to know more about these social norms; but there is also a political/practical need for information and persuasion instruments in order to modify users' behavior. Thus, "management strategies" ("policy tools") must also take information and persuasion instruments into account (and not only planning, economic incentives, and legal prescriptions). All in all, we advocate that cognitive elements (from the individual perception of water problems to the assessment of the effects of more integrated water management) should be analyzed in more detail.
Summary and Challenges for Integrated Assessment
Assessments related to global change have to address decision problems in a polycentric approach, bridging a range of different scales in space and time and levels of societal organization, and dealing with different institutional settings that cannot easily be scaled up or down. The analysis of the structure and dynamics of informal social rules and norms may be equally or even more important than the analysis of regulatory frameworks. This insight is based on a new understanding of policy making and institutional dynamics. One conclusion is the need to improve the representation of the human dimension in both integrated assessment models and processes. We advocate participatory agent-based social simulation as a new approach to account for the human dimension in such a polycentric approach to integrated assessment. Agent-based modeling (ABM) is a broad term that embraces a wide range of approaches from computational economics, cognitive psychology, artificial intelligence, and computer science. ABM allows decision-making processes to be represented explicitly and the dynamic behavior of socio-economic systems to be accounted for. An agent may represent an individual and/or an organization (e.g., an association, the government). Processes of scaling up and down to represent decision-making processes at different levels of aggregation are major research questions. Up to now no coherent approach has emerged. This may also not be warranted at the current exploratory stage, where a number of different approaches should be followed. ABMs are particularly suited to participatory settings since they allow decision-making processes to be represented in a more realistic fashion. Starting from stakeholder perspectives means really including the human dimension in integrated assessment processes.
The building and application of models in participatory settings is of particular importance if uncertainties and decision stakes are high [42]. An improved understanding of human-environment systems can only be achieved by linking theoretical and applied research, and by linking approaches focusing on agents (representations of individual human actors with cognitive functions of varying complexity) with approaches focusing on system behavior (interaction of agents, institutional change). Figure 11.9 sketches the main
areas of research. Starting from a focus on complex individual agents and acknowledging cognition as a source of complexity and uncertainty is rather new. The system's perspective has a longer tradition. Complexity arises from agents' interactions in social networks. There is a certain tradeoff between making individual agents very complex and investigating the dynamics that arise from agents' interactions. However, to understand the emergence of norms and the dynamics of institutions one has to take into account the embeddedness of individuals in social networks and the internal representation of institutions (e.g., shared norms, rules) in an individual's mind.
Figure 11.9: Different areas of research in agent-based modeling that should be explored simultaneously.
A new generation of models is required that allows a nested representation of different scales of analysis – local, regional, global – and the investigation of the shaping of expectations across scales. For agent-based models, this requires improving the understanding of how agent behavior is represented at different levels of aggregation. Given the high degree of uncertainty and the decision stakes involved, the importance of participatory model development and application cannot be overstated [1, 43]. In a polycentric approach to integrated assessment, model building and development is an essential part of the assessment process, thus posing major challenges for validation. Models should constrain the space of plausible future scenarios and provide a quantitative base wherever appropriate. At the same time they should allow exploring the whole range of plausible scenarios and the indeterminacies that emerge from the degrees of freedom inherent in human choice.
References
1. Moss, S., C. Pahl-Wostl, and T. Downing, 2001. "Agent based integrated assessment modeling." Integrated Assessment, 2: 17–30.
2. Rotmans, J., 1998. "Methods for IA: The challenges and opportunities ahead." Environmental Modeling and Assessment, 3: 155–179.
3. Rotmans, J., and H. Dowlatabadi, 1998. Integrated Assessment Modeling. In: S. Rayner and E. Malone (eds.). Human Choice and Climate Change: The Tools for Policy Analysis. Washington: Battelle Press.
4. Morgan, G. M., and H. Dowlatabadi, 1996. "Learning from integrated assessment of climate change." Climatic Change, 34: 337–368.
5. Pappi, F. U., 1999. "Netzwerke zwischen Staat und Macht und zwischen Theorie und Methode." Soziologische Revue, 22: 293–300.
6. Bressers, H., L. J. O'Toole, and J. Richardson (eds.), 1995. Networks for Water Policy: A Comparative Perspective. London: Frank Cass.
7. Kreps, D., 1988. Notes on the Theory of Choice. Boulder: Westview Press.
8. Munasinghe, M., P. Meier, M. Hoel, S. W. Hong, and H. A. Aaheim, 1996. Applicability of Techniques of Cost-Benefit Analysis to Climate Change. In: J. P. Bruce, H. Lee, and E. F. Haites (eds.). Climate Change 1995: Economic and Social Dimensions of Climate Change. Contribution of Working Group III to the Second Assessment Report of the IPCC. Cambridge: Cambridge University Press: 145–178.
9. Nordhaus, W., 1994. Managing the Global Commons: The Economics of Climate Change. Cambridge, MA: MIT Press.
10. Kreps, D., 1990. A Course in Microeconomic Theory. Cambridge: Cambridge University Press.
11. Wooldridge, M., 2000. Reasoning about Rational Agents. Cambridge, MA: MIT Press.
12. Tillman, D., T. Larsen, C. Pahl-Wostl, and W. Gujer, 1999. "Modeling the actors in water supply systems." Water Science and Technology, 39: 203–211.
13. Kottonau, J., J. Burse, and C. Pahl-Wostl. Submitted. "Simulating the Formation of Attitude Strength: a memory-based cognitive architecture."
14. Furubotn, E. G., and R. Richter, 2000. Institutions and Economic Theory. Ann Arbor: The University of Michigan Press.
15. Bakker, K. (ed.), 1999. Societal and Institutional Responses to Climate Change and Climatic Hazards: Managing Changing Flood and Drought Risk: A Framework for Institutional Analysis. SIRCH Working Paper No. 3.
16. Ostrom, E., 2000. "Collective Action and the Evolution of Social Norms." Journal of Economic Perspectives, 14: 137–158.
17. Crawford, S., and E. Ostrom, 1995. "A grammar of institutions." American Political Science Review, 89: 582–600.
18. Gibson, C., E. Ostrom, and T-K. Ahn, 1998. Scaling Issues in the Social Sciences. IHDP Working Paper No. 1. IHDP, Bonn.
19. Young, O. A., A. Agrawal, L. A. King, P. H. Sand, A. Underdal, and M. Wasson, 1999. Institutional Dimensions of Global Environmental Change. IHDP Report No. 9. IHDP, Bonn, Germany.
20. Cash, D. W., and S. C. Moser, 2000. "Linking global and local scales: designing dynamic assessment and management processes." Global Environmental Change, 10: 109–120.
21. Minsch, J., P-H. Feindt, H-P. Meister, U. Schneidewind, and T. Schulz, 1998. Institutionelle Reformen für eine Politik der Nachhaltigkeit. Berlin: Springer.
22. Checkland, P., 1993. Systems Thinking, Systems Practice. Chichester: Wiley.
23. Checkland, P., and J. Scholes, 1990. Soft Systems Methodology in Action. New York: Wiley.
24. Flood, R. L., and N. R. Romm (eds.), 1996. Critical Systems Thinking: Current Research and Practice. New York: Plenum Press.
25. Pahl-Wostl, C., C. Schlumpf, A. Schönborn, M. Büssenschütt, and J. Burse, 2000. "Models at the interface between science and society: impacts and options." Integrated Assessment, 1: 267–280.
26. Pahl-Wostl, C., 1995. The Dynamic Nature of Ecosystems: Chaos and Order Entwined. Chichester: Wiley.
27. Pahl-Wostl, C., 1998. Ecosystem Organization Across a Continuum of Scales: A Comparative Analysis of Lakes and Rivers. In: D. Peterson and T. Parker (eds.). Scale Issues in Ecology. New York: Columbia University Press: 141–170.
28. Fujita, M., P. Krugman, and A. Venables, 1999. The Spatial Economy. Cambridge, MA: MIT Press.
29. OcCC (Organe consultatif en matière de recherche sur le climat et les changements climatiques), 1998. Auswirkungen von extremen Niederschlagsereignissen. Sekretariat OcCC, Bern: ProClim.
30. Schlumpf, C., J. Behringer, G. Dürrenberger, and C. Pahl-Wostl, 1999. "The personal CO2-calculator: A modeling tool for participatory integrated assessment methods." Environmental Modeling and Assessment, 4: 1–12.
31. Kauffman, S., 1993. The Origins of Order. New York: Oxford University Press.
32. Imboden, D. M., and C. C. Jaeger, 1998. Energy and Environment in the Future. Contribution to the OECD Conference "Energy: The Next Fifty Years." Paris, July 7–8, 1998.
33. Leitner, S., S. DeCanio, and I. Peters, 2001. Incorporating behavioral, social, and organizational phenomena in the assessment of climate change mitigation options. In: E. Jochem, J. A. Sathaye, and D. Bouille (eds.). Proceedings of the IPCC Expert Meeting on Conceptual Frameworks for Mitigation Assessment from the Perspective of Social Science. London: Kluwer Academic Publishers.
34. Cosgrove, W. J., and F. R. Rijsberman, 2000. World Water Vision: Making Water Everybody's Business. World Water Council. London: Earthscan Publications.
35. Nunes Correia, F. R., and R. A. Kraemer (eds.), 1997. Dimensionen europäischer Wasserpolitik. Eurowater/Länderarbeitsgemeinschaft Wasser. Berlin: Springer.
36. Tillman, D., T. Larsen, C. Pahl-Wostl, and W. Gujer, 2000. "Interaction analysis of the stakeholders in water supply systems." Water Science and Technology, 43: 319–326.
37. Spulber, N., and A. Sabbaghi, 1998. Economics of Water Resources: From Regulation to Privatization. Dordrecht, The Netherlands: Kluwer.
38. Easter, K. W., M. W. Rosegrant, and A. Dinar, 1998. Markets for Water: Potential and Performance. Dordrecht, The Netherlands: Kluwer.
39. Easter, K. W., M. W. Rosegrant, and A. Dinar, 1999. "Formal and Informal Markets for Water: Institutions, Performance, and Constraints." The World Bank Research Observer, 14: 99–116.
40. Zehnder, A., 1999. "Water use and food production – an international collaboration?" EAWAG News, 46: 1–3.
41. Bruns, B. R., and R. Meinzen-Dick (eds.), 2000. Negotiating Water Rights. International Food Policy Research Institute. New Delhi: Vistaar Publications.
42. Funtowicz, S., and J. Ravetz, 1993. "Science for the post-normal age." Futures, 25: 735–755.
43. Pahl-Wostl, C., M. van Asselt, C. Jaeger, S. Rayner, C. Schaer, D. Imboden, and A. Vckovski, 1998. Integrated Assessment of Climate Change and the Problem of Indeterminacy. In: P. Cebon, U. Dahinden, H. Davies, D. Imboden, and C. Jaeger (eds.). Views from the Alps: Regional Perspectives on Climate Change. Cambridge, MA: MIT Press.
12 Emergent Properties of Scale in Global Environmental Modeling – Are There Any?

WILLIAM E. EASTERLING1 AND KASPER KOK2
1 Department of Geography and Center for Integrated Regional Assessment, The Pennsylvania State University, United States
2 International Centre for Integrative Studies, University of Maastricht, The Netherlands
Abstract
This essay argues that much of the concern over issues of scale in the modeling of complex human-environment systems – of which integrated assessment models are a special case – tends to be preoccupied with bottom-up aggregation and top-down disaggregation. Deep analysis of the underlying explanation of scale is missing. One of the intriguing propositions of complex systems theory is the emergence of new structures at a high level of scale that are difficult if not impossible to predict from constituent parts. Emergent properties are not the mysterious creation of "new material" in the system, but rather the placement of the components of the system into their logical contexts (scales) so that the observer/modeler can see structures arise from them for the first time. The stochastic interaction among low-level elements that gives rise to emergent properties may be part of a larger process of self-organization in hierarchical systems. Self-organization and attendant emergent properties constrain low-level elements through a network of downwardly propagating positive feedbacks. Those feedbacks not only tend to hold the system in a temporarily stable state, but they also render it vulnerable to radical reorganization by rapid external forcing. The vulnerability of the US agricultural production system to climate change is given as an example of how a self-organizing, hierarchical system paradoxically may become susceptible to large external shocks as a result of the emergence of high-level structures that seek to protect its low-level components from short-term variability. Simulations of changes in Honduran maize production in the aftermath of Hurricane Mitch using the CLUE land use model demonstrate the influence of multi-scale complexity on the resilience of land use after
disturbance. Finally, it is argued that improved understanding of emergent properties of scale may give fundamental insight into the conditions of surprise.
Scales in Climate Change Impact Assessment
The quest for new scientific knowledge cannot escape certain dualities such as cause-effect, model-subject, and observer-object [1]. These dualities condition how we learn, what we learn, how we express the results, and what we do with them afterward. They are the human constructs that are used to distinguish order from disorder. Scale1 is a construct that permits the observer to locate self relative to a set of objects distributed in space, time, and magnitude. It explains nothing in and of itself, but its perspective facilitates the discovery of pattern and process [2, 3]. To examine issues of scale in the epistemology of system behavior is to stray away from reductionism and toward an understanding of the relations between the components of a system and the system as a whole. The goal of this essay is to reflect on whether the concept of emergent properties of scale – real or imagined – is a useful construct to guide the development of models of global human-environment systems that consist of regional components and sub-components. While problems of global environmental change are most uniquely characterized by their scale dimensions, very little deep analysis of the meaning of scale has been applied to the resolution (usually by modeling) of those problems [4]. Scale surfaces mostly as a practical modeling problem of scaling up from the very small to the very large or scaling down from the very large to the very small [5]. Scaling up and scaling down raise another duality of global environmental change research, captured in the two modeling paradigms – "bottom-up" and "top-down" – that dominate the field of integrated assessment modeling (IAM), especially as it pertains to the simulation of biophysical and related human response to climate change [6]. The reference point for both paradigms is spatial and temporal scale.
A bottom-up approach is typified by process-level simulation of biophysical response to a change in climate variables across a range of “representative” modeling sites. The results are then passed to an integrative model (usually economic) of the region or globe that contains those sites in order to deliver a quantity that has policy relevance (e.g., Rosenberg [7], Parry et al. [8, 9], Rosenzweig and Parry [10]).
1 We use Gibson et al.’s [2] taxonomy of scale-related terms for this paper. Scale refers to the spatial or temporal dimensions used to measure phenomena. Extent is the size of the spatial or temporal dimension of a scale. Resolution is the precision of measurement of the objects of a scale. Levels are the units of analysis located at the same position on a scale.
SCALING IN INTEGRATED ASSESSMENT 265
A top-down approach is typified by the development of reduced-form relations between climate, biophysical, and socioeconomic variables that are estimated (often econometrically) from data pooled at regional levels in order to estimate global impact directly [11, 12, 13, 14]. Also included in this class are Ricardian (or “ergodic,” according to Schneider et al. [15]) economic modeling approaches that statistically relate climate variables to land rents in cross-section (i.e., across regions at one point in time) in order to estimate national impact and adaptation [16, 17].

Both paradigms incorporate the results of large-scale general circulation model (GCM) experiments of climate change in order to simulate impacts. Mismatches in scale resolution between the systems being modeled – illustrated best by GCM results with a resolution of hundreds of kilometers feeding site-specific process models with resolutions of a few meters – call into question the reliability of the simulated impacts [18].

Bottom-up approaches are conducive to the construction of regional profiles of climate change impacts from detailed process studies; important climate effects are often portrayed mechanistically, and adaptive response can be tested in controlled sensitivity experiments [19]. Top-down approaches, particularly global IAMs, permit the global change problem to be represented as a tightly coupled biophysical and social system with explicit linkages and feedbacks among components; very little is exogenous. Their use of statistical aggregates in the modeling of system components allows estimation of whole-system adaptation rather than an ad hoc sampling of adaptive strategies, as in the case of bottom-up approaches.

Scale-related criticisms apply to both paradigms. Bottom-up approaches are criticized for the crudeness with which site-specific model results are aggregated to derive regional estimates [5, 19].
Linearity among scales is often presumed in the aggregation of site model results. Assumptions of linearity imposed by the averaging of nonlinear relations across multiple sites in space may result in substantial aggregation error, as illustrated in Figure 12.1: the greater the non-linearity, the greater the aggregation error. Top-down approaches are criticized for their generality and the loss of regional detail, which may obscure important distributional features of climate change impacts. Applying the results of top-down modeling indiscriminately to constituent regions risks the ecological fallacy [5].

We assert that the scale-related problems associated with either approach are much more fundamental than those described above; the latter are merely symptomatic of a deeper failure to account for the inherent complexity of the entire system being modeled. We assume, for purposes of discussion, that most human-environment systems (defined below) that are of interest to global environmental change researchers are complex. That is, total system behavior cannot reliably be predicted by linear combinations of the system’s (microscopic) sub-components [20]. Moreover, behavior in system sub-components may be constrained or controlled by larger (macroscopic) components. By
266 EMERGENT PROPERTIES OF SCALE IN GLOBAL ENVIRONMENTAL MODELING
Figure 12.1: Hypothetical aggregation error by up-scaling non-linear relations between crop yield and precipitation (Source: Easterling [19]).
extension, the difficulty of whole-system predictability from below and the potential existence of control structures from above indicate emergent properties a priori. Such properties may arise from collective physical or biological processes, or from collective institutional thought. Moreover, they may be important levers for understanding and manipulating system behavior.

In the remainder of this paper, we develop the argument that complex human-environment systems are hierarchical. We then review the concept of emergence within the context of scale and suggest a theoretical basis for emergent properties of scale based on observed self-organizing traits of hierarchical complex systems. Two applications of our theoretical reasoning are presented. First, we discuss the role of the scale emergence of institutional structures that may increase the vulnerability of the U.S. agricultural system to climate change. Second, we describe the process of agroecosystem reorganization in the aftermath of Hurricane Mitch in Honduras as an example of the dissolution of vulnerable emergent structures as the result of a large external forcing. Finally, we argue that proper representation of scale-related emergence in models of complex, hierarchically structured human-environment systems may improve the capability of such models to anticipate the surprises in store from global environmental change.2
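The aggregation error illustrated in Figure 12.1 is, at root, Jensen’s inequality: running a nonlinear response on averaged inputs is not the same as averaging the responses run at each site. A minimal numerical sketch (the saturating yield curve and the site precipitation values are invented for illustration, not drawn from any of the cited models):

```python
import math

def yield_response(precip_mm):
    # Hypothetical concave (saturating) yield response to precipitation, t/ha.
    return 8.0 * (1.0 - math.exp(-precip_mm / 400.0))

# Invented precipitation totals at five "representative" modeling sites (mm).
sites = [150.0, 300.0, 450.0, 600.0, 900.0]

# Bottom-up: run the model at every site, then average the simulated yields.
mean_of_yields = sum(yield_response(p) for p in sites) / len(sites)

# Aggregated: average the precipitation first, then run the model once.
yield_of_mean = yield_response(sum(sites) / len(sites))

# For a concave response, aggregating first overstates the average yield.
aggregation_error = yield_of_mean - mean_of_yields
print(f"{mean_of_yields:.3f} {yield_of_mean:.3f} {aggregation_error:+.3f}")
```

The sign of the error tracks the curvature of the response: concave relations make the aggregated run optimistic, convex relations pessimistic, and the error vanishes only when the relation is linear.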
2 Our arguments are developed standing shamelessly on the shoulders of pioneers in systems analysis – especially the work of Prigogine, Weinberg and Boulding – and,
Human-Environment Systems as Complex and Hierarchical

The concept of human-environment systems is referenced extensively in this paper, which warrants a brief explanation of its meaning. At a high level of abstraction, there is no physical separation of ecosystems from socio-economic systems. Both contain dissipative structures in a stable state far from thermodynamic equilibrium [22]. Both are open systems that require steady energy and material gradients. In the case of ecosystems, the energy source is the sun. In the case of socioeconomic systems, energy sources range from wood and fossil fuels, to the kinetic energy of falling water, to the heat of fission and several technologies that are just over the horizon. Both systems consume energy-laden, low-entropy materials for self-maintenance and the production (or reproduction) of new material forms. They also excrete or exhale high-entropy heat and material. Both are self-regulating, but by different mechanisms – ecosystems by natural feedbacks (abiotic controls) and socio-economic systems by human institutions (markets, cultural norms and other social institutions). Material and energy exchange so freely between ecosystems and economic systems as to make the boundaries between them indistinguishable except by convention (for example, the boundary between the market and non-market shown in Fig. 12.2). Hence, ecosystems provide services in the form of renewable natural resources (e.g., food, fiber, and esthetics) to economic systems. The point here is that the same laws that govern ecosystem dynamics operate as constraints3 on socioeconomic systems – the result is similarities of spatial organization between the two, as shall be argued below. It is neither useful nor productive to reduce ecosystems and economic systems into independent parts in models of global environmental change processes – a principle that, fortunately, is well ordained in both the bottom-up and top-down modeling paradigms.
Hereafter, the term human-environment system is used to denote spatio-temporal assemblages of ecosystems, their abiotic controls (climate, mostly), and the socioeconomic systems that derive benefit from ecosystems.

It has been suggested that the components of ecosystems and economic systems are structured hierarchically in space and time [23, 24, 25, 26, 27]. A hierarchy is a partially ordered set of objects ranked according to asymmetric relations among themselves [28]. Descriptors useful in distinguishing the levels of a hierarchy include, for example, larger/smaller than, faster/slower than, to embed/to be embedded in, and to control/to be subject to control. In ecosystems, the behavior of lower levels in the hierarchy (e.g., individual organisms) is explained by biological mechanisms such as photosynthesis, respiration,
more recently, in systems ecology – especially the work of Levin, Holling and Clark. We were greatly influenced by a thoughtful review of emergent properties by Wiegleb and Bröring [21].
3 We do not mean to argue that economic behavior is “determined” by thermodynamic laws in the same sense as ecosystem behavior, but rather that thermodynamic laws impose challenges to human ingenuity that force adaptive change.
and assimilation [29]. At higher levels, abiotic processes such as climate variability and biogeochemical cycling impose constraints on lower level biological mechanisms. In economic systems, the lower levels of the hierarchy are understood best in terms of rapidly changing production functions of individual firms. The higher levels impose constraints on individual firms in the form of slower moving nationally and internationally extensive features such as rates of inflation, prices, and national income [2].
Figure 12.2: The Human-Environment System (Source: after Ayres [22]).
Hierarchy theory evolved out of general systems thinking to explain the multi-tiered structure of certain types of production systems. The theory, in simplified terms, posits that the most useful way to deal with problems of global change in a multi-scaled complex system is to understand how the elements of the system behave at a single time-space level of scale [24]. That level (Figure 12.3, Level 0) will itself be a component of a higher level
(Level +1). Level +1 dynamics are generally slower moving and greater in extent than those of Level 0; they form boundary conditions that serve to constrain the behavior of Level 0. Level 0 may then be divided into constituent components at the next lower level (Level −1). Processes operating at Level −1 are generally faster moving and lesser in spatial extent than Level 0; they provide the mechanisms that regulate Level 0 behavior. They are represented as state variables (dynamic driving forces) in models of Level 0 [24]. Thus, the goal of hierarchy theory is to understand the behavior of complex systems by structuring models to capture dynamics at the next lower and higher scales of resolution. It provides a framework for testing for the property of emergence discussed in the sections below.
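The three-level scheme can be caricatured in a few lines of simulation; the dynamics and parameter values below are invented for illustration and are not taken from O’Neill’s formulation. The higher level supplies a slowly varying boundary condition, the next lower level is summarized as a fast rate fed upward as a state variable, and the focal level sits between them:

```python
import math

def simulate(steps=200):
    biomass = 1.0  # Level 0: the focal state variable
    trajectory = []
    for t in range(steps):
        # Higher level (+1): a slowly varying boundary condition, e.g. a
        # climate-set carrying capacity; it constrains Level 0 from above.
        capacity = 10.0 + 2.0 * math.sin(2.0 * math.pi * t / 100.0)
        # Next lower level: fast mechanisms (e.g. photosynthesis/respiration)
        # summarized upward as a single rate, a state variable for Level 0.
        growth_rate = 0.3
        # Level 0 dynamics: growth driven from below, bounded from above.
        biomass += growth_rate * biomass * (1.0 - biomass / capacity)
        trajectory.append(biomass)
    return trajectory

traj = simulate()
# After a transient, the Level 0 state tracks the slow higher-level
# constraint rather than growing freely: constraint from above,
# mechanism from below.
print(min(traj[100:]), max(traj[100:]))
```

The design choice mirrors the text: the modeler writes equations only for the focal level, importing the level above as a boundary condition and the level below as aggregated rates.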
Figure 12.3: Levels of a hierarchy (Source: after O’Neill [24])
Emergence: Real or Imagined? One of the more controversial concepts that came to prominence in the general systems theorizing of the 1960s was the notion of emergent properties. The essence of emergent properties is captured best in the psychologist Wundt’s famous quote: “the whole is greater than the sum of its parts”. Emergence literally is the process of coming into being. It suggests that the interaction of pattern and process at a smaller, faster scale produces a fundamentally new organization at a larger, slower scale [30]. It is described in several sciences including physics, chemistry, atmospheric sciences, economics, psychology and political science, but as a property of scale it receives special attention in ecosystem ecology [29, 31] where it is accepted, often uncritically, as an organizing principle of ecosystem form and function.
The notion of emergent properties of scale is important for a number of reasons. First, some ecologists argue that emergent properties may serve as useful indicator variables for monitoring the stability and integrity of ecosystems in the face of rapid external forcing [25]. Management policies that manipulate emergent properties manipulate whole-ecosystem behavior instead of fractions thereof. The same applies a priori to the human components of human-environment systems. Second, the inclusion of emergent properties in modeling may reduce model uncertainty, which improves the anticipation of surprise (discussed below).

There has been a long-standing debate in the systems analysis literature over the existence of emergent properties. Wiegleb and Bröring [21] classify the two poles in this debate as denial of emergence and ontological emergence. In the former, denial of emergence, there is no emergence: to recognize emergent properties is to concede defeat in one’s attempt to understand and model a system. The extreme version of this position is that everything in science is explained by the theory of quarks. Einstein, in his investigation of Brownian motion, asserted that we could predict the state of a system were we to know enough about the state of every molecule in the system – but he added in a footnote, “Dear reader, do not believe that you can do that”. We believe it is misleading to think that we might “correctly” model global-scale biophysical and human response to climate change simply by aggregating fine-scale mechanistic explanation properly. It is similarly misleading to think that the reduction of all modeling to the finest level possible gives purely mechanical and, thus, reliable explanation. Levin [29] points out that at very fine spatial and temporal scales, stochastic phenomena or deterministically driven chaos make systems unpredictable; hence the replacement of classical mechanics by quantum mechanics at the smallest scales.
Similarly, at the scale of individual human agents, behavior is not deterministic but rather stochastic. Were complex human-environment systems to be understood in purely deterministic terms, then a strong interpretation of Prigogine’s [32] analogy – that we are all merely actors in the pages of a cosmic history book already written – would apply. There is no wiggle-room for emergence in this view, and we reject it out of hand from further consideration in this essay.

At the other pole – ontological emergence – the notion of emergence takes on a metaphysical dimension. Ontological differences between objects, literally differences between existence and nonexistence, are used to define the endpoints of emergence. To emerge is to come into being. Practically speaking, debates over ontological emergence have focused on the precise conditions under which inanimate objects become animate ones. Vitalism, historically argued to be the almost magical or teleological emergence of life from the assemblage of cellular parts, was the object of much attention in the biological sciences prior to the 20th century. Fundamental advances in cell biology have all but eradicated vitalism as a meaningful construct. It is now recognizable only in the psychology
literature, where concepts of “soul” and “self” remain irreducible [21]. Lovelock’s [33] Gaia Hypothesis of a “secretly” self-regulating biosphere may be the quintessential example of ontological emergence on a grand scale: intriguing by circumstantial evidence but demanding of scientific blind faith to hold together. The concept of ontological emergence seems to have moved beyond all the scientific disciplines, save psychology and political science. It is now essentially a debate about ethics and human agency.

Between the two poles of denial of emergence and ontological emergence lies the view of epistemological emergence. Epistemological emergence takes several forms according to Wiegleb and Bröring [21], but the form most relevant to this discussion is hierarchical (synchronous) emergence and its special cases of scale and model emergence. This form accepts the validity of emergence in principle but does not demand an explanation of ontological differences.

Hierarchical emergence

Hierarchical emergence is based on the presumption that the system of interest is structured hierarchically in time-space, as per the above discussion. It can be thought of as the appearance of properties at a high level of scale that are not derivable a priori from the behavior of constituent (low-level) components [23]. Hierarchical emergence is the result of stochastic lower-level interactions (elaborated below). It is high-level order emerging from low-level apparent disorder. Low-level disorder is more apparent than real, because the interacting elements are too complex and numerous to be practical to model deterministically. Emergent properties as such may constrain low-level interactions while themselves being buffered from the random upward pulses of change from lower levels of scale, as long as the whole system remains in a steady state. Long-term commodity price trends in a market economy illustrate the point.
They strongly regulate producer and consumer behavior while being largely unaffected in the long term by short-term fluctuations in supply and demand. However, systems theory suggests that large upwelling singularities or bifurcations may disrupt these relations between levels of scale [25]. An example of such a singularity in an economic system is the tendency for the sudden appearance of a radical new technological innovation to reorder the relations of production so as to disrupt the downward propagation of price signals [27]. We will return to this point below.

Wiegleb and Bröring [21] note that shifts in scale by the observer/modeler may produce more than averages or constants. These shifts may make homogeneity out of heterogeneity and vice versa. They may bring order out of seeming disorder simply by magnifying or de-magnifying the resolution and extent of the data. This is scale emergence. In Levin’s [29] example of the unpredictable nature of fine-scale stochasticity in a system, an increase in the level of scale may collect enough objects in the system to regularize their behavior to the point that statistical generalizations are possible.
Figure 12.4: Cluster (square) aggregation method.
A related and somewhat obscure problem in geography and landscape ecology research is the “modifiable areal unit problem” (MAUP) [34, 35]. A shift in the sizes or shapes of the geographic units used to assemble data for modeling can, in and of itself, create homogeneity out of heterogeneity and vice versa. In principle, the MAUP is demonstrated most effectively with gridded data.

We tested for the existence of the MAUP with a gridded data set used to simulate climate variability effects on crop yields in the southeastern USA. The EPIC process crop model was run with 36 years of observed climate and 2 × CO2 climate change (the climate changes were supplied by the CSIRO general circulation model described in Mearns [36]), with management and environmental data allocated to a regular grid network consisting of 288 0.5° grid boxes imposed on the southeastern USA (Fig. 12.4). A series of maize yield simulations was performed at different levels of spatial aggregation of the input data. Two different aggregation strategies were used, which simply altered the shapes of the aggregation units. One strategy aggregated all units
Figure 12.5: Linear aggregation method.
in square clusters, illustrated in Figure 12.4a–e. The other strategy aggregated all units in linear clusters, illustrated in Figure 12.5a–e. For both the square clustering strategy and the linear clustering strategy, Level 1 was the finest resolution, with independent model simulations for each of the 288 grid boxes. At Level 2, independent simulations were run for 72 aggregates: 2 × 2 for the square clustering and 1 × 4 for the linear clustering. At Level 3, there were 18 aggregates (4 × 4 and 2 × 8, respectively). At Level 4, there were 2 aggregates (12 × 12 and 6 × 24). At Level 5, both aggregation strategies produced a single 12 × 24 aggregate. Yields at each of the levels were averaged across the network to create one average yield per level for purposes of comparison (Table 12.1). The temporal coefficients of variation of the modeled yields at each level were handled similarly. At each level of aggregation, a Student’s t-test was performed on the paired (linear vs. cluster) yields to determine whether there were statistically significant differences.
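The geometry of this experiment can be mimicked with a synthetic stand-in for the gridded inputs: partition a 12 × 24 grid into square versus linear blocks of equal area, average the inputs within each block, and push each block mean through a nonlinear response. Only the aggregation shapes follow the text; the input field and response function below are invented:

```python
import math

ROWS, COLS = 12, 24  # 288 grid cells, matching the southeastern USA network

# Invented input field: a smooth north-south gradient plus east-west texture.
grid = [[500.0 + 15.0 * r + 40.0 * math.sin(0.7 * c) for c in range(COLS)]
        for r in range(ROWS)]

def response(x):
    # Hypothetical saturating yield response to the gridded input.
    return 8.0 * (1.0 - math.exp(-x / 600.0))

def mean_yield(block_rows, block_cols):
    """Partition the grid into block_rows x block_cols units, run the
    response once per unit on the unit-mean input, and average the yields."""
    yields = []
    for r0 in range(0, ROWS, block_rows):
        for c0 in range(0, COLS, block_cols):
            cells = [grid[r][c]
                     for r in range(r0, r0 + block_rows)
                     for c in range(c0, c0 + block_cols)]
            yields.append(response(sum(cells) / len(cells)))
    return sum(yields) / len(yields)

# Level 3 of the experiment: 18 units, square (4 x 4) vs. linear (2 x 8).
square = mean_yield(4, 4)
linear = mean_yield(2, 8)
print(f"square {square:.4f}  linear {linear:.4f}  diff {square - linear:+.5f}")
```

Because the two block shapes average away different parts of the input variability before the nonlinear response is applied, the two aggregate yields differ even though the underlying field and its total are identical — the essence of the MAUP.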
Table 12.1: Southeastern USA simulated corn yield response to 1960–1995 observed climate and CSIRO climate change (2 × CO2) at different levels and shapes of units of aggregation: yields averaged over time and aggregation units.

                 Level 1 (288)     Level 2 (72)      Level 3 (18)      Level 4 (2)       Level 5 (1)
Climate          Cluster  Linear   Cluster  Linear   Cluster  Linear   Cluster  Linear   Cluster  Linear
Observed  yield  6.29     6.29     6.28     6.23     6.00     6.15     5.69     5.38     5.77**   5.41**
          CV     .077     .077     .077     .077     .056     .048     .049     .042     .044     .038
2 × CO2   yield  6.19     6.19     6.00**   5.83**   5.78**   5.97**   5.65**   5.71**   5.84**   5.38**
          CV     .080     .080     .079     .083     .069     .056     .052     .050     .048     .050

** = statistically significant at the 0.01 alpha level. Note: values in parentheses are the number of aggregation units at a given level.
As can be seen in Table 12.1, the simple change in the shape of the clustering of network components (grid cells) has minimal effects on mean yields at most levels, although some of the differences were statistically significant in the climate change case. Level 5 yields show the greatest differences (in both the observed climate and climate change cases). At Level 5, the full effects of the different spatial paths the two aggregation strategies take with the input data are fully accumulated, which probably explains the significance of the differences. We suggest, although do not test, that even these Level 5 yield differences would probably narrow in a Monte Carlo type design that considers several different aggregation strategies.

Although examples in the literature [34, 35] do show that the MAUP can indeed be a significant influence at the aggregate level, the resulting emergent features probably represent a failure to account for hierarchical structures in the system rather than a meaningful high-level property of the system [35]. For example, a gridded aggregation scheme that happens upon the exact pattern that maximizes differences in climate between clusters might be expected to show substantial accumulated yield differences with respect to any other aggregation pattern, because of the great importance of climate in distinguishing large-scale differences in crop productivity. This may explain why some of the differences between yields in the above example were significant at one level of aggregation but not at others.

These simple explanations of scale-related emergence beg the question: Do they truly reveal underlying process, or are they merely an aggregation sleight of hand with the data? This question raises an even more fundamental one that strikes at the heart of the meaning of epistemological emergence. Is an emergent property constituted of “new material” in the system, or is it simply a relationship between the system and the observer?
To wit: is the “market” a tangible object that appears out of thin air, or a revelation that appears when the system is viewed in a certain space-time context? The answer to this last question is that properties “emerge” for a particular observer because he or she could not or did not predict their appearance, for lack of data or understanding or both [37]. That same property may be perfectly predictable to another observer. We assert that properties emerge at different levels of scale due to imperfections in how the observer/
modeler interprets the scales at which the various driving forces of a system operate. Emergent properties appear when different objects of a system are brought into a logical context [21]. Emergent properties may also appear upon accounting more fully for system complexity [23] (Wilbanks, personal communication). For example, additional data may greatly alter the understanding of relations among the components of a non-linear system. Hence, complexity can be scaled, as can space and time. Moreover, several studies of hierarchical complex systems reviewed in a recent National Research Council report [38] conclude that complexity may be best understood at those portions of the scale that traverse the transition from deterministic to stochastic understanding. This transition tends to occur at meso-scales (regional) rather than macro-scales. Such scales are a natural focal point for increased modeling effort.

It seems reasonable to conclude that emergent properties of scale are best articulated in terms of observer-system relations and not as “new material” in the system. The new-material view casts us back into the muddled debate over ontological differences. Until fairly recently, an underlying process-based justification of epistemological emergence, useful as guidance for the modeling of hierarchically structured human-environment systems, was lacking. Recent thinking about processes of self-organization and dissipative structures has been applied to hierarchy theory, casting a new light on the framing of epistemological emergence.

Self-Organization as a Dynamical Theoretical Basis for Scale-Related Emergence

The development of a theoretical explanation for the existence of emergent properties of scale in human-environment systems requires the unraveling of the very meaning of complexity. Most simple systems consisting of a small number of elements can be understood structurally and modeled mechanistically (Fig. 12.6, Region I).
They represent “organized simplicity.” Full description of a simple two-object system requires only four equations: one for each object to describe how the object behaves by itself (an “isolated” behavior equation), one to describe how the behavior of each object affects that of the other (an “interaction” equation), and one to describe how the system behaves absent the objects (a “field” equation). As the number of objects increases, there is still only one field equation and one isolated equation per object. The number of interaction equations, however, increases by the “square law of computation” (2^n, where n is the number of objects). For example, 10 objects require 2^10 = 1,024 interaction equations. Complex human-environment systems consist of many times more than 10 objects.

As noted above, human-environment systems are not purely deterministic at any level of scale. But it is possible to simulate generalized human behavior
Figure 12.6: Complexity versus randomness.
at small scales, using agent-based modeling and other stochastic approaches, as a simple stochastic system with a finite number of possible outcomes. This is analogous to simulating organized simplicity in Figure 12.6. However, as the number of agents increases with scale, the complexity of interactions rises. A model that tracks every agent’s interactions with every other agent, and with the environment, rapidly eludes comprehension and computation, even with massively parallel processing. Yet, at the extreme of large numbers of agents at low levels of spatial and temporal scales, the interactions within the population are random and therefore predictable in a statistical sense by their aggregation to high levels of scale (Fig. 12.6, Region II). In such populations, the “law of large numbers” dictates that the probability that a property of any one object in the population will deviate significantly from the average value of that property across all N objects scales as 1/√N. Hence, the larger the value of N, the more predictable the property becomes. According to Weinberg [37], such populations are complex but random (lacking structure) in their behavior, such that they are regular enough to be studied statistically – they represent “unorganized complexity.”

The problem with this typology is that most of the domain of human-environment systems lies between organized simplicity and unorganized complexity. It is the domain of “organized complexity” (Fig. 12.6, Region III). The understanding and modeling of land use change illustrates this problem. At low levels of scale, Turner and Meyer [39] posit that a wide range of social driving forces influence land use and land cover change, including economics, culture, location, politics, and environment. Change in the structure of familial inheritance of land may be as important as change in land rent in the determination of land use.
Such features are embedded in highly reduced-form structures, or are totally absent, in large-scale models of land use change. At high levels of scale, Turner and Meyer [39] argue that Ehrlich and Holdren’s [40] IPAT relation – defined as: Intensity of human impact on the environment (I) = Population (P) × Affluence (A) × Technology (T) – usefully
explains large-scale patterns and trends of land use change. Elements of IPAT are easily identified in global IAMs such as the IMAGE 2.0 model [41], which simulates land use change as a function of change in agricultural demand, approximated by changes in population and per capita income. Turner and Meyer [39] imply, however, that the absence of low-level driving forces in most extant land use models results in potentially serious prediction errors at small scales.

Systems of land use change that cross several space-time scales lie between the structure and precision of organized simplicity and the lack of structure and large aggregates of unorganized complexity. Too complex for analytical solution and too structured and organized for pure statistical treatment, this is the domain of complex human-environment systems and the logical focal point of integrated assessment models. Weinberg [37] refers to these as “medium number systems,” subject to the law that large fluctuations, irregularities, and discrepancies will occur more or less regularly. We assert an equivalence between medium number systems and the meso-scale of human-environment systems. This is the (regional) scale at which the modeling of complexity is most tractable a priori.

The propensity for large fluctuations, irregularities and discrepancies is a necessary condition for self-organization, a feature of medium number systems that plays a central role in explaining the appearance of emergence in those systems. Buenstorf [27] describes self-organization as a dynamic process whereby structures and properties emerge at the system level out of intense interactions among system components. Normally, self-organization is discussed in terms of physical systems. An example is the difficulty of upscale propagation of local governing equations of climate, because extreme nonlinearity is encountered in the aggregation process [42].
For a system to exhibit self-organizing tendencies, it must receive steady inputs of energy and/or material (i.e., be far from thermodynamic equilibrium) and be subject to powerful positive and negative feedbacks across levels of scale that are spawned by nonlinear relations among components. At low levels in the system, the sum total behavior of components exhibits large random fluctuation [27]. Buenstorf [27], citing Prigogine and Stengers [43], argues that positive feedback is necessary to amplify random fluctuation at low levels of a system. Change in price signals (positive feedback) prompted by technological innovations (random fluctuation) is often given as a prime example in economic systems. The amplification of low-level random fluctuation results in the self-selection of high-level properties that constrain low-level behavior. This is a necessary condition for the emergence of high-level structure out of low-level randomness (stochasticity). Prigogine and Stengers [43] argue that negative feedbacks serve as system checks that help maintain the system structure. Furthermore, Holling [44] argues that self-organized hierarchical systems that are in a steady state are highly vulnerable to complete reorganization when subjected to strong external forcings (e.g., climate change).
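These conditions can be caricatured in a toy stochastic model, with every rule invented for illustration: two competing low-level configurations, additive noise (random fluctuation), a positive feedback that amplifies whichever configuration gets ahead, and a bounding negative feedback that then stabilizes the selected macro-state:

```python
import random

random.seed(2)  # reproducible run

def run(steps=400):
    a, b = 0.5, 0.5  # shares of two competing low-level configurations
    for _ in range(steps):
        # Low-level random fluctuation (stochastic micro-behavior).
        a += random.gauss(0.0, 0.01)
        # Positive feedback: increasing returns amplify whichever
        # configuration is currently ahead, at the other's expense.
        a += 0.05 * (a - b)
        # Negative feedback: shares are bounded, checking runaway growth
        # and stabilizing whatever structure the amplification selects.
        a = min(max(a, 0.02), 0.98)
        b = 1.0 - a
    return a, b

a, b = run()
# From a symmetric start, the system self-selects one dominant
# configuration: high-level order out of low-level randomness.
print(round(a, 2), round(b, 2))
```

Which configuration wins depends on the noise seed; what is robust is that the symmetry breaks and one macro-state locks in, the qualitative signature of self-organization the text describes.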
EMERGENT PROPERTIES OF SCALE IN GLOBAL ENVIRONMENTAL MODELING
From the practical standpoint of policy-motivated modeling, the failure to match the temporal and spatial scales of human activities with those of nature has been an abiding problem in the science of climate and society interactions [4, 44]. This failure stems in part from modelers’ misinterpretation, at the whole-system level, of self-organizing pulses of change welling up from finer scales in the system and of the emergent structures that these pulses create. The challenge in applying concepts of self-organization to socioeconomic components embedded in human-environment systems is the identification of the positive and negative feedbacks that give rise to emergent properties at high levels of scale in a spatial hierarchy.
An application to the problem of the vulnerability of the USA agricultural production system to climate change
The co-evolution of national agricultural production systems with global climate change illustrates the problem of mismatches of scale. The expansion of global agricultural capacity apace with the expansion of demand is one of the great success stories of the 20th century. So successful have the combined outputs of national production systems been that real costs of production worldwide have declined, causing real food prices to decline in turn for more than a generation [45]. This trend will likely continue into the first few decades of the 21st century. The consensus position is that the USA agricultural production system will be resilient in the face of climate change [46]. This is partly justified by the historical experience of industrialized agricultural production in dealing successfully with challenges analogous to those posed by climate change – such as the historical success in stoking production to meet the challenge of feeding a growing and increasingly wealthy global population with surpluses to spare [47]. This position is backed by many global modeling studies (summarized in Adams et al. [48]).
But is the system as resilient as we might think? A rough sketch of the vulnerability of the USA agricultural production system to climate change from a hierarchical systems perspective might suggest otherwise. Virtually any agricultural production system is an example of a complex human-environment system with scale-related emergent properties, and industrialized production systems even more so. Such a system embeds a dissipative structure far from thermodynamic equilibrium in that large throughputs of low entropy energy (solar radiation and fossil fuel) and material (nutrients, seed, pesticides), plus labor, are required for maintenance and production. The system is hierarchical in scale, with individual farm enterprises that manage agroecosystems at the lowest levels of scale, a portfolio of agribusinesses and a network of regional and national institutions (e.g., cooperative research and extension, commodity crop boards) that nurture and constrain at the next level, and national and international markets that stabilize the system from their perch at the highest levels of scale.
Figure 12.7: Substitution of energy for labor in American agriculture in the 20th century.
The origins of the hierarchical structure of the contemporary USA agricultural production system are many, but much can be traced back to a remarkable series of technological innovations spanning more than a century, over which the system’s complexity has steadily risen. Many of those innovations centered on the use of energy in production. Historically, anywhere in the world that the value of labor has risen relative to other production inputs, the substitution of relatively cheaper energy inputs for relatively more expensive labor has taken place [49]. Figure 12.7 shows the substitution of energy for labor inputs in USA agriculture in the 20th century. This substitution was enabled by technological innovations leading to labor-saving mechanization. More recently, technical innovations in developed countries have led to a second energy revolution in agricultural production. For the past two decades production has followed a trajectory of decreasing energy intensity, measured as energy input per unit of output, where output is either a unit of mass (yields or total production) or total value of production. Interestingly, these two energy-related trends parallel the transformations that take place as ecosystems self-organize into a stable state. Although normally evaluated by its large-scale effects on production, the drive for technological innovation is inescapably a localized process. Hayami and Ruttan [50] posit that technological innovation in agriculture is induced endogenously. According to Hayami and Ruttan’s “induced innovation hypothesis” [50], as factor scarcity arises, increasing factor prices signal it. As
those price increases persist, strong signals are conveyed to the agricultural research establishment to develop new technologies to substitute for more costly old ones in order to hold down costs of production. Because regional variations in resource endowments lead to regional differences in farmers’ comparative advantage, the pattern of induced innovation will likewise be regionally distributed [50]. Farmers at one location have quite different sets of technological needs than those at another. The development of successful hybrid corn varieties illustrates the point. In the USA, each state land grant (agricultural) university has its own state corn breeder. The process of corn hybridization represents the “invention of a method of inventing” varieties adapted to each growing region [50]. That is, the successful development and diffusion of commercial hybrid corn varieties has been accomplished by the evolution of a complex research, development, distribution and educational system. This system has depended on close cooperation among public sector research and extension agencies, a series of public, semi-public and cooperative seed-producing organizations, and private sector research and marketing agencies. The research that produces these innovations is conducted through a series of local and regional institutions such as agricultural universities and their research and extension stations and the various regional institutions of the Consultative Group on International Agricultural Research. Even the agricultural research efforts of the private sector are largely regionally distributed. As long as producer and factor prices do not exceed critical thresholds, only a steady stream of continual fine-tuning adjustments takes place, aimed at adapting cropping systems to challenges in their local production environments (i.e., spatial and temporal variability in pests, climate, soils).
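The threshold logic of the induced innovation hypothesis can be caricatured in a few lines of code. The prices, threshold and factor shares below are hypothetical, chosen only to illustrate the switch from fine-tuning to induced substitution:

```python
# Stylized sketch of Hayami-Ruttan induced innovation: when the price of a
# scarce factor (labor) rises past a threshold relative to a substitute
# (energy), innovation shifts the input mix toward the cheaper factor.
# All numbers are hypothetical.

def input_mix(labor_price, energy_price, threshold=2.0):
    """Return (labor_share, energy_share) of production inputs."""
    if labor_price / energy_price > threshold:
        # Innovation induced: substitute energy for labor.
        return (0.3, 0.7)
    # Below the threshold, only incremental fine-tuning; the mix is stable.
    return (0.7, 0.3)

print(input_mix(1.0, 1.0))  # labor-intensive mix persists
print(input_mix(3.0, 1.0))  # relative labor scarcity induces substitution
```

The discontinuous jump in the input mix is the point: below the threshold the system only fine-tunes, while a persistent relative price signal triggers a qualitative change – the kind of low-level fluctuation that can propagate upward through the hierarchy.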
Occasionally, however, one particular technical innovation rises above the others in importance and may create a bifurcation in output of sufficient magnitude as to prompt the emergence of new institutional structures that downwardly regulate lower levels, usually through government programs and prices [51]. The application of nitrogen fertilizers to corn, and eventually to a wide range of row crops, begun shortly after World War II in the USA and coupled with the hybridization of corn (described above), brought about a remarkable global upsurge in yields and production. From the whole-system perspective, a major innovation such as this appears as a fluctuation welling up randomly from many regions as the innovation rapidly diffuses to the farms comprising the low-level components of the spatial hierarchy. Collectively, these random fluctuations of induced technological innovation and the bifurcations of output they produce give rise to emergent, self-regulating (feedback) mechanisms at higher levels in the spatial hierarchy. In this sense the system is nonlinear. As noted above, one such self-regulating mechanism is price. But price is well represented in most agricultural impact assessment models and is sufficiently obvious as not to be very interesting here. Another, perhaps less obvious, self-regulating mechanism is the collective goals of society that find expression in national agricultural policies. In the
USA, the dominant goal of agricultural production policy is the stabilization of interannual agricultural output [47]. It is reasonable to conjecture that such a goal of stabilization emerged as a national-scale policy out of concern over the increases in the variability of crop yields that necessarily accompany technologically-driven increases in mean crop yields. A plethora of government programs carry out the goal of stabilizing agricultural output. These include, for example, commodity crop insurance programs, the Conservation Reserve Program and numerous tax exemptions accorded uniquely to farmers. These programs represent strong positive feedbacks to local production. They encourage farmers to take on more climate risk than they otherwise would [52]. One program aimed at stabilization, but that may in the long run increase the vulnerability of the USA agricultural system to climate change, is that of government-guaranteed crop prices [53]. These price support programs stipulate that farmers must establish an average yield of a specific crop on a base acreage over a specified period of time (usually five years) in order to qualify for payments. While such a program encourages stability in the types of crops planted and lowers risk to farmers, it is a strong disincentive to flexible changes in the mix of crop species being planted by participating farmers. The net effect of such “safety net” programs is to encourage the expansion of high-revenue crops – often the most sensitive to climate variation – into climatically marginal areas for those crops and, as such, to help dictate the spatial pattern of cropping systems. As climate changes and society absorbs the losses of farmers who continue to grow increasingly climate-inappropriate crops, the system actually becomes less stable, or more vulnerable to major malfunction.
At some point the climate changes will accumulate to the point where stabilization programs make no sense to society at large, resulting in abandonment and system-wide reorganization. In an ecosystems context, Holling [44] calls this a process of “creative destruction” that accompanies his view of “nature evolving” (as opposed to “nature as equilibrium”). The same concept seems roughly to apply to the co-evolving climate and agricultural production systems. What lessons for modeling can be drawn from this example? First and foremost, if appearances can be deceiving, they will be when a complex human-environment system is poorly specified in a model. The probability of deception is directly related to the degree to which the system is nonlinear. If the system being modeled is hierarchically structured, then principles of hierarchy theory should be applied. There is no one “correct scale” for the study of a hierarchical human-environment system, and the choice of modeling scale in integrated assessment modeling has too often been arbitrary. O’Neill’s [24] recommendation that models of spatially hierarchical systems should include state variables from one level of scale below the level of interest and constraining variables from one level above should be followed. That is, models should be parameterized over long enough time scales to capture the evolution of self-organizing structures that span spatial
scales. Fine resolution (low levels of scale), fast time-step state variables that capture stochastic processes, such as the sudden (but predictable) appearance of innovation, should be combined with coarse resolution (high levels of scale), slow time-step variables that capture total system features such as markets and national and international production and trade policies. This type of modeling approach is likely to reveal the emergent structures of scale that feed back to constrain and stabilize low-level component dynamics. Incorporating such modeling structures in integrated assessment modeling should yield more realistic estimates of whole-system vulnerability to external shocks such as climate change.
Emergent properties, vulnerability and resilience of land use systems with environmental forcing: the case of Hurricane Mitch and Honduran agriculture
Complex system theories as developed for ecosystems might apply to the land-use system (see Loucks [54], Conway [55], and Fresco [56]). The validity of assumptions about the constancy of the land-use system can be examined by drawing on the work of Holling [57, 58]. He proposed two properties to describe a system’s reaction to a disturbance: stability and resilience. A system is stable when, after a temporary disturbance, it can return to its previous equilibrium, whereas resilience refers to the ability to absorb changes of state variables and still persist after a disturbance. Figure 12.8 illustrates the concept of resilience [57]. Over time, connectedness builds as patterns of land use are locked in by the emergence of large-scale controls such as prices, infrastructure and government policy. The system becomes brittle and vulnerable to external forcing, as in the case of an extreme climate event. When a hurricane strikes, stable land-use patterns change rapidly during the short disturbance phase.
Stored capital is lost but the system’s complexity remains, guided partly by persistent large-scale properties that emerged in the previous equilibrium phase, such as price mechanisms and government programs (disaster relief). Loss of connectedness (complexity) describes the reorganization phase that is initiated afterwards. Subsequently, the system returns to its former equilibrium, although key variables will certainly have changed, and a new (re)colonization phase will start, which will result in stable land-use patterns while capital and connectedness build up. Those patterns are not equal to the starting position; i.e., the land-use system is unstable, but driven by the same set of variables, and thus resilient. The properties of the land-use system and their relationship with ecosystem theories can be illustrated with results obtained from the application of a land-use change model called the CLUE modeling framework to Honduras, simulating the effects of hurricane Mitch.
[Figure 12.8 depicts four phases – 1 Colonization, 2 Stable patterns, 3 Disturbance, 4 Reorganization – arranged along two axes: stored capital (little to much) and connectedness (weak to strong).]
Figure 12.8: Four land-use system functions and the flow of events between them (Source: redrawn from Kok and Winograd [64] and adapted from Holling [57: fig. 23, p481]).
The CLUE modeling framework
The CLUE (Conversion of Land Use and its Effects) modeling framework [59, 60, 61] is best described as a dynamic, multi-scale land-use change model that explores the spatially explicit effects of future land-use changes using scenarios. At the highest aggregation level (usually a country), yearly demand is calculated, based, among other factors, on expected changes in population, income, diet composition and export/import developments. Changes in demand are subsequently allocated in a two-step top-down procedure with an intermediate ‘optimal’ resolution, based on statistical parameters. The finest resolution is a rectangular grid, sized between 150 × 150 m and 15 × 15 km. Relationships between land-use types and a large set of potential land-use determinants are quantified using multiple regression techniques.
Mitch scenario
Within days after hurricane Mitch struck Central America on October 26th, 1998, the first images became available of the path of the hurricane, total rainfall, damaged roads and bridges, etc. [62, 63], together with information on production losses of, e.g., banana plantations (Internet, various sources). The speed with which data became available provided the opportunity to apply the CLUE modeling framework to Honduras and project the long-term impact of the hurricane. A detailed description of the assumptions of the scenario is given by Kok and Winograd [64]. Main assumptions include: the heavy rainfall that accompanies the hurricane temporarily excludes areas
[Figure 12.9 panels: Mitch 1999, Base 1999, Mitch 2005, Base 2005.]
Figure 12.9: Short-term and long-term effects of hurricane Mitch on cover percentage of maize in Honduras. Depicted are modeled changes in cover between 1993 and 1999, one year after the hurricane (left), and between 1993 and 2005 (right), for the Base scenario (bottom) and Mitch scenario (top). Changes are classified in decreasing cover (white), increasing cover (dark gray and black), and no change (medium gray). Lines indicate the flooded area. Each grid cell is 7.5 × 7.5 km (Source: redrawn from Kok and Winograd [64]).
from production; a large number of bridges and roads are destroyed; import and export are reduced; economic growth is depressed. The assumed lower income results in a lower demand for beef and thus for agricultural area; export and import reductions have the same effect. In Figure 12.9, the short-term and long-term effects of hurricane Mitch on land-use patterns in Honduras are illustrated with hot-spots for maize. One year after the hurricane, land-use changes are clearly more dynamic in the hurricane scenario than in the Base scenario. The maize area has increased significantly in extent outside the area that was flooded, while land-use patterns in the Base scenario are far more stable. Seven years later, however, the effects of the hurricane have diminished. Although dynamics in the Mitch scenario are somewhat higher, the overall patterns of the two scenarios are similar. The important lesson of this example is the powerful influence of large-scale controls (national demand for agricultural land) on small-scale land use features (area planted to maize) during the reorganization of those features after external forcing (hurricane Mitch). Although the specific patterns of land use are changed by the hurricane, the small-scale complexity, embedded in relations between regional resource endowments and farming ingenuity and steered by aggregate national forces of demand for agricultural outputs, recreates a landscape whose functions are similar to pre-disturbance conditions but are now better adapted to environmental conditions (maize is farther from flood-prone areas).
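The lesson above – a fixed large-scale control reorganizing small-scale pattern – can be sketched with a toy CLUE-style proportional allocation. The suitability scores below stand in for the model's regression-based relations and are purely hypothetical:

```python
# Toy sketch of CLUE-style top-down allocation: a national demand total
# (the large-scale control) is distributed over grid cells in proportion
# to a local suitability score. Suitability values are hypothetical.

def allocate(national_demand, suitability):
    """Allocate demand to cells in proportion to their suitability."""
    total = sum(suitability)
    return [national_demand * s / total for s in suitability]

# Five cells; a disturbance (e.g., flooding) zeroes out cell 0's
# suitability, and the unchanged national demand reorganizes onto others.
before = allocate(100.0, [4.0, 3.0, 2.0, 1.0, 0.0])
after = allocate(100.0, [0.0, 3.0, 2.0, 1.0, 4.0])
print(before)  # [40.0, 30.0, 20.0, 10.0, 0.0]
print(after)   # [0.0, 30.0, 20.0, 10.0, 40.0]
```

Both allocations sum to the same 100 units: the aggregate control is conserved while the spatial pattern reorganizes, mirroring the post-Mitch recovery of a functionally similar but spatially rearranged landscape.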
Are Issues of Scale and Surprise Connected?
Kates and Clark [65], summarizing the work of Holling [66], state that surprises occur when perceived reality departs sharply from expectations, when causes turn out to be different than originally thought. Models often inform our expectations of the future. However, it is doubtful that even the best modeling strategies will accurately and precisely forecast surprise. To do so would require making it tractable to decrease fundamental uncertainty, defined as a situation so novel that no current model of any kind applies [67]. But it is possible to decrease model uncertainty, defined as the surprise that arises when model outcomes fail to predict actual events because of the way the observer/modeler connected the model’s variables together [67]. Kates and Clark [65] point out a number of techniques that are useful in anticipating surprise. These include historical retrodiction (learning from experience with past unexpected events), contrary assumptions (sensitivity analysis of the assumptions underlying projections), asking experts their opinions, and imaging (imagining an unlikely event and constructing a plausible scenario to justify it). Kates and Clark [65] also suggest models of system dynamics to anticipate surprise, but no mention is made of the issue of scale in their essay. It would appear from the discussion in previous sections that there is, in fact, a strong connection between emergent properties of scale and surprise. Emergence is equated with surprise the first time it is discovered in the process of contemplating the additional complexity of a system. Afterward it may be demonstrated that the observer need not have been surprised at all once the system is better understood, which, quoting Weinberg [37], “is a small consolation if the emergent property was an explosion.” Hence, some aspects of surprise may arise purely from our modeling mistakes.
When things go wrong in a model, when linearity is assumed of a nonlinear system, for example, society is ripe for surprise. Mark Twain satirizes this point in Life on the Mississippi [68]: “In the space of one hundred and seventy six years the lower Mississippi has shortened itself two hundred and forty two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that…seven hundred and forty two years from now the Lower Mississippi will be only a mile and three quarters long…There is something fascinating about science. One gets such wholesome returns of conjecture out of such a trifling investment of fact.” The application of the principles of hierarchy theory and self-organization to modeling could improve our understanding and prediction of the conditions that produce surprise. In short, it could predict the potential for surprise.
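Twain's arithmetic is easy to reproduce, and doing so makes the failure mode explicit: it is the linearity assumption, not the river, that breaks down.

```python
# Figures taken from the quoted passage: 242 miles lost over 176 years.
rate = 242.0 / 176.0           # "a trifle over one mile and a third" per year
loss_742_years = rate * 742.0  # Twain's extrapolation horizon

print(round(rate, 3))            # 1.375 miles per year
print(round(loss_742_years, 2))  # 1020.25 miles of shortening
# A river roughly 1022 miles long would thus be left "a mile and three
# quarters long"; extrapolate a little further and its length goes
# negative -- the linear model, applied outside its domain, predicts
# an impossibility.
```

The "wholesome returns of conjecture" come precisely from projecting a locally reasonable rate across a span where the system's nonlinear constraints must eventually dominate.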
Surprise detected in a model provides the opportunity not to be surprised in practice. It is particularly important that model structures be developed to relate small-scale stochastic processes to the dynamics of larger-scale system features, since those are the features that provide stabilizing feedbacks to the small scale. Holling’s [44] hypothesis of encroaching system “brittleness” stemming from prolonged stabilizing feedbacks should be tested. Brittleness pre-conditions surprise. In the case of agricultural vulnerability to climate change, the potential seeds of surprise may lie in the localized nature of endogenous technical change and in how large-scale institutions emerge to stabilize the fruits of small-scale technical change. Stability in this case is gauged by the long-run dependability of yields of the highest paying (and most climate-sensitive) crops. As noted above, enforced stability breeds brittleness, possibly setting up an unanticipated “climate surprise” that would surprise even the Intergovernmental Panel on Climate Change, which rates the probability that the global agricultural production system would be seriously hampered by climate change as medium to low. The foregoing is, of course, pure conjecture, as we have not done the necessary modeling, but it is certainly not implausible.
Conclusion
A reasonable concluding question to ask is: To what extent have notions of scale emergence penetrated integrated assessment modeling of human-environment systems? In our view the answer is very little, and then only superficially. A recent study by Darwin [17] illustrates the point. He noted major differences in the results of his model of the response of the global agriculture system to climate change depending on whether the model was run with the regions of the world disaggregated or aggregated to the global level. He clearly demonstrated the importance of scale resolution in IAMs and raised the question of the underlying causes of the scale differences he encountered. This concern raises a serious question about the validity of projecting the model’s results onto the dynamic, hierarchically structured real world. This question applies to all IAMs that do not embed hierarchical structure. The ultimate worth of IAMs is the value of their predictions as usable knowledge to decision makers across a range of levels of spatial scale. IAMs must provide information to decision makers at the levels of scale that concern them [17]. Cash and Moser [69] reiterate this point. Highly aggregated predictions of climate change impacts are little more than idle curiosities to local and regional decision makers. From the foregoing discussion we conclude that the problems of IAMs may extend well beyond the simple problem of matching the scale of aggregation
of results with the scale of the decision. We assert that the typical structure of current IAMs, whether bottom-up or top-down, does not anticipate emergent properties of scale. Having the ability to detect emergent properties is fundamentally necessary to the revelation of surprise and the further improvement of modeling. Most global IAMs are specified to represent structure and process at the highest level of the human-environment system hierarchy. Communication between levels of scale is primarily top-down (e.g., the determination of local land use change by change in global agricultural demand), with very few examples of process information being conveyed from lower levels to the top. In the few examples of IAMs that are bottom-up (e.g., Parry et al. [9], Rosenzweig and Parry [10]), pulses of information from low levels to top levels are deterministic and feedbacks from the top levels (prices) are not explicitly coupled to low-level behavior. Root and Schneider’s [6] proposed “strategic cyclical scaling paradigm” (iterative scaling up and scaling down of models of different scales of a system) is praiseworthy as a start in bringing individually modeled components of the human-environment hierarchy together and testing for the existence of emergent properties. Techniques being developed to integrate variables simultaneously across levels of scale, such as multi-level modeling [70], accomplish the goal of strategic cyclical scaling in a single model. Multi-level modeling potentially provides novel insight into the evolution of explicit small-scale process into regional patterns and then into large-scale emergent properties. The scope of integrated assessment modeling has grown enormously over the past half decade. It now embraces efforts ranging widely from modeling individual agent behavior at a small scale to modeling material, energy and economic exchanges through the biosphere and economy at a global scale.
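The multi-level idea mentioned above – integrating variables simultaneously across levels of scale – can be sketched in miniature. The sketch below is a deliberately crude two-pass estimate with hypothetical data, using a shared large-scale slope and region-level intercepts, not a full mixed-effects estimator:

```python
# Minimal sketch of multi-level structure: farm-level observations nested
# within regions, modeled as y = intercept[region] + slope * x, with the
# slope shared across levels. Data and slope are hypothetical.

def fit_multilevel(data, slope):
    """data: {region: [(x, y), ...]}; slope assumed known and shared.
    Returns the region-level intercepts (the meso-scale term)."""
    intercepts = {}
    for region, obs in data.items():
        residuals = [y - slope * x for x, y in obs]
        intercepts[region] = sum(residuals) / len(residuals)
    return intercepts

data = {
    "north": [(1.0, 3.0), (2.0, 5.0)],  # consistent with intercept 1, slope 2
    "south": [(1.0, 7.0), (2.0, 9.0)],  # consistent with intercept 5, slope 2
}
print(fit_multilevel(data, slope=2.0))  # {'north': 1.0, 'south': 5.0}
```

Even in this toy form, fine-scale observations and a coarse-scale relation are estimated within one model, which is the property that lets multi-level approaches trace small-scale process into regional pattern.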
The arguments of this paper extend to all forms of integrated assessment modeling of problems embedded in systems that necessarily traverse more than one level of spatial and/or temporal scale. Finally, the time is at hand to take seriously the arguments of ecologists and systems theorists that not only does scale matter but that dealing with issues of scale explicitly is a fundamental requirement for modeling real world complexity. Absent a multi-scale structure, there is the strong possibility that IAMs are themselves doomed to be a source of surprise.
References
1. Kates, R. W., 1983. Part and apart: issues in humankind’s relationship to the natural world. In: F. K. Hare (ed.). The Experiment of Life: Science and Religion. Toronto: University of Toronto Press: 151–180.
2. Gibson, C., E. Ostrom, and T. K. Ahn, 2000. “The concept of scale and the human dimensions of global change: a survey.” Ecological Economics, 32: 217–239.
3. Wilbanks, T. J., and R. W. Kates, 1999. “Global change in local places.” Climatic Change, 43: 601–628.
4. Clark, W. C., 1985. “Scales of climate impacts.” Climatic Change, 7: 5–27.
5. Harvey, L. D. D., 2000. “Upscaling in global change research.” Climatic Change, 44: 225–263.
6. Root, T. L., and S. H. Schneider, 1995. “Ecology and climate: research strategies and implications.” Science, 269: 334–341.
7. Rosenberg, N. J. (ed.), 1993. Towards an Integrated Impact Assessment of Climate Change: The MINK Study. Dordrecht, The Netherlands: Kluwer Academic.
8. Parry, M. L., J. E. Hossell, P. J. Jones, T. Rehman, R. B. Tranter, J. S. Marsh, C. Rosenzweig, G. Fischer, I. G. Carson, and R. G. H. Bunce, 1996. “Integrating Global and Regional Analyses of the Effects of Climate Change: A Case Study of Land Use in England and Wales.” Climatic Change, 32: 185–198.
9. Parry, M. L., C. Rosenzweig, A. Iglesias, G. Fischer, and M. Livermore, 1999. “Climate change and world food security: a new assessment.” Global Environmental Change, 9 (Supplemental issue): 51–68.
10. Rosenzweig, C., and M. Parry, 1994. “Potential Impact of Climate Change on World Food Supply.” Nature, 367: 133–138.
11. Alcamo, G. J., J. Kreileman, J. S. Krol, and G. Zuidema, 1994. “Modelling the Global Society-Biosphere-Climate System: Part 1: Model Description and Testing.” Water, Air, and Soil Pollution, 76: 1–35.
12. Dowlatabadi, H., 1995. “Integrated assessment models of climate change.” Energy Policy, 23(4): 289–296.
13. Nordhaus, W., 1992. “An Optimal Transition Path for Controlling Greenhouse Gases.” Science, 258: 1315–1319.
14. Edmonds, J. A., D. Barns, M. Wise, and M. Ton, 1995. “Carbon coalitions: the cost and effectiveness of energy agreements to alter trajectories of atmospheric carbon dioxide emissions.” Energy Policy, 23: 309–336.
15. Schneider, S. H., W. E. Easterling, and L. O. Mearns, 2000. “Adaptation: Sensitivity to Natural Variability, Agent Assumptions, and Dynamic Climate Changes.” Climatic Change, 45: 203–221.
16. Mendelsohn, R., W. Nordhaus, and D. Shaw, 1996. “Climate Impacts on Aggregate Farm Value: Accounting for Adaptation.” In: W. E. Easterling (guest ed.). Agricultural and Forest Meteorology, Special Issue on Adapting North American Agriculture to Climate Change, 80: 55–66.
17. Darwin, R., 1999. “A farmer’s view of the Ricardian approach to measuring agricultural effects of climatic change.” Climatic Change, 41: 371–411.
18. Mearns, L. O., T. Mavromatis, E. Tsvetsinskaya, C. Hays, and W. Easterling, 1998. “Comparative Response of EPIC and CERES Crop Models to High and Low Resolution Climate Change Scenarios.” Journal of Geophysical Research, 104(D6): 6623–6646.
19. Easterling, W. E., 1997. “Why regional studies are needed in the development of full-scale integrated assessment modeling of global change processes.” Global Environmental Change, 7: 337–356.
20. Gallagher, R., and T. Appenzeller, 1999. “Beyond reductionism.” Science, 284: 79.
21. Wiegleb, G., and U. Bröring, 1996. “The position of epistemological emergentism in ecology.” Senckenbergiana Maritima, 27: 179–193.
22. Ayres, R. U., 1994. Industrial metabolism: theory and policy. In: B. R. Allenby and D. J. Richards (eds.). The Greening of Industrial Ecosystems. Washington, DC: National Academy Press: 23–37.
23. Allen, T. F. H., and T. B. Starr, 1982. Hierarchy: Perspectives for Ecological Complexity. Chicago: University of Chicago Press: 310 pp.
24. O’Neill, R. V., 1988. Hierarchy theory and global change. In: T. Rosswall, R. G. Woodmansee, and P. G. Risser (eds.). Scales and Global Change. SCOPE 35. Chichester: John Wiley: 29–45.
25. Müller, F., 1996. “Emergent properties of ecosystems: consequences of self-organizing processes?” Senckenbergiana Maritima, 27: 151–168.
26. Perrings, C., 1998. “Resilience in the dynamics of economy-environment systems.” Environmental and Resource Economics, 11: 503–520.
27. Buenstorf, G., 2000. “Self-organization and sustainability: energetics of evolution and implications for ecological economics.” Ecological Economics, 33: 119–134.
28. Shugart, H. H., and D. L. Urban, 1988. Scale, synthesis, and ecosystem dynamics. In: L. R. Pomeroy and J. J. Alberts (eds.). Concepts in Ecosystem Ecology. Springer-Verlag: 279–289.
29. Levin, S. A., 1992. “The problem of pattern and scale in ecology.” Ecology, 73: 1943–1967.
30. Peterson, G. D., 2000. “Scaling ecological dynamics: self-organization, hierarchical structure and ecological resilience.” Climatic Change, 44: 291–309.
31. Pomeroy, L. R., E. C. Hargrove, and J. J. Alberts, 1988. The ecosystem perspective. In: Concepts in Ecosystem Ecology. Ecological Studies 67. Springer-Verlag: 1–17.
32. Prigogine, I., 1985. The rediscovery of time. In: S. Nash (ed.). Science and Complexity. London: Science Reviews Ltd: 23.
33. Lovelock, J. E., 1972. “Gaia as seen through the atmosphere.” Atmospheric Environment, 6: 579–580.
34. Openshaw, S., and P. J. Taylor, 1981. The modifiable areal unit problem. In: Quantitative Geography: A British View. Routledge & Kegan Paul: 60–69.
35. Jelinski, D. E., and J. Wu, 1996. “The modifiable areal unit problem and implications for landscape ecology.” Landscape Ecology, 11: 129–140.
36. Mearns, L. O., W. Easterling, and C. Hays, 2001. “Comparison of Agricultural Impacts of Climate Change Calculated from High and Low Resolution Climate Model Scenarios: Part I. The Uncertainty due to Spatial Scale.” Climatic Change, 51: 131–172.
37. Weinberg, G., 1975. An Introduction to General Systems Thinking. New York: Wiley.
38. National Research Council, 1999. Global Environmental Change: Research Pathways for the Next Decade. Committee on Global Change Research. Washington: National Academy Press: 616 pp.
39. Turner, B. L., II, and W. B. Meyer, 1991. Land use and land cover in global environmental change: considerations for study. ISSG 130: 669–679.
40. Ehrlich, P., and J. Holdren, 1971. “Impact of population growth.” Science, 171: 1212–1217.
41. Leemans, R., and G. J. van den Born, 1994. “Determining the potential distribution of vegetation, crops and agricultural productivity.” Water, Air, and Soil Pollution, 76: 133–161.
42. Lorenz, E. N., 1964. “The problem of deducing climate from the local governing equations.” Tellus, 16: 1–11.
43. Prigogine, I., and I. Stengers, 1984. Order out of Chaos. Boulder, Colorado: New Science Library.
44. Holling, C. S., 1994. “Simplifying the complex: the paradigms of ecological function and structure.” Futures, 24: 598–609.
45. FAO, 1995. World Agriculture: Towards 2010. N. Alexandratos (ed.). West Sussex: John Wiley and Sons: 14–23.
46. IPCC, 2001. Climate Change 2001: Impacts, Adaptation, and Vulnerability. Cambridge, UK: Cambridge University Press.
47. Easterling, W. E., 1996. “Adapting North American agriculture to climate change in review.” Agricultural and Forest Meteorology, 80: 1–56.
48. Adams, R. M., B. H. Hurd, S. Lenhart, and N. Leary, 1999. “Effects of global climate change on agriculture: An interpretative review.” Climate Research, 11(1): 19–30.
49. Heady, E.
O., 1984. Economic impacts of energy prices on agriculture. In: G. Stanhill (ed.). Energy and Agriculture. New York: Springer-Verlag; 10–23. 50. Hayami, Y., and V. Ruttan, 1985. Agricultural Development: An Agricultural Perspective. Baltimore: The Johns Hopkins University Press. 51. Witt, U., 1997. “Self-organization and economics – what is new?” Structural Change Economic Dynamics, 8: 489–507. 52. Gardner, B. L., R. Just, R. Kramer, and R. Pope, 1984. Agricultural policy and risk. In: P. J. Barry (ed.). Risk Management in Agriculture. Ames: Iowa State University Press: 231–261.
SCALING IN INTEGRATED ASSESSMENT 291
53. Lewandrowski, J., and R. Brazee, 1992. Government farm programs and climate change: a first look. In: J. Reilly and M. Anderson (eds.). Economic Issues in Global Climate Change: Agriculture, Forestry, and Natural Resources. Boulder: Westview Press: 132–147. 54. Loucks, O. L., 1977. “Emergence of research on agro-ecosystems.” Annual Review of Ecology and Systematics, 8: 173–192. 55. Conway, G. R., 1987. “The properties of agroecosystems.” Agricultural Systems, 24: 95–117. 56. Fresco, L. O., 1995. Agro-ecological knowledge at different scales. In: J. Bouma, A. Kuyvenhoven, B. A. M. Bouman, J. C. Luyten, H. G. Zandstra (eds.). Eco-regional approaches for sustainable land use and food production. Dordrecht: Kluwer Academic Publishers: 133–141. 57. Holling, C. S., 1992. “Cross-scale morphology, geometry, and dynamics of ecosystems.” Ecological Monographs, 62: 447–502. 58. Holling, C. S., D. W. Schindler, B. Walker, and J. Roughgarden, 1995. Biodiversity in the functioning of ecosystems: an ecological synthesis. In: C. Perrings, K-G. Maeler, C. Folke, C. S. Holling, and B-O. Jansson (eds.). Biodiversity loss: economic and ecological issues. Cambridge: Cambridge University Press: 44–83. 59. Kok, K., A. Farrow, A. Veldkamp, and P. H. Verburg, 2001. “A method and application of multi-scale validation in spatial land use models.” Agriculture, Ecosystems and Environment, 85: 223–238. 60. Veldkamp, A., and L. O. Fresco, 1996. “CLUE-CR: an integrated multi-scale model to simulate land use change scenarios in Costa Rica.” Ecological Modelling, 91: 231–248. 61. Verburg, P. H., G. H. J. De Koning, K. Kok, A. Veldkamp, and J. Bouma, 1999a. “A spatial explicit allocation procedure for modelling the pattern of land use change based upon actual land use.” Ecological Modelling, 116: 45–61. 62. CIAT, 1998. The Honduras database. Collected in the framework of a project entitled ‘Methodologies for integrating data across geographical scales in a data rich environment. 
Examples from Honduras’. Centro Internacional de Agricultura Tropical, Cali. 63. CINDI, 1998. Central American Disaster Atlas. Centre for Integration of Natural Disaster Information, USGS. http://cindi.usgs.gov/ events/mitch/atlas/index.html or through CINDI homepage http:// cindi.usgs.gov/. 64. Kok, K., and M. Winograd, 2002. “Modelling land-use change for Central America, with special reference to the impact of hurricane Mitch.” Ecological Modelling, 149: 53–69. 65. Kates, R. W., and W. C. Clark, 1996. “Expecting the unexpected.” Environment, 38: 6–11, 28–34. 66. Holling, C. S., 1986. The resilience of terrestrial ecosystems: local surprise and global change. In: W. C. Clark and R. E. Munn (eds.).
292 EMERGENT PROPERTIES OF SCALE IN GLOBAL ENVIRONMETAL MODELING
67. 68. 69.
70.
Sustainable Development of the Biosphere. Cambridge: Cambridge University Press: 292–317. Sendzimir, J., S. Light, and K. Szymanowka, 1999. “Adaptively understanding and managing for floods.” Environments, 27: 115–136. Twain, M, 1883. Life on the Mississippi. J. R. Boston, Osgood and Company. Cash, D. W., and S. C. Moser, 2000. “Liking global and local scales: designing dynamic assessment and management processes.” Global Environmental Change, 10: 109–120. Polsky, C., and W. E. Easterling, 2001. “Adaptation To Climate Variability and Change in the US Great Plains: A Multi-Scale Analysis of Ricardian Climate Sensitivities.” Agriculture, Ecosystems, and Environment, 85: 133–144.
13 Complexity and Scales: the Challenge for Integrated Assessment
MARIO GIAMPIETRO
National Institute of Research on Food and Nutrition (INRAN), Unit of Technological Assessment, Rome, Italy
Abstract
"Complexity", in science, can be linked to the need to use, in parallel, non-reducible models (= the coexistence of non-equivalent descriptive domains) to obtain a useful representation of a given phenomenon. This is always the case when dealing with: (1) nested hierarchical systems (= systems in which relevant patterns are detectable only on different space-time scales); and (2) socioeconomic systems (= systems in which the agents are not only non-equivalent observers but also reflexive).
Keywords: Hierarchy Theory, Complexity, Scaling, Integrated Assessment, Non-equivalent descriptive domains, Post-Normal Science.
Acknowledgments
I would like to thank Matthias Lüdeke, Kozo Mayumi, and Jerome Ravetz for their valuable comments on previous versions of this paper, as well as the participants of the Matrix workshop on Scaling Issues in Integrated Assessment for their useful comments on my presentation. Special thanks are also due to Frank Nelissen for his outstanding job of editing the original manuscript.
Introduction – The Epistemological Dimension of Complexity
An intriguing definition of "complexity", given by Rosen [1: p229], can be used to introduce the topic of this paper: "a complex system is one which allows us to discern many subsystems ... (a subsystem is the description of the system determined by a particular choice of mapping only a certain set of its
294 COMPLEXITY AND SCALES
qualities/properties) ... depending entirely on how we choose to interact with the system". Two important points follow from this quote: (1) "complexity" is a property of the appraisal process rather than a property inherent in the system itself. That is, Rosen points to an epistemological dimension of the concept of complexity, related to the unavoidable existence of different relevant "perspectives" (= relevant attributes, in the language of integrated assessment) that cannot all be mapped at the same time by a single modeling relation. (2) Models can see only a part of reality: the part the modeler is interested in. Put another way, any scientific representation of a complex system reflects only a subset of our possible relations (potential interactions) with it. "A stone can be a simple system for a person kicking it when walking in the road, but at the same time be an extremely complex system for a geologist examining it during an investigation of a mineral site" [1]. This implies that, when using formal systems of inference, we should always be aware that the ideal gas equation (PV = nRT) can say a lot about some properties of gases, but it says nothing about how they smell. Smell can be a non-relevant system quality (attribute) for an engineer calculating the range of stability of a container under pressure. On the other hand, it could be a very relevant system quality for a chemist doing an analysis, or for a household living close to the chemical plant. The unavoidable existence of non-equivalent views about which set of "relevant qualities" should be considered when modeling a natural system is a crucial point in the discussion of science for sustainability.
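Rosen's point about the gas equation can be made concrete in a few lines: a formal model encodes only the qualities it was built to map, and anything outside that encoding simply does not exist within its descriptive domain. A minimal sketch (the container and its numbers are invented for illustration):

```python
# The ideal gas law PV = nRT maps only a chosen subset of a gas's
# qualities: pressure, volume, temperature, amount of substance.
R = 8.314  # universal gas constant, J/(mol K)

def pressure_pa(n_mol, temp_k, volume_m3):
    """Solve PV = nRT for P (in pascals)."""
    return n_mol * R * temp_k / volume_m3

# One mole of gas at room temperature in a 10-litre container
# (hypothetical numbers, for illustration only):
p = pressure_pa(1.0, 298.15, 0.010)
print(round(p))  # roughly 2.5e5 Pa, i.e. about 2.4 atm

# Qualities outside the chosen encoding (e.g., smell) are not merely
# unanswered by this model; they are not representable in it at all.
```

The engineer's and the chemist's "gas" are, in this sense, different systems: each descriptive domain carries its own set of encoding variables.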
In fact, scientific tools that proved very useful in the past – e.g., reductionist analyses, which were able to send a few humans to the moon – will not necessarily be adequate to provide all the answers to the new concerns expressed by humankind today – e.g., how to sustain a decent life for 10 billion humans on this planet. When discussing sustainability we are dealing with issues where: (1) high levels of uncertainty affect the modeling of the various dynamics of interest; and (2) different but legitimate perspectives on what is relevant and what is "better" can be found among the stakeholders. Under these conditions it is very unlikely that reductionist analyses can be used to indicate "the best" possible course of action (For whom? On which hierarchical level? For how long? How can we be sure that the predictions are right?). Another interesting way to point to the deep epistemological implications of complexity in relation to scale was given by Mandelbrot [2] in his seminal paper in Science, "How long is the coast of Britain?". His provocative claim was that it is impossible to measure the length of the coastline of Britain without first specifying the scale of the map that will be used to represent it. The more detailed the map, the longer the resulting measurement of the same segment of coast. This implies that, in the final analysis, the numerical assessment of the length of a given segment of coast will be affected by the choice of the map used for the assessment. Obviously, this pre-analytical choice will depend on why the analysis is done in the first
place. Mandelbrot's conclusion is that, when dealing with fractal objects (and, as argued later in this paper, the same applies to nested hierarchical systems), one deals with objects that do not have a "clear-cut identity". When characterizing them with numerical variables, the numerical assessment will always reflect not only their intrinsic characteristics (the "real length" of the coastline?) but also the goals (interests and beliefs) of the analysts, reflected in the "arbitrary" selection of the mapping procedure used to describe the object. "Epistemological complexity" is in play every time the interests of the observer (the goal of the mapping) affect what the observer sees (the formalization of a scientific problem and the resulting model). That is, it is in play whenever pre-analytical steps (= (1) the choice of the "space-time scale" at which reality should be observed; and (2) the prior definition of what should be considered "the system of interest" in relation to a given selection of encoding variables) affect the resulting numerical representation of the system's qualities. If we accept this definition, we have to face the obvious fact that basically any scientific analysis of sustainability is affected by this predicament. Modern developments in physics (quantum theory) proved that even the simplest equations and laws of mechanics, validated by many successful applications over the last few hundred years, remain valid only under a certain set of assumptions (only within a certain range of space-time windows at which they can be applied). As soon as we try to stretch them across too many scales, they break down. In spite of this basic problem, there are many applications of reductionist scientific analysis in which the problems implied by "epistemological complexity" can be ignored.
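Mandelbrot's coastline argument can be reproduced numerically. In the sketch below (illustrative only: a Koch curve stands in for the coast, and the "scale of the map" is the opening of a pair of dividers walked along the curve), the measured length keeps growing as the ruler shrinks:

```python
import math

def koch_curve(iterations):
    """Generate the vertices of a Koch curve built on the unit interval."""
    points = [(0.0, 0.0), (1.0, 0.0)]
    for _ in range(iterations):
        new_points = []
        for (x1, y1), (x2, y2) in zip(points, points[1:]):
            dx, dy = (x2 - x1) / 3.0, (y2 - y1) / 3.0
            a = (x1 + dx, y1 + dy)          # one third along the segment
            b = (x1 + 2 * dx, y1 + 2 * dy)  # two thirds along the segment
            # Peak of the equilateral bump: rotate (b - a) by 60 degrees.
            px = a[0] + 0.5 * dx - math.sin(math.pi / 3) * dy
            py = a[1] + 0.5 * dy + math.sin(math.pi / 3) * dx
            new_points += [(x1, y1), a, (px, py), b]
        new_points.append(points[-1])
        points = new_points
    return points

def ruler_length(points, ruler):
    """Measure the curve with dividers of fixed opening `ruler`."""
    length, last = 0.0, points[0]
    for p in points[1:]:
        if math.dist(last, p) >= ruler:
            length += ruler
            last = p
    return length

coast = koch_curve(6)
for ruler in (0.3, 0.1, 0.03, 0.01):
    print(ruler, round(ruler_length(coast, ruler), 2))
```

Each refinement of the "map" yields a strictly longer coastline: the number is a joint product of the object and of the pre-analytical choice of scale.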
These are cases in which the particular relation between "observer" and "observed" can be neglected without losing general validity for the relative numerical assessments. This requires agreement, without reservations, among the various stakeholders that will use the scientific output on: (1) the choice of a "space-time scale" at which reality should be observed (e.g., when adopting a "ceteris paribus" description, the system is not "becoming" something else at a speed that would require a complementary evolutionary analysis); and (2) a prior definition of what should be considered "the system of interest" (e.g., what are the relevant qualities to be considered in the model). Put another way, reductionist science works well in all cases in which power is effective in ignoring or suppressing legitimate but contrasting views on the validity of the pre-analytical problem structuring within the population of "users" of scientific information (Jerome Ravetz, personal communication). The text of this paper is divided into two parts. Part 1 presents general concepts emerging in the field of complexity which are related to the concepts of hierarchical systems and scaling: (1) Holons and holarchies (related to the special nature of "adaptive nested hierarchical systems"). (2) "Non-equivalent descriptive domains" (why we need to use in
parallel different models). (3) "Non-reducibility" and "incommensurability" of indicators obtained when using models belonging to non-equivalent descriptive domains (why we need to move to multicriteria analysis). Part 2 deals with the practical implications of the set of concepts discussed in Part 1. In particular it deals with: (1) The root of the epistemological predicament of sustainability. Describing the sustainability issue in scientific terms requires compressing an infinite amount of information (that which would be required to describe the various trade-offs reflecting different perspectives and different "qualities" of reality on different scales) into a finite information space (that used in problem structuring and decision making in a finite time). This "mission impossible" requires a new paradigm for science for sustainability (Post-Normal Science). (2) The need for a different conceptualization of "sustainable development". We should move (as suggested by Herbert Simon [3]) from the paradigm of "substantive rationality" to that of "procedural rationality". That is, IF we acknowledge that: (a) uncertainty and ignorance are unavoidably linked to our scientific representation of sustainability trade-offs; and (b) incommensurability among the relative indicators of performance is entailed by the existence of the different "value systems" found among the stakeholders; THEN the only option left is to look for a participatory procedure of decision making based on an iterative process of problem structuring and "value judgement". This procedure is aimed at the social negotiation of satisficing solutions (to use again a term proposed by Herbert Simon [4]) rather than the computation of optimal solutions. Within this new context, scientists should try to help society make this transition rather than representing an obstacle.
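The contrast between computing "the best" and negotiating "good enough" can be sketched in a few lines. The alternatives, indicators, and thresholds below are invented for illustration; only the logic of optimizing versus satisficing matters:

```python
# Hypothetical land-use alternatives scored on two incommensurable
# indicators (all names and numbers are illustrative, not from data).
alternatives = {
    "intensive":    {"income": 9, "ecological_integrity": 2},
    "mixed":        {"income": 6, "ecological_integrity": 6},
    "conservation": {"income": 3, "ecological_integrity": 9},
}

def optimal(criterion):
    """'Substantive rationality': maximize a single indicator."""
    return max(alternatives, key=lambda a: alternatives[a][criterion])

def satisficing(thresholds):
    """'Procedural rationality': keep every alternative that clears the
    minimum levels negotiated by the stakeholders."""
    return [a for a, scores in alternatives.items()
            if all(scores[c] >= t for c, t in thresholds.items())]

print(optimal("income"))                # "intensive"
print(optimal("ecological_integrity"))  # "conservation"
print(satisficing({"income": 5, "ecological_integrity": 5}))  # ["mixed"]
```

Each single-criterion "optimum" contradicts the other; only the negotiated thresholds, which encode the stakeholders' value judgements, select a satisficing solution.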
PART 1 – Holarchies, Non-Equivalent Descriptive Domains, and Non-Reducible Assessments

Self-organizing systems are made of nested hierarchies and therefore entail non-equivalent descriptive domains
All natural systems of interest for sustainability (e.g., complex biogeochemical cycles, ecological systems and human systems when analyzed at different levels of organization and scales above the molecular one) are "dissipative systems" [5, 6, 7]. That is, they are self-organizing, open systems, operating away from thermodynamic equilibrium. Because of this they are necessarily "becoming systems" [8], which in turn implies that they: (i) operate in parallel on several hierarchical levels (where patterns of self-organization can be detected only by adopting different space-time windows of observation); and (ii) will change their identity in time. Put another way, the very concept of self-organization in dissipative systems (the essence of living and evolving systems) is deeply linked to the ideas of: (1) parallel levels of organization on different space-time scales; and (2) evolution (which implies that the identity of the state space required to describe their behavior in a useful way is changing in time).
Actually, the idea of parallel levels of organization is directly linked to the definition of hierarchical systems given by O'Neill [9]: a dissipative system is hierarchical when it operates on multiple spatio-temporal scales – that is, when different process rates are found in the system. Another useful definition of hierarchical systems, referring to their analysis, is: "systems are hierarchical when they are analyzable into successive sets of subsystems" [10: p468] – in this case we can consider them near-decomposable. Finally, a definition of hierarchical systems more related to the epistemological dimension: "a system is hierarchical when alternative methods of description exist for the same system" [11]. The existence of different levels and scales at which a hierarchical system is operating implies the unavoidable existence of non-equivalent ways of describing it. For example (Fig. 13.1), we can describe a human being at the microscopic level to study the process of digestion of nutrients within her/his body. When we look at a human being at the scale of an intestinal cell we can even take a picture of it with a microscope (Fig. 13.1A). However, this type of description is not compatible with the description required to catch the quality "face" of the same human being (e.g., needed when applying for a driving license), the one given in Figure 13.1B. No matter how many pictures of a given human being we take with a microscope, the type of "pattern recognition" of that person which refers to the cell level (obtained at its relative space-time window with a microscope) is not equivalent to the description of human beings ("pattern recognition") required to catch the quality "face".
The ability to detect the identity of the face of a given person is therefore an "emergent property" linked to: (I) the choice of a certain space-time window for looking at the system; and (II) the choice of a given system for mapping system qualities (in this case our pattern recognition is based on using light at the wavelengths typical of human vision). The face presented in Figure 13.1B cannot be detected when adopting a description linked to a different space-time window (either that of an individual cell – Fig. 13.1A – or the much larger scale adopted by someone looking at the social interactions of our person – Fig. 13.1C). The same face cannot be detected either if we look at the same head using X-rays (as done in the example given in Fig. 13.1D) – a different mechanism for mapping the system's characteristics. In conclusion, in Figure 13.1 we have four different examples of "pattern recognition" which, in a way, reflect the existence of "previous goals" of the analyst. That is, the pattern presented in Figure 13.1A reflects the goal of studying the functioning of digestive cells. The pattern presented in Figure 13.1B reflects the goal of identifying the face of the person. The pattern presented in Figure 13.1C reflects the goal of studying the social relations of the person. The pattern presented in Figure 13.1D reflects the goal of performing a medical check on the selected person. Any recognized pattern reflects not only some of the characteristics of the observed system (since in any given person there is a virtually infinite number of patterns overlapping
across scales waiting to be recognized), but also the relation that the observed system has with the observer.
[Figure labels – Assessment (1) = 116 kg/year per capita; Assessment (2) = 1,015 kg/year per capita; Assessment (3) = 1,330 kg/year per capita; Assessment (4) = 345 kg/year per capita]
Figure 13.1: Non-equivalent descriptive domains needed to obtain non-equivalent pattern recognition in nested hierarchical systems.
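The same point – that different space-time windows of observation reveal non-equivalent patterns – can be mimicked with a toy signal. The two rhythms below are hypothetical stand-ins for "process rates" at two hierarchical levels; the numbers are invented for illustration:

```python
import math

# A toy signal combining a fast (daily) and a slow (yearly) rhythm,
# standing in for processes operating at two hierarchical levels.
def activity(t_hours):
    slow = math.sin(2 * math.pi * t_hours / (24 * 365))  # yearly pattern
    fast = math.sin(2 * math.pi * t_hours / 24)          # daily pattern
    return slow + fast

series = [activity(t) for t in range(24 * 365)]

# Observation window of one day: averaging filters out the fast rhythm,
# so only the slow yearly pattern remains detectable.
daily_means = [sum(series[d * 24:(d + 1) * 24]) / 24 for d in range(365)]

# Hour to hour, the signal is dominated by the fast daily rhythm;
# day to day, the averaged signal changes only slowly.
max_hourly_step = max(abs(series[t + 1] - series[t]) for t in range(47))
max_daily_step = max(abs(daily_means[d + 1] - daily_means[d])
                     for d in range(364))
print(max_hourly_step > 10 * max_daily_step)  # True
```

Neither window is "wrong": each choice of space-time differential makes one pattern detectable and the other invisible, exactly as with the cell, the face, and the social interactions in Figure 13.1.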
Human societies and ecosystems are generated by processes operating on several hierarchical levels over a cascade of different scales. Therefore, they are perfect examples of nested dissipative hierarchical systems that require a plurality of non-equivalent descriptions to be used in parallel in order to analyze their relevant features in relation to sustainability [12, 13, 14, 15].

Defining a descriptive domain
Using the rationale proposed by Kampis [16: p70] we can define a system as "the domain of reality delimited by interactions of interest". In this way one can introduce the concept of "descriptive domain" in relation to the analysis of a system organized on nested hierarchical levels. A descriptive domain is the representation of a domain of reality which has been individuated based
on a pre-analytical decision about how to describe the identity of the investigated system in relation to the goals of the analysis. Such a preliminary and "arbitrary" choice is needed in order to be able to detect patterns (when looking at reality) and to model the behavior of interest (when representing it). In fact, any scientific representation is based on: (i) a set of encoding variables (reflecting a selection of observable qualities considered relevant); (ii) a defined space-time horizon for the behavior of interest (determined by the space-time differential most appropriate for investigating the causal relations of interest); (iii) a dynamic generated by an inferential system applied to the set of variables (within the state space used for the representation); and (iv) a boundary (linked to the given time horizon) for the investigated system. The definition of a boundary finally completes the "identity" of the modeled system as an entity separated from its environment. The scientific representation is often used to simulate, with a formal system of inference, the perception of relevant patterns (the behavior of interest) at a particular hierarchical level (on a certain scale). To discuss the need to use non-equivalent descriptive domains in parallel, we can return to the four views given in Figure 13.1, applying to them the metaphor of sustainability. Let us imagine that the four non-equivalent descriptions presented in Figure 13.1 referred to a country (e.g., the Netherlands) rather than to a person. In this case, we can easily see how any analysis of its sustainability requires an integrated use of these different descriptive domains. For example, by looking at socioeconomic indicators of development (Fig. 13.1B) we "see" this country as a beautiful woman (i.e., good levels of GNP, good indicators of equity and social progress). These are good system qualities, required to keep the stress on social processes low.
However, if we look at the same system (same boundary) using different encoding variables (e.g., biophysical variables) – Figure 13.1D in the metaphor – we can see the existence of a few problems not detected by the previous selection of variables (i.e., a sinusitis and a few dental troubles in the real picture). In the metaphor, this picture can be interpreted, for the Netherlands, as an assessment of the accumulation of excess nitrogen in the water table, growing pollution in the environment, excessive dependency on fossil energy, and dependence on imported resources for the agricultural sector. Put another way, when considering the biophysical dimension of sustainability we can "see" some bad system qualities which were ignored by the previous selection of economic encoding variables. Comparing Figure 13.1B and Figure 13.1D, we can see that even while maintaining the same physical boundary for the system (looking at the same head), a different selection of encoding variables can generate a different assessment of the performance of the system. Things become much more difficult when we are also forced to use other assessments of performance, which must be referred to descriptive domains based on different space-time differentials. For example, Figure 13.1A is an analysis related to lower-level components of the system (= which require a different space-time scale for their description). In the Dutch metaphor, this
could be an analysis of technical coefficients (e.g., input/output) of individual economic activities (e.g., the CO2 emissions from producing electricity in a power plant). Clearly, this knowledge is crucial for determining the viability and sustainability of the whole system (= the possibility of improving or adjusting the overall performance of the Dutch economic process if and when changes are required). In the same way, an analysis of the relations of the system with its larger context can imply the need to consider a descriptive domain based on pattern recognition referring to a larger space-time domain (Fig. 13.1C). In the Dutch metaphor, this could be an analysis of institutional settings, historical entailments, or cultural constraints on possible evolutionary trajectories.

Holons, holarchies and near-decomposability of hierarchical systems
Each component of a dissipative nested hierarchical system may be called a "holon", a term introduced by Koestler [17, 18, 19] to stress the double nature of the elements of these systems as both "wholes" and "parts" (for a discussion of this concept within hierarchy theory see also Allen and Starr [20: p8–16]). A holon is a whole made of smaller parts (e.g., a human being made of organs, tissues, cells, atoms) and at the same time a part of a larger whole (an individual human being is part of a household, a community, a country, the global economy). Elements of nested hierarchical systems thus have an implicit duality: (1) holons have their own composite "organized structure" at the focal level (they represent "emergent properties" generated by the organization of their lower-level components within a given associative context); on the other hand, when interacting with the rest of the hierarchy, (2) holons perform "relational functions" that contribute to a different set of "emergent properties" expressed at a higher level of analysis (they are in turn just components of another, higher-level holon to which they belong).
When dealing with these entities we face a standard epistemological problem. The space-time domain that has to be adopted for characterizing their "relational functions" – when considering the higher-level perception/description of events – does not coincide with the space-time domain that has to be adopted for characterizing their "organized structure" (when considering the lower-level perception/description of events). For example, when using the word "dog" we refer to any individual organism belonging to the species Canis familiaris. The characterization of the holon "dog", however, refers to the set of relational functions (the niche of that species) expressed by members of an equivalence class (the organisms belonging to that species). This means that when using the word "dog" we loosely refer both to the characteristics of the niche occupied by the species in the ecosystem and to the characteristics of any individual organism belonging to it (including the dog of our neighbor). Every "dog", in fact, belongs to an equivalence class (the species Canis familiaris) even though each particular individual has some "special" characteristics (e.g., generated by the stochastic events of its personal history) which make it unique. That is,
any particular organized structure (the neighbor's dog) can be identified as different from other members of the same class, but at the same time it must be a legitimate member of the class. Another example of a holon, this time taken from social systems, is the President of the USA. In this case, Mr. Clinton is the lower-level "organized structure" that has been the "incumbent" in the "role" of President of the USA for the last eight years. Any individual human being has a time closure within this social function – under the existing US Constitution – of a maximum of eight years (two four-year terms), whereas the US Presidency, as a social function, has a time horizon on the order of centuries. In spite of this fact, when we refer to the "President of the USA" we loosely address the concept of such a holon, without making a distinction between the role (social function) and the incumbent (organized structure) performing it. The confusion is increased by the fact that you cannot have an operational US President without the joint existence of: (1) a valid role (institutional settings); and (2) a valid incumbent (a person with the appropriate socio-political characteristics, verified in the election process). On the other hand, the existence and identity of Mr. Clinton as an organized structure (e.g., a human being) able to perform the specified function of US President is logically independent (when it comes to the representation of his physiological characteristics as a human being) of the existence and identity of the role of the Presidency of the USA (when it comes to the representation of its characteristics as a social institution), and vice versa. Human beings were present in America well before the writing of the US Constitution. In the previous section I used different words for two similar concepts: "organized structure" and "relational function" are terms proposed by Herbert Simon [10] to describe in general terms the structure of complex systems.
"Role" and "incumbent", on the other hand, are terms proposed by Kenneth Bailey [21] for use when dealing with human societies. Salthe [22] suggests a similar selection of terms: "individuals" (as the equivalent of "organized structures" or "incumbents") and "types" (as the equivalent of "relational functions" or "roles"). Finally, Rosen [23] proposes, within a general theory of modeling relations, a more drastic distinction: between "natural systems" (which are always "special" and which cannot be fully described by any scientific representation, due to their intrinsic complexity) and "epistemological categories" (definitions of equivalence classes used to represent elements of reality). The use of epistemological categories makes it possible to compress the demand for computational capability when representing reality (e.g., say "dog" and you include them all). But this comes at the cost of losing one-to-one mapping (the identities of the individual members of an equivalence class are confused). The logical similarity between the various couplets of terms is quite evident. A nested hierarchy of dissipative systems (a hierarchical system made of holons) can be called a holarchy [18: p102]. Gibson et al. [24]
302 COMPLEXITY AND SCALES
call these systems “Constitutive Hierarchies” following the suggestion of Mayr [25]. Another way of looking at the root of the epistemological predicament faced when analyzing Self-organizing Adaptive Holarchies (SAH) is to try to understand how it is possible to describe a part of them, in isolation from the rest, as a ‘well defined entity’ (= with given boundaries and characteristic patterns of organization) in the first place. Hierarchy theory sees self-organizing adaptive holarchies as entities organized through a system of filters operating in a cascade – a consequence of the ability to generate different process rates in the various activities of self-organization [20]. For example, a human individual makes decisions and change her/his daily behavior based on a time scale that relates to her/his individual life span. In the same way, the society to which she/he belongs also makes decisions and continuously changes its rules and behavior. “...slaves were accepted in the United States in 1850, but would be unthinkable of today. However, society, being a higher level in the hierarchy than individual human beings, operates on a larger spatio-temporal scale” [12]. This implies that the changes occurring at a lower frequency in the behavior of whole societies are perceived as “laws” (filters or constraints) when read from the time scale of which individual citizen are operating. That is, individual behavior is affected by societal behavior in the form of a set of constraints defining what individuals can or cannot do on their own time scale. Getting into Hierarchy Theory jargon: the higher level, because of its lower frequency, acts as a filter constraining the ‘higher frequency’ activities of the components of the lower level into some emergent property (for more see Allen and Starr [20]). Additional useful references on Hierarchy Theory are: Salthe [22, 26], Ahl and Allen [27], Allen and Hoekstra [28], Grene [29], Pattee [30], O’Neill et al. [31]. 
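Rosen's compression point above ("say 'dog' and you include them all", at the price of the 1-to-1 mapping) can be sketched in a few lines of code. The individuals and category labels below are invented purely for illustration:

```python
# Encoding individuals by their epistemological category compresses the
# description of the system...
individuals = ["Rex", "Fido", "Laika", "Whiskers"]
category = {"Rex": "dog", "Fido": "dog", "Laika": "dog", "Whiskers": "cat"}

encoded = [category[name] for name in individuals]
print(encoded)            # ['dog', 'dog', 'dog', 'cat']
print(len(set(encoded)))  # 2 categories stand in for 4 distinct identities
# ...but the mapping is no longer 1-to-1: from the label 'dog' alone it is
# impossible to recover which individual member of the class was meant.
```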
This mode of organization in hierarchical systems results in 'jumps' or 'discontinuities' (also called "epistemic gaps" in the complexity community [32]) in the rates of activity of self-organization (patterns of energy dissipation) across the levels of the holarchy. Hierarchical levels are, in fact, generated by differences in process rates related to energy conversions, which determine the chain of relations among holons. This mechanism generating discontinuities in scales is at the very root of near-decomposability. The principle of near-decomposability (a term suggested by Simon [10]; see the previous quote) explains why scientists are able to study systems over a wide range of orders of magnitude, from the dynamics of sub-atomic particles to the dynamics of galaxies in astrophysics, using the same set of mechanical equations. When dealing with hierarchical systems we can study the dynamics of a particular process on a particular level by adopting a description that seals off higher and lower levels of behavior. This has been proposed as an operation of "triadic reading" by Salthe [22]. This means that we can describe, for example, consumer behavior in economics while ignoring the fact that
consumers are organisms composed of cells, atoms and electrons, and also ignoring that economic activity necessarily requires higher-level holons, including particular institutions and established patterns of trust. The concept of "triadic reading" refers to the individuation of a pattern of interest to the scientist among the virtually infinite number of possible patterns that could be detected. This requires a prior selection of three contiguous levels of interest within the cascade of hierarchical levels through which Self-organizing Adaptive Holarchies operate. We can also think of it as a process of "epistemic filtering". That is, when describing a particular phenomenon occurring within a SAH we have to define a group of three contiguous levels:
■ Focal level – this implies the choice of a space-time window of observation at which the system qualities of interest can be defined and studied using a set of "observable" qualities (which can be translated into numerical encoding variables). After this choice we can look for measurement schemes able to assign numerical values to the selected variables, which are supposed to capture changes in the relevant qualities of our system.
■ Higher level – the choice of a time and space differential for the dynamics on the focal level (= "the smallest duration that can be used to perceive two events as separated in time" and "the smallest element that can be detected") implies that changes in the characteristics of the higher level are so slow, when described on the space-time window of the focal level, that they can be assumed to be negligible. In this case, the higher level can be accounted for in the scientific description as a set of external constraints imposed on the dynamics of the focal level (= the given set of boundary conditions).
■ Lower level – the gradient in time differentials across levels also implies that perturbations generated by the changing behavior of lower-level components do not affect in a relevant way the main dynamic defined on the space-time window of the focal-level description. In fact, lower-level activity can be accounted for in terms of a statistical description of events occurring there. That is, we can "average out" heterogeneity in the behavior of lower-level individuals. Put another way, we can deal with lower-level perturbations in the form of 'noise'. Due to the differences in scale, the identity of lower-level processes is accounted for in the focal description in terms of a set of initiating conditions.
For example, economic analyses describe the economic process in terms of prices determined by curves of demand and supply. This implies adopting a focal level whose time window is: (i) small enough to assume that changes in ecological processes (such as climatic changes) or changes in institutional settings (the higher level) are negligible; and (ii) large enough to average out 'noise' from processes occurring at the lower level – e.g., the "non-rational" consumer behavior of artists, terrorists, or Amish is averaged out by a statistical description of the preferences of the population [12].
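The triadic reading just illustrated can be sketched numerically. In this minimal toy model (all names, rates and figures are invented for the sketch, not taken from any actual economic analysis), a focal-level market price adjusts toward equilibrium while the higher level (an institutional tax rate) is frozen as a constant boundary condition, and the lower level (individual consumers' erratic preferences) enters only as averaged-out noise:

```python
import random

random.seed(0)
TAX_RATE = 0.2  # higher level: assumed constant over the focal time window

def aggregate_demand(price, n_consumers=10_000):
    # lower level: individual quirks are averaged into a statistical description
    base = 10.0 - price * (1 + TAX_RATE)
    noise = sum(random.gauss(0.0, 1.0) for _ in range(n_consumers)) / n_consumers
    return base + noise

def supply(price):
    return 2.0 * price

# focal level: iterate the price toward the demand/supply crossing
price = 1.0
for _ in range(200):
    price += 0.05 * (aggregate_demand(price) - supply(price))
print(round(price, 2))  # settles near the noise-free equilibrium 10 / 3.2 = 3.125
```

Because 10,000 lower-level quirks are averaged, the focal-level dynamic is effectively deterministic; the price only "sees" the lower level as vanishing noise and the higher level as a fixed parameter.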
The epistemological predicaments implied by the ambiguous identity of holarchies
As noted earlier, the concept of "holon" implies two major epistemological problems:
■ "Functions" (or "roles") and "organized structures" (or "incumbents") overlap in real systems when it comes to specific actions (e.g., Mr. Clinton and the President of the USA decide as a whole). However, the two parts have different histories, different mechanisms of control and diverging local goals (e.g., the wants of Mr. Clinton as a human being at a particular moment of his life can diverge from those of the US Presidency as an institutional role, and vice versa). For example, the recent case of Monica Lewinsky was about legitimate contrasting interests expressed by the dual nature of that specific holon. Unfortunately, scientific analyses trying to model holons operating within holarchies have no option but to assume a single goal and identity for the acting holon within the particular descriptive domain associated with the selected model. The existence of a multiplicity of roles for holons operating within holarchies shows the inadequacy of the traditional reductionist scientific paradigm for modeling them, for the assumption of a single goal and identity for the acting holon, necessary in this mode of analysis, restricts it to a particular model (descriptive domain), to the exclusion of all others.
■ To get a quantitative characterization of a particular identity of a holon one has to assume that the holarchy is in steady-state (or at least in quasi-steady-state). That is, one has to choose a space-time window at which it is possible to define a clear identity for the system of interest (the triadic reading, which is often expressed in the more familiar "ceteris paribus" assumption). However, as soon as one gains the possibility of quantifying characteristics of the system by "freezing it" on a given space-time window, one loses, as a consequence of this choice, any ability to see and detect existing evolutionary trends. Evolutionary trajectories are detectable only using a much larger space-time scale than that of the dynamic of interest [26]. 
This implies admitting that sooner or later the usefulness of the current descriptive domain and the validity of the selected modeling relation will expire. For example, an exact definition of the ecological footprint of a country in a particular year depends on the adoption of many space- and time-specific assumptions (the definition of existing technical coefficients, the mix of inputs adopted in the process of production, etc.). Because of this extreme location specificity (in space and time), such an assessment per se does not say anything about the "performance" of the society in relation to sustainability. How does such an assessment fit with current trends? Has it been generated by a temporary perturbation, or does it reflect long-term changes? What is the effect of this value on the various trade-offs linked to sustainability? Put another way, the excessively "location-specific scale" that is needed to obtain "determinacy" in the numerical assessments is often not good for obtaining "meaning" for the assessments. On the other hand, if one wants to look at
evolutionary trends in holarchies, one has to accept the consequent loss of accuracy in the assessment of their details. Discussing "meanings" always has to do with the big picture (the use of metaphors rather than models), that is, with losing the ability to use formal definitions based on accurate mappings. This implies accepting indeterminacy. This discussion is reminiscent of the principles of quantum mechanics articulated in the 1920s: indeterminacy and complementarity. The relation is clearer when we recall that the term "measurement" is critical in that analysis. Previously, measurement had been taken for granted as not interfering, in principle, with the physical system being measured. Using "holarchic thinking" we can understand that the measuring apparatus belongs to a larger-scale holon (the scientist providing the experimental setting), so that energy losses which are insignificant on that scale can become very significant at the micro level. Complementarity refers to the fact that holons, due to their peculiar functioning on parallel scales, always require a dual description. The relational, functional nature of the holon (the focal-higher level interface) provides the context for the structural part of the holon (the focal-lower level interface), which generates the behavior of interest on the focal level. Therefore, a holarchy can be seen as a cascading chain of contexts and relevant behaviors. The niche occupied by the dog is the context for the actions of individual organisms, but at the same time any particular organism is the context for the activity of its lower-level components (organs and cells dealing with viruses and enzymes). 
Established scientific disciplines rarely acknowledge that the unavoidable prior choice of 'perspective' – determining what should be considered the relevant action and what its context – which is implied by the adoption of a single model (no matter how complicated), introduces a bias into the consequent description of complex systems' behavior [12]. For example, analyzing complex systems in terms of organized structures – or incumbents (e.g., a given doctor in a hospital) – implicitly requires assuming, for the validity of the model: (1) a given set of initiating conditions (a history of the system that affects its present behavior); and (2) a stable higher level on which functions – or roles – are defined for these structures in order to make them "meaningful", useful and, thus, stable in time [10]. That is, the very use of the category "doctors" implies, at the societal level, the existence of a job position for a doctor in that hospital, together with enough funding for running the hospital. Similarly, to have "functions" at a certain level, one needs to assume stability at the lower levels where the structural support for the function is provided. That is, the use of the category "hospital" implies that something (or rather someone) must be there to perform the required function [10]. In our example, the existence of a modern hospital – at the societal level – implies also the existence of a supply of trained doctors – potential incumbents – able to fill the required roles (an educational system working properly). All these considerations become quite practical when systems run imperfectly, as when
(e.g.) doctors are in short supply, have bogus qualifications, are inadequately supported, etc. Hence, no description of the dynamics of a focal level, such as society as a whole, can escape the issue of structural constraints (what/how: explanations of structure and operation going on at lower levels) and, at the same time, the issue of functional constraints (why/how: explanations of finalized functions and purposes, going on at or in relation to the higher level). The key to dealing with holarchic systems is dealing with the difference in the space-time domain that has to be adopted to get the right pattern recognition. The why/how questions (studying the niche occupied by the species Canis familiaris, or the characteristics of the US Presidency) are different from the what/how questions (studying the particular condition of our neighbor's dog in relation to her age and past, or the personal condition of Mr. Clinton this week). They cannot be discussed and analyzed by adopting the same descriptive domain. Again, even if the two natures of the holon act as a whole, when attempting to represent and explain both the why/how questions and the what/how questions we must rely on complementary non-equivalent descriptions, using a set of non-reducible and non-comparable representations. As observed by O'Neill et al. [31], biological systems have the peculiar ability to be both in 'quasi-steady-state' and 'becoming' at the same time. Their hierarchical nature makes this remarkable achievement possible. They can be described as stable categories when analyzed (as organized structures guaranteeing relational functions within a stable associative context) at the bottom of the holarchy. They should be considered as becoming systems in evolution (when considering the continuous introduction of new functions) at the top of the holarchy. This applies also to societal systems [26]. 
Both classes of systems can be well described as being in quasi-steady-state on small space-time windows (when dealing with the identity of cells, individuals, species, jobs, institutions) and as becoming entities when we use a much larger space-time window that forces us to deal with the process of evolution. For example: (1) the process of biological evolution (e.g., the becoming of ecological holons) requires the use of "relevant time differentials" of thousands of years; (2) the process of evolution of the institutional settings of human societies requires "relevant time differentials" of centuries; (3) the process of evolution of human technology requires "relevant time differentials" of decades; (4) when dealing with price formation we are dealing with a time differential of one year or less; and (5) the preferences and feelings of individuals can change in a second. Obviously, the epistemological categories required for representing changes over these different time windows are distinct. To make things more complicated, complex adaptive systems tend to pulse and operate in cyclic attractors, so we have an additional problem. Scientific analyses should be able to avoid confusing movements of the system
over predictable trajectories in a given state space with changes due to the genuine emergence of new evolutionary patterns. Genuine emergence requires, in fact, an updating of the set of tools used to represent the system's behavior (e.g., a continuous change in the identity of the state space used in the analysis – the introduction of new epistemological categories and different modeling relations). In conclusion, by choosing an appropriate window of observation we can isolate and describe, in simplified terms, a domain of reality – the one we are interested in. In this way it is possible to define boundaries for specific systems, which can then be considered as entities independent of the rest of the holarchy to which they belong. The side effect of this obligatory procedure, however, is the neglect, whether aware or unaware, of: (1) dynamics and other relevant features occurring outside the space-time differential selected in the focal descriptive domain; and (2) changes in other system qualities that were not included in the original set of observable qualities and encoding variables used in the model. When dealing with becoming systems, the evolution of the system requires a parallel evolution in the identity of its descriptive domains (requiring different definitions of state spaces) if it is to be usefully described. Put another way, we must be aware that when applying a triadic filtering to reality we are choosing just one of the possible non-equivalent descriptive domains for our system. Modeling means a "heroic simplification of reality" [33] based on a prior definition of a "time duration" for the analytical representation. This explains why there can be no complete, neutral, objective study of a holarchic system, and why these systems are "complex" in the sense of having multiple legitimate perspectives. 
Bifurcation, emergence and scientific ignorance
"Bifurcation" in a modeling relation and emergence
Rosen [23] suggests the term "bifurcation" to indicate the existence of two different representations of the same natural system which are logically independent of each other. The concept of bifurcation entails the possibility of having two (or more) distinct formal systems of inference, which are used, on the basis of different selections of encoding variables (or focal levels of analysis), to establish different modeling relations for the same "natural system". As noted earlier, bifurcations are therefore entailed by different goals for the mapping. The concept of bifurcation implies the possibility of a total loss of 'usefulness' of a given mapping. For example, imagine that we have to select an encoding variable to compare the "size" of London (U.K.) and Reykjavik (Iceland). London turns out to be larger than Reykjavik if the selected encoding for the quality "size" is the variable population. However, by changing the choice of encoding variable, London turns out to be smaller than Reykjavik if the perception of its "size" is encoded by the variable 'number of letters making up the name' (= a new definition of the relevant quality to be
considered when defining the size of London and Reykjavik). Such a choice of encoding could be made by a company which makes road signs. In this trivial example the bifurcation is generated by a change in the set of goals and context (in the logic) related to the use of the mapping. Consider two non-equivalent observers: (1) someone wishing to characterize "London", perceiving this name as a proxy for a city, will adopt an identity which includes an epistemological category for its size that can have population size as a proxy; (2) someone working in a company making road signs, perceiving this name as a string of letters to be written on its product, will adopt an identity which includes an epistemological category for its size based on the "demand for space on the road sign". The proxy for this system quality will be the number of letters making up the name. Clearly, the existence of a different "logic" in selecting the "category" and the "proxy" used to encode what is relevant in the quality "size" is related to a different meaning given to the perception of the natural system "London" (the identity to be adopted in the modeling process). Obviously, this is then reflected in numerical assessments which are no longer necessarily reducible to each other, or directly comparable by the application of an algorithm. A bifurcation in the system of mapping can be seen – as stated by Rosen [23: p302] – as "the appearance of a logical independence between two descriptions". Clearly, such a bifurcation depends on the intrinsic initial ambiguity in the definition of the natural system when using symbols or codes. The same label "London" can be perceived as the name of a city made up of people, or as a 6-letter word. As observed by Schumpeter [34: p42], "Analytical work begins with material provided by our vision of things, and this vision is ideological almost by definition". 
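The two "size" encodings above can be written as two explicit mapping functions. The population figures below are rough placeholder values; only the ordering they induce matters for the argument:

```python
# Two non-equivalent encodings of the quality "size" for the same labels.
population = {"London": 7_000_000, "Reykjavik": 120_000}

def size_as_population(city):
    # observer 1: the name is a proxy for a city
    return population[city]

def size_as_name_length(city):
    # observer 2: the name is a string of letters on a road sign
    return len(city)

print(size_as_population("London") > size_as_population("Reykjavik"))    # True
print(size_as_name_length("London") > size_as_name_length("Reykjavik"))  # False
```

The two assessments are both internally consistent, yet they order the same pair of "natural systems" in opposite ways; no algorithm converts one into the other, which is precisely the bifurcation.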
Obviously, bifurcations in systems of mappings (reflecting differences in logic) can also entail bifurcations in the use of mathematical systems of inference. For example, a statistical office of a city recording the effects of the marriage of two "singles" already living in that city and expecting a child would map the consequent changes implied by these events in different ways according to the encoding used to assess changes in the quality "population". The event can be described as: 1 + 1 → 1 (both before and after the birth of the child) if the mapping of population is done using the variable "number of households"; or, alternatively, as: 1 + 1 → 3 (after the birth of the child) if the mapping is done in terms of the "number of people" living in the city. In this simple example, it is the definition of the mechanism of encoding, implied by the choice of the identity of the system to be described (i.e., "households" versus "people"), which entails different mathematical descriptions of the same phenomenon. The concept of bifurcation also has a positive connotation, in relation to the possibility of increasing the repertoire of models and metaphors available to our knowledge. In fact, a direct link can be established between the concept of "bifurcation" and the concept of "emergence". Using again the wording of
Koestler [17], we have a "discovery" – Rosen [23] suggests the term "emergence" for this concept – when two previously unrelated frames of reference are linked together. Using the concept of equivalence classes both for organized structures and for relational functions, we can say that "emergence" or "discovery" is obtained: (1) when assigning a new class of relational functions (which implies a better performance of the holon on the focal/higher-level interface) to an old class of organized structures; or (2) when using a new class of organized structures (which implies a better performance of the holon on the focal/lower-level interface) for an existing class of relational functions. An emergence can easily be detected by the fact that it requires changing the identity of the state space used to describe the new holon. A simple and well-known example of "emergence" in dissipative systems is the formation of "Bénard cells" (a special pattern appearing in a heated fluid when switching from linear molecular movement to a turbulent regime; for a detailed analysis from this perspective see Schneider and Kay [35]). The emergence (the formation of a vortex) requires the use of two non-equivalent descriptive domains in parallel to properly represent the phenomenon, since the process of self-organization of a vortex generates both "an individual organized structure" and "the establishment of a type". We can use fluid-dynamics models to study, simulate and even predict this transition. But no matter how sophisticated these models are, they can only anticipate the onset of a type (= under which conditions you will get the vortex). From a description based on the molecular level it is not possible to guess the direction of rotation that will be taken by a particular vortex (whether clockwise or anti-clockwise). 
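This point can be illustrated with a minimal numerical sketch: a noisy pitchfork bifurcation rather than an actual fluid model, with all rates and noise levels invented for the illustration. Above the threshold r > 0 the *type* of outcome (a nonzero steady amplitude, the analogue of "a vortex forms") is predictable, while the *sign* of the outcome (the analogue of clockwise versus anti-clockwise) is decided by microscopic noise:

```python
import random

def settle(r, seed, steps=5000, dt=0.01):
    # dx/dt = r*x - x**3 plus a tiny "molecular" noise term; the symmetric
    # state x = 0 is unstable for r > 0, so the system must pick a side.
    rng = random.Random(seed)
    x = 0.0
    for _ in range(steps):
        x += dt * (r * x - x ** 3) + rng.gauss(0.0, 1e-4)
    return x

outcomes = [settle(r=1.0, seed=s) for s in range(10)]
print(all(abs(abs(x) - 1.0) < 0.1 for x in outcomes))  # the "type": |x| near sqrt(r)
print([x > 0 for x in outcomes])  # the "individual": the sign is noise-dependent
```

Every run establishes the same type (an amplitude near sqrt(r)), but which sign a particular run takes depends on fluctuations below the resolution of the macroscopic description, mirroring the two descriptive domains required for the Bénard cell.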
At a larger scale, on the other hand, any particular Bénard cell, because of its individual history, will have a specific identity that it will keep for as long as it remains alive (so to speak). The new scale of operation of a vortex (above the molecular one), at which we can detect the direction of rotation, implies the use of a new epistemological category (i.e., clockwise or anti-clockwise) to properly represent the phenomenon. Put another way, the information required to describe the transition on two levels (characterizing both the individual and the type) cannot all be retrieved by describing events at the lower level. In conclusion, whereas it is debatable whether or not the concept of emergence implies something "special" in ontological terms, it is clear that it implies something "special" in epistemological terms. Every time we deal with something which is "more than" and "different from" the sum of its parts, we have to use non-equivalent descriptive domains in parallel to represent and model different relevant aspects of its behavior. The implications of this fact are huge. When dealing with the evolution of complex adaptive systems (real emergence), the information space that has to be used for describing how they change in time is not closed and knowable a priori. This implies that models, even if validated on previous occasions, will not necessarily be good at predicting future scenarios. This is
especially true when dealing with human systems (adaptive reflexive systems).
The crucial difference between risk, uncertainty and ignorance
The distinction proposed below is based on the work of Knight [36] and Rosen [23]. Knight [36] distinguishes between cases in which it is possible to use previous experience (e.g., a record of frequencies) to infer future events (e.g., to guess probability distributions) and cases in which such an inference is not possible. Rosen [23], in more general terms, alerts us to the need to be always aware of the clear distinction between a "natural system", which operates in the complex reality, and "the representation of a natural system", which is scientist-made. Any scientific representation requires a prior "mapping", within a structured information space, of some of the relevant qualities of the natural system onto encoding variables. Since scientists can handle only a finite information space, such a mapping unavoidably misses some of the other qualities of the natural system (those not included in the selected set of relevant qualities). Using these concepts it is possible to make the following distinction between risk and uncertainty. Risk (= a situation in which it is possible to assign a distribution of probabilities to a given set of possible outcomes – e.g., the risk of losing when playing roulette). That is, RISK implies an information space used to represent the behavior of the investigated system which is: (i) closed; (ii) known; and (iii) useful (= it includes all the relevant qualities to be considered for a sound problem structuring). In this situation, there are cases in which we can even calculate with accuracy the probabilities of the states included in the accessible state space (e.g., classical mechanics). That is, we can make reliable predictions. 
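The roulette example makes the closed, known information space concrete: for a European wheel every outcome and its probability is known in advance, so the expected loss is exactly computable, with no residual ignorance.

```python
from fractions import Fraction

pockets = 37                   # numbers 0-36 on a European wheel: a closed state space
p_win = Fraction(1, pockets)   # single-number ("straight up") bet
payout = 35                    # units won per unit staked on a win

expected_value = p_win * payout - (1 - p_win) * 1
print(expected_value)          # -1/37 per unit bet: a fully determinate "risk"
```

Nothing in the game can surprise this calculation; that is what distinguishes risk, in the sense used here, from uncertainty and ignorance.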
The concept of risk is useful when dealing with problems that are: (i) easily classifiable (we have a valid and exhaustive set of epistemological categories for the problem structuring); and (ii) easily measurable (the encoding variables used to describe the system are "observable" and measurable, adopting a measurement scheme compatible, in terms of space-time domain, with the dynamics simulated in the modeling relation). Under these assumptions, when we have a set of valid models available, we can forecast and usefully represent what will happen (at a particular point in space and time). When all these hypotheses are applicable, the expected errors in predicting future outcomes are negligible. Uncertainty (= a situation in which it is not possible to predict what will happen). That is, UNCERTAINTY implies that the information space we are using to make our prediction is: (i) closed; (ii) finite; and (iii) partially useful according to previous experience; but, at the same time, there is awareness that this is just an assumption that can fail.
The concept of uncertainty entails that the structure of entailments in the natural system simulated by the given model can change, and/or that our selection of the set of relevant qualities used to describe the problem may no longer be valid. Therefore, within the concept of UNCERTAINTY we can distinguish between:
■ Uncertainty due to indeterminacy (= there is reliable knowledge about the possible outcomes and their relevance, but it is not possible to predict, with the required accuracy, the movement of the system in its accessible state space – e.g., the impossibility of predicting the weather in New York City 60 days from now). Indeterminacy is unavoidable when dealing with nested hierarchical systems or with the "reflexivity" of humans. The simultaneous relevance of characteristics of elements operating on different scales (= the need to consider more than one relevant dynamic in parallel on different space-time scales) and non-linearity in the mechanisms of control (the existence of cross-scale feedbacks) entail that expected errors in predicting future outcomes can become high (the butterfly effect; sudden changes in the structure of entailments in human societies – laws, rules, opinions). Uncertainty due to indeterminacy implies that we are dealing with problems that are classifiable (we have valid categories for the problem structuring) but not fully measurable and predictable.
■ Uncertainty due to ignorance (= a situation in which it is not even possible to predict what set of attributes will turn out to be relevant for a sound problem structuring). That is, IGNORANCE implies the awareness that the information space used for representing the problem is: (i) finite and bounded, whereas the information space that would be required to capture the relevant behavior of the observed system is open and expanding; and (ii) missing relevant system qualities from our model. The worst aspect of scientific ignorance is that it is possible to know about it only through experience, that is, when the importance of events (attributes) neglected in a first analysis becomes painfully evident. For example, Madame Curie, who won two Nobel Prizes for her outstanding knowledge of radioactive materials, died of leukemia. Some of the characteristics of the object of her investigations, nowadays known to everybody, were not fully understood at the beginning of this new scientific field.
There are types of situations in which we can expect to be confronted in the future with problems that we can neither guess nor classify at the moment, for example when facing fast changes in existing boundary conditions. In a situation of rapid transition we can expect that we will soon have to learn new relevant qualities to consider, new criteria of performance to be included in our analyses, and new useful epistemological categories to be used in our models. That is, in order to be able to understand the nature of our future problems and how to deal with them, we will have to use an information
312 COMPLEXITY AND SCALES
space different from the one used right now. Obviously, in this situation we cannot even think of valid measurement schemes (how to check the quality of the data), since there is no way of knowing which encoding variables (observable relevant qualities) will have to be measured. Even admitting that ignorance means exactly that it is not possible to guess the nature of future problems and the possible consequences of our ignorance, this does not mean that it is impossible to predict, at least, when such ignorance can become more dangerous. For example, when studying complex adaptive systems it is possible to gain enough knowledge to identify basic features of their evolutionary trajectories (e.g., we can usefully rely on valid metaphors). In this case, we can easily guess that in a rapid transitional period our knowledge will be affected by larger doses of scientific ignorance. The main point to take home from this discussion of risk, uncertainty and ignorance is the following. In all cases in which there is a clear “awareness” of living in a fast transitional period, in which the consequences of “scientific ignorance” can become very important, it is wise not to rely only on reductionist scientific knowledge. The information coming from scientific models should be mixed with that coming from metaphors and with additional inputs from the various systems of knowledge found among stakeholders. A new paradigm for science – Post-Normal Science – should aim at establishing a dialogue between science and society, moving away from the idea of a one-way flow of information. The use of mathematical models as the ultimate source of truth should be regarded simply as a sign of ignorance of the unavoidable existence of scientific ignorance.
Non-reducibility (multiple causality) and incommensurability
Non-reducible assessments
In this section I discuss an example of legitimate non-reducible assessments. The example is again based on the four views presented in Figure 13.1.
The metaphor this time is applied to the process generating a concrete assessment, for example: “kg of cereal consumed per capita by US citizens in 1997”. Let us imagine that a very expensive and sophisticated survey is performed at the household level to get an “accurate” assessment of food consumption. By recording events in this way we learn that each US citizen consumed, in 1997, 116 kg of cereals per year. On the other hand, by looking at the FAO Food Balance Sheet [37] – which provides for each FAO-member country a picture of the flows of food in the food system – we can derive other possible assessments of the “kg of cereals consumed per capita by US citizens in 1997”. For example:
■ (1) cereals consumed as food, at the household level. This is the figure of 116 kg per capita per year for US citizens in 1997, discussed above. It can also be obtained by dividing the total amount of cereals directly consumed as food by the US population in that year.
■ (2) consumption of cereals per capita as food, in 1997, at the food-system level. This value is obtained by dividing the total consumption of cereals in the US food system by the size of the US population. This assessment is more than 1,015 kg (116 kg directly consumed, 615 kg fed to animals, almost 100 kg of barley for making beer, plus other items related to industrial processing and post-harvest losses).
■ (3) amount of cereals produced in the US per capita, in 1997, at the national level, related to the economic viability of the agricultural sector. This amount is obtained by dividing the total internal production of cereals by the population size. Such a calculation provides yet another assessment: 1,330 kg per capita per year. This is the amount of cereal used per capita by the US economy.
■ (4) total amount of cereals produced in the world per capita, in 1997, applied to the humans living within the geographic borders of the USA in that year. This amount is obtained by dividing total world consumption of cereals in 1997 (2 × 10^12 kg) by world population size (5,800 million). Clearly, such a calculation provides yet another assessment: 345 kg per capita per year (160 kg/year direct, 185 kg/year indirect). This is the amount of cereal used per capita by each human being on this planet in 1997, and therefore the share assigned to US people when ignoring the heterogeneity of consumption patterns among countries.
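The four non-equivalent assessments can be reproduced from the aggregate figures quoted above; the sketch below simply redoes the divisions and differences (all figures are those stated in the text; “other uses” is the residual needed to reach the quoted 1,015 kg total).

```python
# Re-doing the arithmetic behind the four cereal assessments (figures from
# the text, kg per capita per year unless noted).
direct_food  = 116      # (1) household-level survey
feed         = 615      # cereals fed to animals
beer_barley  = 100      # barley for beer (approx.)
food_system  = 1015     # (2) food-system level total
other_uses   = food_system - (direct_food + feed + beer_barley)

production   = 1330     # (3) produced by US agriculture per capita
export_share = production - food_system   # the "extra" 315 kg for export

world_total  = 2e12     # kg of cereals consumed worldwide in 1997
world_pop    = 5.8e9    # world population in 1997
world_share  = world_total / world_pop    # (4) per-capita world share

print(other_uses, export_share, round(world_share))   # 184 315 345
```

The numerical differences among 116, 1,015, 1,330 and 345 kg are thus not errors: each division uses a different numerator, denominator, or both, i.e., a different mapping of the same food system.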
We can again use Figure 13.1 to discuss the mechanisms, in the process of generating the assessment, that produce these numerical differences. In the first two cases we are considering only the direct consumption of cereals as food: on a small scale – assessment (1), reflecting Figure 13.1A in the metaphor – and on a larger scale – assessment (2), which refers to Figure 13.1B. The logic of these two mappings is the same. We are mapping flows of matter with a clear identification of their role: food as a carrier of energy and nutrients, used to guarantee the physiological metabolism of citizens. This definition of consumption of “kg of cereals” implies a clear requirement of compatibility with the physiological processes converting food into metabolic energy (both within fed animals and human bodies). Since the mechanism of mapping is the same (in the metaphor of Figures 13.1A and 13.1B, we are looking for pattern recognition using the same visible wavelength of light), we can bridge the two assessments by an appropriate scaling (e.g., Life Cycle Assessment). This will require, in any case, different sources of information related to processes occurring at different scales (e.g., household surveys plus statistical data on consumption and technical coefficients in the food system). When considering assessment (3) we are including “kg of cereals” which are not “consumed”, either directly or indirectly, by US households in relation to their diet. The additional 315 kg of cereals produced by US agriculture per US citizen for export (assessment (3) minus assessment (2)) are brought into existence only for economic reasons. But exactly because of that, they should be considered as “used” by the agricultural sector and the farmers of that country to stabilize its
own economic viability. The US food system would not have worked the way it did in 1997 without the extra income provided to farmers by exports. Put another way, US households “indirectly used” these exports (= took advantage of the production of these kg of cereals) to obtain the food supply they got, in the way they did. In the metaphor, this could be the pattern presented in Figure 13.1D: we are looking at the same head (the US food system in the analogy) but using a different mechanism of pattern recognition (X-rays rather than visible light). The difference in numerical value between assessments (1) and (2) is generated by a difference in the hierarchical level of analysis, whereas the difference between assessments (2) and (3) is generated by a “bifurcation” in the definition of indirect consumption of cereals per capita (a biophysical definition versus an economic definition). Finally, Figure 13.1C would represent the numerical assessment obtained in (4), where both the scale and the logic adopted for defining the system differ from the previous ones (US citizens as members of humankind). Again, it has to be noted that these non-reducible differences do not imply that any of these assessments is useless. Depending on the goal of the analysis, each of these numerical assessments can carry useful information.
Multiple causality for the same event
The next example deals with multiple causality: four non-equivalent scientific explanations for the same event (the possible death of a particular individual) are listed in Table 13.1. This example is particularly relevant in all cases in which the explanation provided is then used as an input to the process of decision making.
Table 13.1: Multiple scientific explanations for a given event
Event to be explained: DEATH OF A PARTICULAR INDIVIDUAL

Explanation 1 --> (looking for the known HOW)
  Space-time scale:        Very small
  Example of situation:    Emergency room
  Explanation:             No oxygen supply to the brain
  Implications for action: Apply known procedures
                           Strong entailment of the past on present action

Explanation 2 --> (looking for a better HOW)
  Space-time scale:        Small
  Example of situation:    Medical treatment
  Explanation:             Affected by lung cancer
  Implications for action: Apply known procedures & explore new ones
                           Entailment of the past on present, room for exploring changes

Explanation 3 --> (considering HOW to WHY)
  Space-time scale:        Medium
  Example of situation:    Meeting at the Ministry of Health
  Explanation:             Individual was a heavy smoker
  Implications for action: Policy formulation mixing experience with aspirations for change
                           Mixed entailment of the past and “virtual future” on present

Explanation 4 --> (exploring the implications of WHY)
  Space-time scale:        Very large
  Example of situation:    Discussion on sustainability
  Explanation:             Humans must die
  Implications for action: Dealing with the tragedy of change
                           Entailment of the “virtual future” (passions) on present

■ Explanation 1 refers to a very small space-time scale at which the event is described. This is the type of explanation generally looked for when dealing with a very specific problem (= when we have to do something according to a given set of possibilities, perceived here and now = a given and fixed associative context for the event). Such an explanation tends to generate a search for maximum efficiency. According to this explanation we can do as well as we can, assuming that we are adopting a valid, closed and reliable information space. In political terms, this type of “scientific explanation” tends to reinforce the current selection of goals and strategies of the system. For example, policies aimed at maximizing efficiency imply not questioning (in the first place) basic assumptions and the established information space used for problem structuring;
■ Explanation 2 refers again to a small space-time scale at which the event is described. This is the type of explanation generally looked for when dealing with a class of problems that have been framed in terms of the WHAT/HOW question. We have an idea of the HOW (of the mechanisms generating the problem) and we want both to fix the problem and to understand better (fine-tune) the mechanism according to our scientific understanding. Again we assume that the basic structuring of the available information space is a valid one, even though we would like to add a few improvements to it;
■ Explanation 3 refers to a medium/large scale. The individual event here is seen through the screen of statistical descriptions. This type of explanation no longer deals only with the WHAT/HOW question but also, in an indirect way, with the WHY/WHAT question. We want to solve the problem, but in order to do that we have to mediate between contrasting views found in the population of individuals to which we want to apply policies – in this particular example, the trade-offs between the individual freedom to smoke and the burden of health costs for society generated by heavy smoking. We no longer have a closed information space and a simple mechanism to determine optimal solutions. Such a structuring of the problem requires an input from the stakeholders in terms of “value judgement” (= for politicians this could be the fear of losing the next elections);
■ Explanation 4 refers to a very large scale. This explanation is often perceived as “a joke” within a scientific context. My personal experience is that whenever this slide is presented at conferences or lectures, the audience usually starts laughing on seeing the explanation “humans must die” listed among the possible scientific explanations for the death of an individual. Probably this reflects a deep conditioning to which scientists and students have been exposed for many decades. Obviously, such an explanation is perfectly legitimate in scientific terms when the event is framed within an evolutionary context. The question then becomes why such an explanation tends to be systematically neglected when discussing sustainability. The answer is already present in the comments given in Table 13.1. Such an explanation would force scientists and other users to deal explicitly and mainly with “value judgements” (with the “why” or “what for” question rather than with the “how” question). Probably this is why this type of question seems to be perceived as not “scientifically correct” according to Western academic rules.
In this second example, too, we find the standard predicament implied by complexity: the validity of a given scientific input depends on the compatibility of the simplification introduced by the “problem structuring” with the context within which the resulting information will be used. A discussion of the pros and cons of various policies restricting smoking would be considered unacceptable by the relatives of a patient in critical condition in an emergency room. In the same way, a physiological explanation of how to boost the supply of oxygen to the brain would be completely useless in a meeting discussing the introduction of a new tax on cigarettes.
Multicriteria space – dealing with incommensurability
The last example of this paper deals with the problem of how to make use of the descriptive input obtained through a set of parallel, non-equivalent and non-reducible models. Let us imagine that one wishes to buy a new car and wants to decide among the existing alternatives on the market. Such a choice depends on the analysis of various characteristics (e.g., economic, safety, aesthetic and driving characteristics) of the various models of car taken into consideration. Obviously, the set of characteristics considered in Figure 13.2 is just one of the possible sets of relevant attributes, since it is not possible to generalize over all the sets of criteria used by the population of non-equivalent car buyers operating in this world. It is certain, however, that some of the criteria (and related indicators) measuring the relevant characteristics determining such a choice will turn out to be incommensurable (e.g., price in dollars, speed in km/h, status symbol, aesthetic preferences) and conflicting in nature (e.g., the higher the speed, the higher the economic cost).
Given a set of indicators we can represent the performance of any given alternative, according to the set of relevant criteria through a multicriteria impact profile, which can be represented either in a graphic form, as shown in Figure 13.2, or in a matrix
[Figure 13.2 is a radar diagram covering economic, safety, aesthetic and driving characteristics, with axes including fuel consumption, price, maintenance costs, reliability, road handling, speed/acceleration, safety devices, comfort, noise, design, variety of models and variety of colors.]
Figure 13.2: Multi-objective integrated representation of the performance of a car.
form, as shown in Table 13.2.

Table 13.2: Example of an impact matrix

  Criteria                 Units         a1 – car A   a2 – car B   a3 – car C   a4 – car D
  g1  Price                US$ (1997)    g1(a1)       g1(a2)       ...          g1(a4)
  g2  Maintenance costs    US$/year      g2(a1)       g2(a2)       ...          g2(a4)
  g3  Fuel consumption     Liter/km      ...          ...          ...          ...
  g4  Road handling        Qualitative   ...          ...          ...          ...
  ...                      ...           ...          ...          ...          ...
  g12 Design               Qualitative   g12(a1)      g12(a2)      ...          g12(a4)

These multicriteria impact profiles can be based on quantitative information, qualitative information, or both. The way humans represent and structure, in scientific terms, the problem to be solved necessarily reflects the values and interests of those who will use the information. This is perfectly acceptable as long as this obvious fact is acknowledged and its implications are taken into account. The same applies to the mechanism used to compare and rank possible alternative actions. From a philosophical perspective, it is possible to distinguish between two key concepts [38, 39]: (1) strong comparability (= it is possible to find a single comparative term by which all different actions can be ranked). This implies strong commensurability (= it is possible to obtain a common measure of the different consequences of an action based on a cardinal scale). Under this hypothesis the “value” of “everything” (including your mother) can be compared with the value of “everything else” (including someone else’s mother) by means of a single numerical variable (e.g., monetary or energy assessments). (2) weak comparability (= there is an irreducible value conflict when deciding what term should be used to rank alternative actions). This translates into the assumption that different stakeholders can exhibit different “rational choices” when facing the same specific situation. Weak comparability, however, does not imply that it is impossible to use “rationality” when deciding, or that “anything goes” in scientific analyses. As discussed in Part 2, procedural rationality is based on the acknowledgement of ignorance and uncertainty and on the existence of legitimate, non-equivalent views among different stakeholders. That is, ranking options requires agreeing on what is important for the stakeholders as well as on what is relevant for the stability of the process described in the model. As a consequence, the validity of a given approach used to evaluate and rank possible options depends on its ability to: (1) include several legitimate perspectives (acknowledging the reflexive properties of the system); and (2) provide a reliable check on the viability of the system along its different dimensions (technical, economic, ecological, social). This, in turn, requires “transparency” on two main points: (1) the quality of the participatory process (a quality check on the process of decision making): e.g., how fair and open was the discussion about problem structuring, about the choice of models used to characterize scenarios, and about the choice of alternatives to be considered?
How fair was the mechanism used for the final decision? (2) the quality of the scientific process (a quality check on the representative tools, ensuring that the set of models used conforms to given requirements): e.g., how credible are the assumptions, what are their implications, how good are the data, and how competent are the modelers? A quality control on the information available for decision making is obviously crucial: how reliable are the data used to prepare either the characterization given in Figure 13.2 or the impact matrix given in Table 13.2?
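The contrast between strong and weak comparability can be sketched in a few lines of code. The toy impact matrix below (all names and values hypothetical, not from the chapter) never constructs a common cardinal scale; it only counts, for each pair of alternatives, the criteria on which one beats the other. Ties between such counts are exactly the irreducible value conflicts of weak comparability.

```python
# Toy impact matrix: three cars, four incommensurable criteria
# (hypothetical values). dir = -1 means "lower is better".
criteria = {
    "price_usd":   {"dir": -1, "scores": {"A": 18000, "B": 25000, "C": 22000}},
    "speed_kmh":   {"dir": +1, "scores": {"A": 170,   "B": 210,   "C": 190}},
    "safety_1to5": {"dir": +1, "scores": {"A": 3,     "B": 5,     "C": 4}},
    "design_1to5": {"dir": +1, "scores": {"A": 4,     "B": 3,     "C": 5}},
}

def outranks(x, y):
    """Number of criteria on which alternative x beats alternative y.
    Only pairwise, per-criterion comparisons are used: no common cardinal
    scale (strong commensurability) is ever constructed."""
    return sum(
        1 for c in criteria.values()
        if c["dir"] * (c["scores"][x] - c["scores"][y]) > 0
    )

# A and B tie 2-2: without a value judgement about the relative weight of
# price, speed, safety and design, no complete ranking exists.
print(outranks("A", "B"), outranks("B", "A"))   # 2 2
print(outranks("C", "A"), outranks("A", "C"))   # 3 1
```

Breaking the A-versus-B tie requires weights, i.e., an explicit value judgement by some stakeholder; this is the step that no algorithm can supply on its own.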
This last question points at an additional problem: whenever it is impossible to establish exactly the future state of the problem faced, one can decide to deal with such a problem either in terms of stochastic uncertainty (thoroughly studied in probability theory and statistics) or in terms of fuzzy uncertainty (focusing on the ambiguity of the description of the event itself) [40]. However, as noted earlier, one should always be aware that genuine ignorance is there too. This predicament is particularly relevant when facing sustainability issues, because of the large differences in scale of the relevant descriptive domains (e.g., between ecological and economic processes) and the peculiar characteristics of reflexive systems. In these cases it is unavoidable that the information used to characterize the problem is affected by subjectivity, incompleteness and imprecision (e.g., ecological processes are quite uncertain and little is known about their sensitivity to stress factors such as various types of pollution). A great advantage of multicriteria evaluation (compared with conventional “monocriteria” Cost-Benefit Analysis) is the possibility of taking these different factors into account.
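The two encodings of uncertainty just mentioned can be contrasted in a minimal sketch (illustrative only; the pollution variable, thresholds and sample values are invented). The stochastic view asks how often a measurement exceeds a crisp limit; the fuzzy view asks to what degree a single value belongs to a vague category.

```python
# Two ways to encode uncertainty about "heavy pollution stress":
# stochastic (frequency over a crisp threshold) versus fuzzy (graded
# membership in a vague category). All numbers are hypothetical.

def prob_exceeds(samples, threshold):
    """Stochastic view: fraction of observations above a crisp limit."""
    return sum(1 for s in samples if s > threshold) / len(samples)

def membership_heavy(x, low=50.0, high=100.0):
    """Fuzzy view: piecewise-linear membership in the vague set 'heavy'."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)

samples = [40, 55, 70, 90, 110, 130]
print(prob_exceeds(samples, 100))   # 2 of 6 observations exceed the limit
print(membership_heavy(75))         # 75 counts as "heavy" to degree 0.5
```

Note that neither encoding captures genuine ignorance: both presuppose that the relevant variable and its categories have already been correctly identified.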
PART 2 – Implications of Complexity and Scales for Integrated Assessment
The epistemological predicament of sustainability analysis
In Part 1 the concept of “complexity” was presented according to the theoretical framework proposed by Robert Rosen [1, 23, 41]. In Rosen’s view, complexity implies the impossibility of fully describing the behavior of a given natural system using a single model (or a finite set of reducible models) of it. This impossibility derives from the unavoidable epistemological dimension of the very perception and definition of “a system” in the first place, and the consequent existence of legitimate and logically independent ways of modeling the behavior of any adaptive nested hierarchical system. Put another way, the usefulness of any scientific representation of a complex system cannot be defined ‘a priori’, without considering the goal for which the representation has been generated. As a general principle, we can say that by increasing the number of reciprocally irreducible models used in parallel for mapping a system’s behavior (this is what integrated assessment is all about) we can increase the richness and usefulness of any scientific representation. The good news implied by this concept is that: (1) it is often possible to catch and simulate relevant aspects of the behavior of a complex system even with an incomplete knowledge of it. The bad news is that: (2) any “perspective” on a complex system (a comprehensive and consistent interpretation of the system, including modeling relations) will necessarily miss some of the elements and/or relevant relations in the system. Scientific models of complex systems (even extremely complicated ones) imply the generation of errors (due to the unavoidable neglect of some relevant relations referring to events – or
patterns – detectable only on distinct space-time scales or in different systems of encoding). In more technical jargon, Rosen [23, 41] refers to this fact as the unavoidable existence of “bifurcations” in any mapping of complex systems. We can reduce the effect of these errors by using in parallel various mutually irreducible “perspectives” (by generating “mosaic effects” in our scientific representation [32]). However, this solution: (i) does not completely solve the problem; and (ii) introduces another source of arbitrariness into the resulting analysis. In fact, the very concept of complexity implies that a virtually infinite number of mutually irreducible “perspectives” (modeling relations) can and (depending on the objective of the analysis) should be considered to fully describe the behavior of a “real” natural system. Therefore, any selection of a limited set of mutually irreducible perspectives to be used in an integrated assessment (= a multicriteria description able to generate a mosaic effect, based on a finite set of relevant criteria or attributes) can only rest on a subjective decision about the relative relevance of the selected set of perspectives (why should we limit the analysis to the selected set of criteria?). For example, when selecting an airplane pilot, is her/his zodiacal sign (or her/his religious belief) one of the relevant criteria to be considered? A commercial airline would probably exclude these two criteria from its screening process. On the other hand, it could very well be that an eccentric millionaire (or a fundamentalist religious group), when looking for a pilot for her/his/their private jet, would decide to include one (or both) of these criteria among the relevant pieces of information to be considered in the process of pilot selection.
Every time we deal with a decision about the relevance or irrelevance of the set of criteria to be considered in an integrated assessment, we cannot expect to find general algorithms that will make it possible to escape “value judgements”. The irreducibility of the possible perspectives that should be considered relevant when structuring the description of a natural system (= determining the selection of variables used in the modeling relation) implies that there is always a “logical independence” in the various selections of relevant “qualities” of the system. That is, it is only after deciding (how?) the set of relevant qualities to be considered in the scientific analysis that it becomes possible to discuss encoding variables and, consequently, the models to be developed. On the other hand, scientific information already available – based on the selection of models made in the past – can affect the “feelings” of stakeholders about which qualities are the most relevant. This is a typical problem of “reflexive systems”, which is at the core of the new paradigm of science for sustainability proposed under the name of Post-Normal Science [42, 43, 44, 45, 46, 47]. This fact has another important consequence for scientific analyses. When dealing with non-equivalent, alternative models which can be used to represent the behavior of a given complex system, we cannot check or compare their “validity” by focusing on only a single aspect of system behavior at a
time. The “validity” of a given model is not simply related to its ability to produce good simulations, and consequently predictions, of changes that will occur in a particular system quality. Even when the predictions of a model are supported by experimental evidence, this does not guarantee that:
■ such a quality is relevant for a sound structuring of the problem. This is related to the well-known trade-off between the “accuracy” and “relevance” of scientific models. We can increase the accuracy of modeling relations by adding assumptions that make the model less and less credible and applicable to real situations. Recall the example of the broken clock that happens to indicate the right time twice a day, versus a clock which loses 5 seconds every day. The second clock will never indicate the right time in the next year, but will still be much more useful than the first one over the next month. In this example, the ability to be perfectly right twice a day does not coincide with the ability to be useful;
■ the modeling relation valid at the moment under a given “ceteris paribus” hypothesis will retain its ability to model the same system in the future, when some conditions and characteristics (external and/or internal to the system) will have changed. The validity of useful models of real systems expires. This is due to the fact that real systems evolve in time, whereas formal systems of inference are out of time;
■ nobody “cheated” in collecting the data used to validate the model. This observation opens a completely new domain of “quality control” to be added to the evaluation process. Without social trust in the process generating the integrated assessment, the technical aspects of the models can become totally irrelevant.
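The clock comparison can be made numeric. In the sketch below, the 3-hour mean error of the stopped clock is my own back-of-envelope addition (its display is off by anything from 0 to 6 hours over a 12-hour face, averaging 3 hours); the 5-seconds-per-day figure is the one from the text.

```python
# Accuracy versus usefulness: the two-clocks example made numeric.
DRIFT = 5.0   # seconds lost per day by the slow clock (figure from the text)

def slow_clock_error(days):
    """Accumulated error of the slow clock, in seconds, after `days` days."""
    return DRIFT * days

# Mean error of the stopped clock: its displayed time is off by 0 to 6 hours
# (the face repeats every 12 h), so the average error is 3 hours.
stopped_mean_error = 3 * 3600.0   # seconds

print(slow_clock_error(30))           # 150 s off after one month
print(slow_clock_error(365) / 60.0)   # roughly half an hour off after a year
print(stopped_mean_error / 3600.0)    # 3.0 h average error, despite being
                                      # exactly right twice a day
```

Being exactly right twice a day is an accuracy statement; staying within minutes of the truth for months is a relevance statement, and the two come apart.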
A new conceptualization of “sustainable development”: moving from “substantial” to “procedural” rationality
It is often stated that Sustainable Development is something that can only be grasped as a “fuzzy concept” rather than expressed in terms of an exact definition. This is because sustainable development is usually imagined as a formal, static concept that could be defined in general terms, without the need to perform, every time it is applied to a specific situation, several internal and external semantic checks. The only way to avoid the “fuzzy trap” implied by such a substantive concept of sustainability is to move away from a definition of general application (= one related to some predefined optimizing function within a standard associative context). We should rather look for a definition which is based on (and implies the ability to perform) internal and external semantic “quality checks” on the correct use of adjectives and terms under a given set of special conditions (at a given point in space and time). These “quality checks” should be able to reflect the various perceptions of the stakeholders found within a defined context. Clearly, these perceptions depend on the particular point in space and time at which the application of general principles occurs (this also implies a strong dependency on the history of the local system – e.g., the cultural identity of
various social groups, existing institutions and power structures, the existence of shared goals, and trust among stakeholders). The main point here is that a definition of Sustainable Development can be given (see below), but only after assuming that within a given society it is possible to perform these semantic and quality checks. In this case, the concept of Sustainable Development should be defined in a different way. What I propose is: “the ability of a given society to move, in a finite time, between satisficing, adaptable, and viable states”. Such a definition implies that sustainable development has to do with a process (procedural sustainability) rather than with a set of once-and-for-all definable system qualities (substantive sustainability) (note: I am using the distinction between substantive and procedural rationality proposed by Simon [3, 4]). Put another way, sustainability implies the following points:
■ governance and an adequate understanding of present predicaments – as indicated by the expression “the ability to move, in a finite time”;
■ recognition of legitimate contrasting perspectives related to the existence of different identities of stakeholders (implying the need for: (i) an adequate integrated scientific representation reflecting different views; and the possibility of having (ii) institutionally organized processes for negotiation within the process of decision making) – as indicated by the expression “satisficing” (again a term suggested by Simon [3]), as opposed to “optimizing”;
■ recognition of the unavoidable existence of uncertainty and indeterminacy in our understanding, representation and forecasting of future events – as indicated by the expression “adaptable”. When discussing adaptability (= the usefulness of a larger option space in the future): (i) reductionist analyses based on the “ceteris paribus” hypothesis have little to say; and (ii) incommensurability implies that “optimal solutions” cannot be detected by applying algorithmic protocols (the information space needed to describe the performance of the system is expanding, and therefore cannot be mapped by any closed formal inferential system);
■ availability of sound reductionist analyses able to verify, within different scientific disciplines, the “viability” of possible solutions in terms of existing technical, economic, ecological and social constraints – as indicated by the expression “viable”.
I personally believe that reaching a societal agreement on a procedural definition of sustainable development is a possible task. However, this would require a paradigm shift in the way scientific information is generated and organized when providing inputs to the process of decision making. To conclude this section I would like to quote Herbert Simon [4] in relation to the concept of “satisficing” solutions. When there is indeterminacy or complexity it is no longer possible to get rid of deliberation. The formation of human perceptions and preferences should be considered as part
of the problem of decision [48]. In fact, decision making is influenced by the decision-maker’s mind: “A body of theory for procedural rationality is consistent with a world in which human beings continue to think and continue to invent: a theory of substantive rationality is not”[4].
Conclusion
In this paper I have tried to convince the reader that there is nothing transcendent about complexity – nothing that implies the impossibility of using sound scientific analyses (including reductionist ones). On the contrary, for the processes of decision making about sustainability we need more and more rigorous scientific input to deal with the predicament faced by humankind in this new millennium. On the other hand, complexity theory can be used to show clearly the impossibility of dealing with decision making related to sustainability in terms of "optimal solutions" determined by applying algorithmic protocols to a closed information space. When dealing with complex behaviours we are forced to look for different causal relationships among events. However, the various causal relations found by scientific analyses will depend on decisions made in the pre-analytical structuring of the problem. We can only deal with the scientific representation of a nested hierarchical system by using a strategy of stratification (that is, by using a triadic reading based on the arbitrary selection of a focal space-time differential able to catch one dynamic of interest at a time). To use science fruitfully when discussing sustainability, humans should simply stop pretending that their processes of decision making are based on the ability to detect the "best" possible course of action by applying standard protocols based on reductionist analyses. This has never been done in the past, it is not done at present, and it will never be done in the future. Any "decision" always implies a political dimension, since it is based on imperfect information and a given set of goals; otherwise it should be called "computation" (R. Fesce, personal communication).
The confusion on this point is often generated by the fact that, in recent decades, the "elite" in power in Western countries decided, for various reasons, to pretend that they were taking decisions based on "substantive rationality". Clearly, this was simply not true, and the clash of reductionist analyses with the issue of sustainability in these decades is clearly exposing such a faulty claim. Complex systems theory can help explain the reasons for this clash. Any definition of priorities among contrasting indicators of performance (reflecting legitimate, non-equivalent criteria) is affected by a bias determined by the prior choice of how to describe events (the ideological choices made in the pre-analytical step). That is, such a choice reflects the priorities and the system of values of some agent in the holarchy.
When dealing with the problem of how to do sound problem structuring, we face a classic chicken-and-egg situation: the results of scientific analyses will affect the selection of what is considered relevant (how to do the next pre-analytical step), and what is considered relevant will affect the results of scientific analyses. This chicken-and-egg pattern explains the co-existence of alternative, non-equivalent and legitimate "structurings" of sustainability problems in different human groups separated by geographic and social distances. After acknowledging this fact, we cannot expect that scientists operating within the given set of assumptions of an established disciplinary field will be able to boost the "quality" of any process of problem structuring on their own. In order to do that, they need to work with the rest of society. Therefore, the only viable way out of this epistemological predicament is an integrated assessment based on transdisciplinary analyses and participatory techniques, that is, on an iterative interaction between scientists and stakeholders as implied by the concept of "procedural rationality". The unavoidable existence of reciprocally irreducible models and the goal of increasing the richness of scientific representation should not, however, be misunderstood as an invitation to avoid decisions on how to compress, in a useful way, the set of analytical tools used to represent and structure our problems. On the contrary, the innate complexity of sustainability issues requires a rigorous filter against sloppy scientific analyses, poor data and inadequate discussion of basic assumptions. Reciprocally irreducible models may have significant overlap in their descriptive domains.
In this case, the parallel use of non-equivalent models of the same system can not only increase the richness of the scientific representation, but also help to uncover inconsistencies in the basic hypotheses, numerical assessments and predicted scenarios of the different models. An application of this rationale in terms of biophysical analyses of sustainability is provided in Giampietro and Mayumi [49, 50]. This is another important application of complexity and multiple scales for integrated assessment. The problem of how to improve the quality of a decision process has not been considered relevant by "hard scientists" in the past. However, the new nature of the problems faced by humankind in this third millennium implies a new challenge for science. These new terms of reference are especially important for those working in integrated assessment.
References
1. Rosen, R., 1977. "Complexity as a System Property." International Journal of General Systems, 3: 227–232.
2. Mandelbrot, B. B., 1967. "How Long is the Coast of Britain? Statistical Self-Similarity and Fractal Dimensions." Science, 155: 636–638.
3. Simon, H. A., 1976. From substantive to procedural rationality. In: J. S. Latsis (ed.). Methods and Appraisal in Economics. Cambridge: Cambridge University Press.
4. Simon, H. A., 1983. Reason in Human Affairs. Stanford: Stanford University Press.
5. Glansdorff, P., and I. Prigogine, 1971. Structure, Stability and Fluctuations. Chichester, United Kingdom: Wiley-Interscience.
6. Nicolis, G., and I. Prigogine, 1977. Self-Organization in Nonequilibrium Systems. New York: Wiley-Interscience.
7. Prigogine, I., and I. Stengers, 1981. Order out of Chaos. New York: Bantam Books.
8. Prigogine, I., 1978. From Being to Becoming. San Francisco: W. H. Freeman and Company.
9. O'Neill, R. V., 1989. Perspectives in hierarchy and scale. In: J. Roughgarden, R. M. May, and S. Levin (eds.). Perspectives in Ecological Theory. Princeton: Princeton University Press: 140–156.
10. Simon, H. A., 1962. "The architecture of complexity." Proceedings of the American Philosophical Society, 106: 467–482.
11. Whyte, L. L., A. G. Wilson, and D. Wilson (eds.), 1969. Hierarchical Structures. New York: American Elsevier Publishing Company, Inc.
12. Giampietro, M., 1994a. "Using hierarchy theory to explore the concept of sustainable development." Futures, 26: 616–625.
13. Giampietro, M., 1994b. "Sustainability and technological development in agriculture: a critical appraisal of genetic engineering." BioScience, 44: 677–689.
14. Giampietro, M., S. G. F. Bukkens, and D. Pimentel, 1997. The link between resources, technology and standard of living: Examples and applications. In: L. Freese (ed.). Advances in Human Ecology. Greenwich (CT): JAI Press: 129–199.
15. Giampietro, M., and G. Pastore, 2001. Operationalizing the concept of sustainability in agriculture: characterizing agroecosystems on a multi-criteria, multiple-scale performance space. In: S. R. Gliessman (ed.). Agroecosystem Sustainability: Developing Practical Strategies. Boca Raton: CRC Press: 177–202.
16. Kampis, G., 1991. Self-Modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information and Complexity. Oxford: Pergamon Press: 543 pp.
17. Koestler, A., 1968. The Ghost in the Machine. New York: The Macmillan Company: 365 pp.
18. Koestler, A., 1969. Beyond Atomism and Holism – the concept of the Holon. In: A. Koestler and J. R. Smythies (eds.). Beyond Reductionism. London: Hutchinson: 192–232.
19. Koestler, A., 1978. Janus: A Summing Up. London: Hutchinson.
20. Allen, T. F. H., and T. B. Starr, 1982. Hierarchy. Chicago: The University of Chicago Press.
21. Bailey, K. D., 1990. Social Entropy Theory. Albany, New York: State University of New York Press.
22. Salthe, S. N., 1985. Evolving Hierarchical Systems: Their Structure and Representation. New York: Columbia University Press.
23. Rosen, R., 1985. Anticipatory Systems: Philosophical, Mathematical and Methodological Foundations. New York: Pergamon Press.
24. Gibson, C., E. Ostrom, and T.-K. Ahn, 1998. Scaling Issues in the Social Sciences. IHDP Working Paper. International Human Dimensions Programme on Global Environmental Change (IHDP). www.uni-bonn.de/ihdp.
25. Mayr, E., 1982. The Growth of Biological Thought: Diversity, Evolution and Inheritance. Cambridge, Massachusetts: Belknap Press.
26. Salthe, S., 1993. Development and Evolution: Complexity and Change in Biology. Cambridge, Massachusetts: The MIT Press.
27. Ahl, V., and T. F. H. Allen, 1996. Hierarchy Theory. New York: Columbia University Press.
28. Allen, T. F. H., and T. W. Hoekstra, 1992. Toward a Unified Ecology. New York: Columbia University Press.
29. Grene, M., 1969. Hierarchy: one word, how many concepts? In: L. L. Whyte, A. G. Wilson, and D. Wilson (eds.). Hierarchical Structures. New York: American Elsevier Publishing Company, Inc.: 56–58.
30. Pattee, H. H. (ed.), 1973. Hierarchy Theory. New York: George Braziller, Inc.
31. O'Neill, R. V., D. L. DeAngelis, J. B. Waide, and T. F. H. Allen, 1986. A Hierarchical Concept of Ecosystems. Princeton, New Jersey: Princeton University Press.
32. Prueitt, P. S., 1998. Manhattan Project. George Washington University, BCN Group. [http://www.bcngroup.org/area3/manhattan/manhattan.html]
33. Georgescu-Roegen, N., 1971. The Entropy Law and the Economic Process. Cambridge, Massachusetts: Harvard University Press.
34. Schumpeter, J. A., 1954. History of Economic Analysis. London: George Allen & Unwin Ltd.
35. Schneider, E. D., and J. J. Kay, 1994. "Life as a manifestation of the second law of thermodynamics." Mathematical and Computer Modelling, 19: 25–48.
36. Knight, F. H., 1964. Risk, Uncertainty and Profit. New York: A. M. Kelley: chap. 7.
37. FAO Agricultural Statistics [http://apps.fao.org/cgi-bin/nph-db.pl?subset=agriculture].
38. Martinez-Alier, J., G. Munda, and J. O'Neill, 1998. "Weak comparability of values as a foundation for ecological economics." Ecological Economics, 26: 277–286.
39. O'Neill, J., 1993. Ecology, Policy and Politics. London: Routledge.
40. Munda, G., 1995. Multicriteria Evaluation in a Fuzzy Environment: Theory and Applications in Ecological Economics. Heidelberg: Physica-Verlag.
41. Rosen, R., 1991. Life Itself: A Comprehensive Inquiry into the Nature, Origin and Fabrication of Life. New York: Columbia University Press.
42. Funtowicz, S. O., and J. R. Ravetz, 1990. Uncertainty and Quality in Science for Policy. Dordrecht, The Netherlands: Kluwer.
43. Funtowicz, S. O., and J. R. Ravetz, 1991. A New Scientific Methodology for Global Environmental Issues. In: R. Costanza (ed.). Ecological Economics. New York: Columbia University Press: 137–152.
44. Funtowicz, S. O., and J. R. Ravetz, 1992a. Three Types of Risk Assessment and the Emergence of Post-Normal Science. In: S. Krimsky and D. Golding (eds.). Social Theories of Risk. Westport, Conn. and London: Praeger: 251–273.
45. Funtowicz, S. O., and J. R. Ravetz, 1992b. "The Good, the True and the Post-Modern." Futures, 24: 963–976.
46. Funtowicz, S. O., and J. R. Ravetz, 1994a. "The Worth of a Songbird: Ecological Economics as a Post-Normal Science." Ecological Economics, 10: 197–207.
47. Funtowicz, S. O., and J. R. Ravetz, 1994b. "Emergent Complex Systems." Futures, 26: 568–582.
48. Faucheux, S., G. Froger, and G. Munda, 1997. Toward an integration of uncertainty, irreversibility, and complexity in Environmental Decision Making. In: J. C. J. M. van den Bergh and J. van der Straaten (eds.). Economy and Ecosystems in Change – Analytical and Historical Approaches. Cheltenham, UK: 50–74.
49. Giampietro, M., and K. Mayumi (guest eds.), 2000. "Multiple-Scale Integrated Assessment of Societal Metabolism: presenting the approach." Special Issue of Population and Environment, 22: 97–254.
50. Giampietro, M., and K. Mayumi (guest eds.), 2001. "Multiple-Scale Integrated Assessment of Societal Metabolism: case studies." Special Issue of Population and Environment, 22: 257–352.
14 Scaling in Integrated Assessment: Problem or Challenge?
JAN ROTMANS
International Centre for Integrative Studies, University of Maastricht, The Netherlands
Introduction
It is increasingly recognized that scale is a core methodological problem in many scientific fields. This is particularly true for Integrated Assessment, which by definition operates on multiple scales, both in time and space. Thus, the European Forum for Integrated Environmental Assessment (EFIEA) workshop on scaling organized by the International Centre for Integrative Studies (ICIS) in Maastricht, aimed at collecting state-of-the-art knowledge on scales from a variety of angles, was quite timely. Based on this state-of-the-art representation, the building blocks for a potential research agenda for scaling in Integrated Assessment can be defined. Regarding the state of the art, much scaling "handwork" has been done in Integrated Assessment (IA) modelling, but it mainly concerns statistical up- and down-scaling techniques for moving from a lower spatial scale level to a higher one and vice versa. Notwithstanding the usefulness of these statistical techniques, we now realize that much more is needed to represent multiple scales in IA-modelling. Furthermore, other tools commonly used in IA, such as scenario building, are in need of innovative multiple-scale methods. Finally, a largely unexplored field is the relation between scaling and uncertainty. In general, this paper offers a portfolio of ideas on how to deal with scaling in IA tools and methods. Rather than discussing in depth the relation between scaling and a particular IA instrument, we touch upon a number of scaling issues and present some ideas on how to incorporate multiple scales in IA tools and methods. First we address the overall methodological problem behind scaling in Integrated Assessment. Then we discuss scaling in IA-modelling, treating three different heuristic scaling methods that are currently used. Next, we sketch scaling in IA-scenarios, giving two recent examples of multiple-scale scenario assessments. We then discuss scaling in relation to the representation of agents, followed by a brief discussion of scaling and uncertainty. We finish with a set of recommendations for future IA research.
What is the problem?
To illustrate the problem of scaling we start with a remark by van der Veen, an economist at Twente University: "Economists are not used to thinking in terms of geographical space". He points out that key elements of economic science, such as information flows, money, prices and virtual markets, have no explicit geographical component. However, he argues, economic phenomena, e.g. the diffusion of information and technology and the transport of goods and materials (both intentional and unintentional), are spatial by nature. Paradoxically, economists feel most comfortable in an administrative space rather than in a geographical space (van der Veen, 2000).

In general terms, scale is the dimension used to measure or assess a phenomenon (Ostrom et al., 2000). Usually we distinguish two types of scale: the geographical or spatial scale and the temporal scale. As we will see in this paper, there is also a third important scale, which we refer to as the functional scale. Each scale has an extent and a resolution. The extent is the overall size or magnitude of the spatial or temporal dimension; the resolution is the precision used in measurement or assessment. For example, a model may have a spatial extent of a country and a resolution of 1 km by 1 km. Similarly, it may have a temporal extent of 50 years with a resolution of 5 years (i.e. results are determined for every 5-year increment). Levels are then defined as units of assessment located at the same position along a scale. Common spatial levels are micro, meso and macro, whereas common temporal levels are the short, medium and long term. Science is the search for, and the explanation of, observed patterns. Identifying particular patterns means that choices about scale, extent and resolution have to be made. Patterns may appear at one level and be lost at another (Gibson et al., 2000).
A cellular biologist, for example, identifies patterns at the level of an individual cell, whereas a doctor works with organs that are clusters of cells. Whereas the natural sciences have long understood the importance of scale and have relatively well-defined hierarchical systems of analysis, the social sciences have long worked with scales of less precision and greater variety, without well-defined conceptions of scale.

Scale is at the heart of Integrated Assessment because the complex societal problems it tries to address involve multiple scales in time and space. The different knowledge patterns that IA tries to combine, interpret and communicate involve a priori a variety of scales. Scale matters for IA for various reasons. First of all, the driving forces of complex societal problems arise from different domains, each with its own scale characteristics. Second, the impacts of complex problems also play out differently in different domains. Third, the response mechanisms (including institutional structures) also differ across scales. And finally, following Rotmans' (1998) definition of IA, scales are important for combining, interpreting and communicating different knowledge patterns in a sound and comprehensive IA.

There are three major problems involving scale in Integrated Assessment. The first is how to combine a variety of processes that differ by nature in time, i.e. how to order unlike processes in time. The second is how to do so in a spatially explicit way, i.e. how to order and allocate unlike processes in space. The third is that we need to go beyond the traditional scale dimensions to represent human behaviour: next to the temporal and spatial scale we need a third dimension that demarcates the functional relationships between agents. The deeper problem is that no unifying theory exists that is capable of describing and explaining much of the dynamic behaviour at the various scales of social, economic and ecological activity of interest to IA practitioners. This is in contrast to, for example, the unifying theory of mechanics, which explains the acceleration of small bodies in free fall as well as the orbits of large planetary bodies. Thus, improving our knowledge base of the interlinkages between large-scale and small-scale processes within and across scientific disciplines is one of the daunting challenges of our time. In the absence of an overarching scaling theory we mostly use heuristic methods in the IA field. In the sections that follow we present some of these heuristics as used and applied in IA-modelling, IA-scenarios, agent-based IA-representations and uncertainty analysis.
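The scale vocabulary introduced above (extent, resolution, levels) can be made concrete in a small sketch. The class name and the example values below are hypothetical, chosen only to illustrate the definitions, not taken from any particular IA-model:

```python
from dataclasses import dataclass

@dataclass
class Scale:
    """A scale has an extent (overall size) and a resolution (precision)."""
    extent: float      # overall magnitude of the dimension
    resolution: float  # smallest increment measured or assessed

    def n_units(self) -> int:
        """Number of assessment units along this scale."""
        return int(self.extent / self.resolution)

# A hypothetical model of a 300 km x 200 km country at 1 km x 1 km
# resolution, with a temporal extent of 50 years and 5-year time steps:
space_x = Scale(extent=300.0, resolution=1.0)   # km
space_y = Scale(extent=200.0, resolution=1.0)   # km
time_scale = Scale(extent=50.0, resolution=5.0)  # years

print(space_x.n_units() * space_y.n_units())  # 60000 grid cells
print(time_scale.n_units())                   # 10 time steps
```

Refining the resolution while holding the extent fixed multiplies the number of units, which is exactly why the choice of resolution is a modelling decision and not a technicality.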
Scaling in IA-Models
Integrated Assessment models are frameworks that structure the nature of a problem in terms of causalities. These frameworks are generally computer-based models that quantitatively describe the cause-effect relationships of a particular problem or issue, also in relation to other problems or issues. Most current IA-models are rooted in systems analysis and global modelling, a tradition that started in the early seventies with the Club of Rome (Meadows et al., 1972). The second generation of IA-models addressed environmental issues such as acid rain and global climate change more explicitly. The third and current generation focuses on sustainable development, also covering non-environmental issues such as human health, urban development, water, transport and tourism. IA-models are intended to be flexible and rapid assessment tools, enabling the exploration of interactions and feedbacks, and usable for communication with a broad group of stakeholders. Still, many IA-models face limitations and drawbacks, including their abstract level of representation, their deterministic character, and their inadequate treatment of the various types and sources of uncertainty.

Here we focus more particularly on how spatial scale is addressed in a number of IA-models. The great majority of IA-models operate at one particular spatial scale level. Many operate at the global level, with only a minority at the regional and local level. In terms of temporal scale, most IA-models act on a long time scale of 50 years or more. Hardly any model operates on multiple scale levels. Quite a few IA-models, however, do use heuristics and simple algorithms to tackle the issue of allocating the spatial distribution of certain types of environmental change, notably land use. In general, we can distinguish three such techniques:
- grid-cell based modelling
- cellular automata modelling
- multiple-scale regression modelling.

Grid-cell based modelling
Grid-cell IA-models make use of a grid pattern laid over the global functions taken up in these models. Usually these IA-models are modular in structure, and different modules (submodels) may have different grid-cell resolutions. There is a certain imbalance in the grid-cell representation of the processes in these models. Looking at the temporal disaggregation of the various modules, the time horizon is common, but the time steps of the various modules vary considerably, from one day to five years. In terms of spatial disaggregation the situation is more imbalanced. The major social, economic, demographic and technological driving forces are represented in a highly aggregated manner, not at the grid-cell level. The physical modules, such as the atmosphere-ocean or terrestrial-aquatic modules, do act at the grid-cell level. So the state and impact modules are often represented at a fairly detailed grid-cell level, e.g. on a grid of 0.5 degrees latitude by 0.5 degrees longitude.
And finally, the response functions, if included at all, are not grid-cell based. So we see a serious mismatch between the major driving forces of long-term change, which operate at the global or world-regional level, and the physical processes, which are modelled at a fairly detailed grid-cell level. For instance, in the IMAGE 2.1 model, one of the more advanced IA-models of global climate change, there is a laudable attempt to simulate in geographic detail the transformation of land cover as a result of changes in land use, climate, demography and economy (Alcamo et al., 1998). However, a major determining factor behind land cover is the land management parameter, which is specified at the world-regional level, with 13 world regions distinguished in the model. A final comment is that there is no dynamic interaction among the grid cells in these models: the representation of dynamic processes is identical for each grid cell, without the cells dynamically influencing each other as is the case in cellular automata models. So the overall conclusion is that the grid-cell presentation of IA-models suggests much more precision than can be delivered, and may even mislead non-modellers.

Cellular automata modelling
Cellular automata models are based on grid cells that communicate with each other in an intelligent manner. The dynamic state of each cell depends on the states of the surrounding cells, the characteristics of the cell itself, and the distance to the core cell. Usually these models operate at two different scale levels: the local level (micro) and the regional level (macro); see, for example, the cellular automata models developed by Engelen et al. (1995). In the case of dynamic land use representation, at the local level the suitability for each land use type is determined per cell, while at the regional level the amount of land needed is calculated and allocated. An integrated model then couples social, economic and ecological processes to the amount of land thus estimated and allocated. The term cellular automata model suggests that local dynamics determines the ultimate land use, but the real dynamics is driven by macroscopic trends rather than by suitability at the micro-scale. Other drawbacks are that the rules for determining suitability are rather controversial, the rules behind the 'clustering mechanism' are not well known, and the relations between cells depend on the scale levels themselves. So the overall conclusion is that cellular automata models seem most suitable for the micro-scale level on a relatively short time scale. Their reliability at the macro-scale level seems rather low, as does their reliability on longer time scales. The presentation of geographically-explicit results at both the macro- and micro-level may therefore be misleading.
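The two-level logic of such models – a macro level that fixes total land demand and a micro level that decides where it goes, with each cell's attractiveness depending on its neighbourhood – can be sketched in a few lines. This is a deliberately minimal illustration, not a reproduction of the Engelen et al. models; the suitability values and the neighbourhood weight are invented for the example:

```python
import numpy as np

# Invented 10x10 micro-level suitability map (would come from local data).
rng = np.random.default_rng(0)
suitability = rng.random((10, 10))

def neighbour_bonus(lu: np.ndarray) -> np.ndarray:
    """Score each cell by how many of its 8 neighbours already carry the use."""
    padded = np.pad(lu.astype(float), 1)
    return sum(
        padded[1 + dy:11 + dy, 1 + dx:11 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )

def allocate(demand: int) -> np.ndarray:
    """Macro level fixes total demand; micro level decides where it goes."""
    lu = np.zeros((10, 10), dtype=bool)
    for _ in range(demand):
        # suitability plus a clustering incentive from already-used neighbours
        score = suitability + 0.1 * neighbour_bonus(lu)
        score[lu] = -np.inf          # a cell can be allocated only once
        best = np.unravel_index(np.argmax(score), score.shape)
        lu[best] = True
    return lu

result = allocate(demand=20)
print(result.sum())   # exactly 20 cells carry the land use
```

Note how the sketch mirrors the criticism in the text: the total (20 cells) is imposed from the macro level, and the micro-level "dynamics" only decides the placement; the 0.1 clustering weight is exactly the kind of rule whose justification is controversial.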
Multiple-scale regression modelling
Multiple-scale regression models include two or more spatially-explicit scales at which land use is allocated. An example is the CLUE model described in Verburg (2000). At a relatively coarse scale, general land use trends and the land use driving mechanisms that act over longer distances are calculated. At a relatively fine scale, local land use patterns are calculated, taking local constraints into account. The land is allocated at the two levels (coarse and fine) based on complex interactions among socio-economic, biophysical and land use constraints. The dynamics of changing land use is based on correlations (regression analysis) rather than on causal mechanisms. Because these correlations are assumed to be constant, the time scale is relatively short (5–10 years). So the overall conclusion is that multiple-scale modelling seems a promising method, but it is directed more towards the spatial component than the temporal component, and its correlation basis makes it a quasi-static rather than a dynamic method.
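The coarse/fine logic of such models can be caricatured as follows. All coefficients and data below are invented for illustration; in a real application such as CLUE they would be estimated by regression on land use survey data:

```python
import numpy as np

# Coarse level: regional land demand from aggregate drivers via a
# hypothetical fitted regression (coefficients invented for illustration).
b0, b1, b2 = 10.0, 0.2, -0.05
population, income = 120.0, 40.0
regional_demand = b0 + b1 * population + b2 * income   # = 32 grid cells

# Fine level: local suitability from local covariates, again with
# invented coefficients standing in for an estimated regression.
rng = np.random.default_rng(1)
slope = rng.random(100)    # hypothetical covariates for 100 cells
soil = rng.random(100)
local_score = 0.6 * soil - 0.4 * slope

# Allocation: fill the regionally imposed demand with the best-scoring
# cells, subject to a local constraint (protected cells are excluded).
protected = rng.random(100) < 0.1
local_score[protected] = -np.inf
n = int(round(regional_demand))
chosen = np.argsort(local_score)[::-1][:n]
print(len(chosen))   # the fine scale fills exactly the coarse-scale demand
```

The quasi-static character criticised in the text is visible here: the regression coefficients are frozen, so rerunning the allocation for a later period can redistribute land but cannot represent a change in the underlying mechanisms.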
All the heuristic scaling methods presented above have their pros and cons, and it is hard to judge whether one is to be preferred over another. Further, these methods are not mutually exclusive at all; unfortunately, however, they represent different schools that hardly communicate with each other (Verburg, 2000). Blending the cellular automata approach with the multiple-scale approach, with correlations replaced by causalities, would already mean a tremendous step forward.

Scaling in IA-scenarios
Scenarios are descriptions of journeys to possible futures that reflect different perspectives on past, present and future developments with a view to anticipating the future (van Notten and Rotmans, 2001). Scenario analysis has evolved significantly over the past decades. In their early days, scenarios were used primarily as planning and forecasting tools, displaying a rather mechanistic and deterministic worldview. Later, scenario analysis moved beyond merely fulfilling a decision-support function to one that also supports a more open form of exploration. Nowadays, scenarios have evolved into powerful exploratory tools: they do not predict the future, but rather paint pictures of possible futures and explore the various outcomes that might result if certain basic assumptions are changed. Currently, scenarios are often used to broaden, deepen and sharpen the mindset of stakeholders involved in a process of exploring possible futures (van Notten, 2002). In the field of scenario development, scaling is an underdeveloped issue. A screening of 40 existing scenarios on sustainable development indicated that almost all were developed at one scale level (Rotmans et al., 2000). This mono-scale orientation is surprising but also worrisome. The few exceptions include the latest IPCC scenarios (IPCC, 2000), the so-called SRES scenarios, and the scenarios developed for the third Global Environmental Outlook (UNEP, 2002).
The IPCC SRES scenarios focus on changes in economic, technological and demographic trends and energy use as major drivers of global climate change. Specifically, the scenarios explore the global and regional dynamics that may result from changes at the political, economic, demographic, technological and social level (see Figure 14.1). The distinction between classes of scenarios was broadly structured by defining them ex ante along two dimensions. The first dimension relates to the extent of economic convergence and of social and cultural interactions across regions; the second concerns the balance between economic objectives and environmental and equity objectives. This process led to the creation of four scenario "families" or "clusters", each containing a number of specific scenarios.

Figure 14.1: The IPCC SRES scenarios (A1, A2, B1, B2) as branches of a two-dimensional tree. The dimensions indicate the relative orientation of the different scenarios in relation to economic or environmental concerns, and global and regional development patterns

The first cluster of scenarios [A1] is characterised by fast economic growth, low population growth and the accelerated introduction of new, cleaner and more effective technologies. Under this scenario, social concerns and the quality of the environment are subsidiary to the principal objective: the development of economic prosperity. Underlying themes combine economic and cultural convergence and the development of economic capacity with a reduction in the difference between rich and poor, whereby regional differences in per capita income decrease in relative (but not necessarily absolute) terms. The second cluster [A2] also envisages a future in which economic prosperity is the principal goal, but this prosperity is expressed in a more heterogeneous world. Underlying themes include the reinforcement of regional identity, with an emphasis on family values and local traditions, and strong population growth. Technological change takes place more slowly and in a more fragmented fashion than in the other scenarios. This is a world with greater diversity and larger differences across regions. In the third cluster [B1], striving for economic prosperity is subordinate to the search for solutions to environmental and social problems (including problems of inequity). While the pursuit of global solutions results in a world characterised by increased globalisation and fast-changing economic structures, this is accompanied by the rapid introduction of clean technology and a shift away from materialism. There is a clear transformation towards a more service- and information-based economy. Finally, the fourth cluster [B2] sketches a world that advances local and regional solutions to social, economic and ecological problems. This is a heterogeneous world in which technological development is slower and more varied, and in which considerable emphasis is placed on initiatives and innovation from local communities. Due to higher-than-average levels of education and a considerable degree of organisation within communities, the pressure on natural systems is greatly reduced.
Martens and Rotmans (2002) already mentioned some of the shortcomings of the IPCC SRES-scenarios. Their scope is rather narrow, focusing, as mentioned, on population growth, technological and economic development as the major
336 SCALING IN INTEGRATED ASSESSMENT: PROBLEM OR CHALLENGE?
drivers, whereas the broader social, cultural and institutional context is lacking. The scope of these scenarios was broadened by Martens and Rotmans (2002), who related them to key developments in water, biodiversity, health and tourism. Major surprises, bifurcations and additional policy interventions are also missing, indicating the rather extrapolative and linear thinking underlying these futures. Further, the quantitative aspect is so dominant that it impairs the broad scope introduced by the underlying storylines. From the multi-scale perspective, the IPCC SRES-scenarios represent a step forward compared to previous sets of IPCC-scenarios, in that they distinguish between global and regional scenarios. Still, the coupling between the global and regional scale levels is rather loose and not at all dynamic. The global and regional scenarios themselves were not developed with a consideration of how they feed back to each other. These IPCC SRES-scenarios are therefore not really multi-scale, and this rudimentary multi-scale approach needs to be improved over the next couple of years. A better example of a multi-scale scenario endeavour is the GEO-3 scenario process. In developing the UNEP-GEO-3 scenarios, there was prolonged discussion of global versus regional and centralised versus decentralised development of, and representation in, the scenarios. Regional participation and flexibility would be needed to develop the scenarios, but global coherence would also need to be maintained. It was decided to fully incorporate regional views and participation while maintaining a general global framework building on the extensive global work that had already been undertaken (e.g. IPCC 2000, Cosgrove and Rijsberman 2000, and Raskin et al. 1998). There would be "mutual conditioning": globally consistent themes were to be developed, and the regions then given the flexibility to take these issues further.
In each region a core team was put together, and existing global scenarios developed by the Global Scenario Group (Raskin et al., 1998) were used to inform these regional teams. Based on this global scenario context, the regional teams produced regional storylines, which evolved into regional narratives; these then fed back to, and ultimately led to modifications of, the global scenarios. Initially, there remained an unnatural separation between the global and regional narrative scenarios, with little of the detail in the regional narratives represented in the global narratives, and little of the global context and the importance of relationships between the regions reflected in the regional narratives. To address this, the global and regional narratives were integrated to present more holistic stories of the next three decades. The social and environmental implications across the different scenarios at the global and regional scales were then assessed, drawing on more detailed quantitative analyses that had been undertaken in support of the scenario narratives. Further, regional experts looked at the implications of the different scenarios for specific events or developments within each region. Blending the regional and global narratives was difficult, and it took some time for a shared understanding to be achieved; more feedback
from the regions would have led to an even higher level of integration of the global and regional scenarios. But the overall result was interesting, and led to a set of four integrated scenarios, named market first, policy first, security first and sustainability first. For an elaborate description of the scenarios the reader is referred to the GEO-3 report (UNEP, 2002). A final scenario example which is interesting from the multi-scale perspective is the VISIONS project. The VISIONS project (1998–2001) was an innovative endeavour in the development of scenarios and integrated visions for Europe. VISIONS' overarching goal was to demonstrate the many linkages between social-cultural, economic and environmental processes, and to show the consequences of these interactions for the future of Europe and European regions from an integrated viewpoint. To achieve these ambitions, a variety of methods were used to develop challenging scenarios for Europe in an innovative and scientifically sound way. It was therefore decided to develop exploratory scenarios that investigate a broad range of long-term futures, rather than decision scenarios that primarily generate short-term strategic options. The scenarios would be highly divergent, descriptive rather than normative in nature, and would integrate relevant social-cultural, economic, environmental and institutional dimensions. The project was meant to be an experimental arena for testing participatory methods in conjunction with IA-models, supporting the policy-making process for sustainable development. A unique feature of the endeavour was the use of multiple time and geographical scales. The final scenarios include staggered time intervals that reach 50 years into the future. Global developments provide the context for European scenarios and for three sets of scenarios for three representative European regions: the North-West UK, the Italian city of Venice and the Dutch Green Heart area.
For these three European regions and for Europe as a whole, different sets of scenarios were developed, using different combinations of participatory and analytical processes. For Europe as a whole, a participatory process of mutual learning was used, based on the so-called "storyline" approach. This approach combined knowledge provided by experts through lectures with "free-format" brainstorming by stakeholders. These storylines were fleshed out and enriched, which ultimately led to three European scenarios: Big is Beautiful, Knowledge is King and Convulsive Change. For Venice, the Green Heart area and North-West UK, sets of stakeholder-based scenarios were also developed: four scenarios each for NW-UK and Venice, and three each for the Green Heart and Europe. A common format of factors-actors-sectors was used for designing these scenarios, which describe paths to different European and regional futures. The factors are: equity, employment, consumption and environmental degradation. The sectors are: water, energy, transport and infrastructure. And the actors are: governmental bodies, NGOs, businesses and scientists. The scenarios were developed from qualitative stakeholder input and then underpinned with quantitative information where deemed appropriate. Further, action-reaction
mechanisms, bifurcations and surprises were included to counter the tendency of many scenarios to merely extrapolate from the past and present and to exclude deviations from a particular line of development. In the final phase of the VISIONS project the regional and European scenarios were combined into integrated visions. These visions are narratives that describe the complex patterns emerging from the dynamics caused by action-reaction patterns, patterns that are overlooked in any single-scale scenario study. Integrated visions help to assess complex dynamics and to identify conflict and consensus between different scales and perspectives. The framework for an integration methodology was developed at the start of the project and further refined during the development of the scenarios. Scenarios were compared in terms of tensions and similarities. This comparative analysis was used to filter a sensible selection out of the 144 (4 × 4 × 3 × 3) possible scenario combinations. Interesting combinations of dynamics between Europe and the regions, and interregional interactions that cannot be seen at a single level, were explored in detail. Two similarity quartets and one tension quartet were selected following the filtering and exploration of the combinations; these quartets indicated harmony and conflict, respectively, between regional and European interests. Overall, this resulted in three integrated visions. Living on the Edge depicts a European risk society with many extremities and chaotic situations, managed by some form of permanent crisis management. In Europe in Transition the major regional and European developments mutually reinforce each other, leading to a transition to a modern European society with structural changes in the fields of work, lifestyles, governance, technology and economy, but with many growing pains.
And finally, Shadows of Europe Ltd sketches a European Superstate with scale-enlargement in business and government, but also in research, education and NGOs, resulting in a Europe of competition and market functioning, with winning but also losing regions: a divided Europe with many tensions, and a crisis in public governance that generates much confusion. For more information on VISIONS the reader is referred to Rotmans et al. (2000 and 2003). In general, playing with different spatial and temporal scale levels in scenarios is essential. Usually, the higher the scale level, the more ambitious the policies formulated in scenarios. Implementing those policies at lower scale levels, however, is another story. Exercises that make explicit the tensions between the different scale levels, in terms of policy formulation versus realisation, are thus very useful. Similarly, many policy strategies in scenarios are formulated for the long term. If it is not clearly indicated what those long-term strategies mean in terms of concrete policy actions, the scenario exercise is only of limited value. And finally, the driving forces and autonomous dynamics need to be expressed and linked at different scale levels. A nice example is globalisation: rather than supposing that globalisation develops similarly across different scale levels, one could suppose countervailing responses at lower scale levels, such as regionalisation.
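The combinatorial filtering step used in VISIONS rests on simple counting: one scenario is chosen per scale level, giving 4 × 4 × 3 × 3 = 144 possible combinations. A minimal sketch follows; only the three European scenario names come from the text, while the regional labels are illustrative placeholders:

```python
from itertools import product

# European scenario names as given in the VISIONS project; the
# regional labels below are illustrative placeholders only.
europe = ["Big is Beautiful", "Knowledge is King", "Convulsive Change"]
nw_uk = [f"NW-UK scenario {i}" for i in range(1, 5)]              # four
venice = [f"Venice scenario {i}" for i in range(1, 5)]            # four
green_heart = [f"Green Heart scenario {i}" for i in range(1, 4)]  # three

# Every combination of one scenario per region plus one for Europe.
combinations = list(product(nw_uk, venice, green_heart, europe))
print(len(combinations))  # 4 * 4 * 3 * 3 = 144
```

A comparative analysis of tensions and similarities would then score each combination to select the quartets that are explored in detail.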
Agents and scale
An emerging development in the modelling arena is agent-based modelling. Within the Integrated Assessment community, too, agent representation has emerged as an important issue (Rizzoli and Jakeman, 2002). The basic questions we could ask are: 'why do we need agent-based models?' and, in particular, 'why do we need agent-based IA-models?' A number of arguments can be put forward. Perhaps the most valid is that we want to enhance our still poor insight into the dynamic interplay among agents, both individual agents and collective agents such as institutions and organisations. Until recently, human behaviour, and in particular the behaviour of agents such as stakeholders, has been left out of IA-models and scenarios, apart from the representation drawn from neo-classical economics, where agent behaviour follows rational, price-driven decision rules. Jaeger et al. (2000) refer to this as the rational actor paradigm. Most of us know, however, that this is not an adequate way of representing human agents, especially in IA-models. This touches upon a second reason for implementing human behaviour in IA-models: we urgently need to offer an alternative to the rational actor paradigm that still prevails in IA-models. Especially important is the representation of the interaction among human agents, directly and indirectly influencing each other, which is largely neglected in the rational actor paradigm. A further reason is that the inclusion of institutional dynamics, through the representation of collective agents in IA-models, is of crucial importance. Whereas the economic, ecological and social dimensions are often included in IA-models, the institutional dimension is almost always lacking. Finally, agent representation in IA-models seems a promising way of involving stakeholders more actively in the modelling process.
In general, we can distinguish three categories of stakeholders in the IA-modelling process: stakeholders as advisors, where the knowledge and experience of stakeholders is used; stakeholders as users, where stakeholders use IA-models for various reasons, either strategic and managerial, or educational or moral; and finally, stakeholders as actors, where the stakeholders' behaviour is part of the IA-model. From a methodological point of view, the last case is the most interesting but also the most troublesome, as we will discuss below. Incorporating agency into IA-models does, indeed, pose problems. The first problem is that we have to deal with a wide range of agents, varying from individual agents such as consumers to collective agents such as institutions and organisations. Due to the high abstraction level of physical and geographical processes in many IA-models, collective agents naturally coincide more with this level of abstraction than individual agents do. But the majority of agent-based models focus on individual agents, representing many of them, sometimes hundreds if not thousands, all identical in their behaviour. Hardly any agent-based model deals with the representation of
institutions or organisations, so there is not much to learn from, apart from some theoretical cognitive research and conceptual modelling work on collective agents (Conte, 2001). In general, the cognitive basis for the representation of collective agents such as institutions and organisations is poor. Conte and Castelfranchi (1999) introduced some ideas on social norms as attributes that distinguish institutions and organisations from individual agents. The final problem relates directly to the scaling issue: the variety of agents operate at different scale levels. Agents, however, do not operate primarily on a geographical scale level, but on a functional scale level, which relates to the nature of the functional relationships they have with other agents. This is the "magic" third scaling dimension, next to space and time, to which we referred earlier in this article. One way of representing different functional scale levels for agents is to use a discretization framework. An example of such a discretization is the multi-scale level concept formulated for the innovation of technologies by Geels and Kemp (2000), which distinguishes between the macro-, meso- and micro-level. Applying this to the multi-agent setting delivers three functional scale levels for different kinds of agents. At the macro-level, transnational authorities operate, such as UN-agencies and multinationals. At the meso-level institutions and organisations operate, and at the micro-level individual agents. The structure chosen for agents, as developed within the FIRMA-project (see box), is that of social, autonomous agents with the following characteristics: goals, beliefs, social norms and modes of interaction.
Goals are states of the world desired by a particular agent, which form the basis for agent activities; beliefs represent the particular worldviews (perspectives) of an agent; social norms are obligations on a set of agents to accomplish or abstain from a certain action; and modes of interaction represent the different manners and levels of interaction between agents. We then distinguish between individual agents and collective agents such as institutions, the latter defined as supra-individual systems deliberately designed or spontaneously evolved to regulate the behaviour of individual agents. What distinguishes collective agents from individual agents is the interest they have: a stake to pursue a certain goal for a group of agents. The rationale behind these attributes is that agents function partly in an autonomous manner, based on their internal set of criteria, and partly through interaction with other agents. Relating these attributes to the different scale levels yields the picture depicted in Figure 14.2. Goals and beliefs of agents are related to macro-level developments: perspectives change at the macro-level, influencing the goals and beliefs of agents. The social norms of agents are related to regime developments at the meso-level, where interests play an important role. Constraints arising from local circumstances play a role at the micro-level, and can be considered niche developments.
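The agent structure described above can be rendered as a simple data type. The following is a hypothetical sketch for illustration only, not the actual FIRMA implementation; all class and field names are our own:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Agent:
    """Social, autonomous agent with the four attributes named in the text."""
    goals: List[str]          # desired states of the world
    beliefs: List[str]        # worldviews (perspectives) of the agent
    social_norms: List[str]   # obligations to accomplish or abstain from actions
    interactions: List[str]   # modes of interaction with other agents

@dataclass
class CollectiveAgent(Agent):
    """Institution or organisation: distinguished from individual agents
    by an interest, i.e. a stake pursued for a group of agents."""
    interest: str = ""
    members: List[Agent] = field(default_factory=list)

# A water board as a collective agent pursuing flood protection:
board = CollectiveAgent(
    goals=["keep flood risk low"],
    beliefs=["climate change increases peak discharges"],
    social_norms=["comply with national water law"],
    interactions=["negotiation", "public consultation"],
    interest="flood protection for the region",
)
```

Making the interest an attribute of the collective agent only, as above, mirrors the distinction the text draws between individual and collective agents.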
Box 14.1: The FIRMA project
The European project FIRMA (Freshwater Integrated Resource Management with Agents) aims to improve water resource planning by combining agent-based modelling and integrated assessment modelling. The core idea is to represent, in an integrated manner, the dynamic behaviour of water managers in their specific institutional and organisational context on the one hand, and the physical, hydrological, social and economic aspects of water resource management on the other (Gilbert et al., 2002). Six case-studies across Europe have been selected as study objects; one of them is the Meuse, in particular the Limburg part of it in the Netherlands. Below we briefly discuss this case-study from an integrated angle.
Meuse Case-study
The ongoing planning of the Dutch part of the Meuse is a complex, long-term project, called the Maaswerken, involving three main activities: flood control, improvement of the navigation route and nature development. These will be achieved by a combination of deepening and widening of the summer bed, lowering of the floodplains and side gullies, altering embankments, and upgrading the navigation infrastructure. The proposed model is meant to be a tool for developing a long-term vision of the management of the river Meuse. Because of the complexity of this case study, a successful modelling solution can only be achieved by applying an integrated approach to assess the impacts of the planned measures, incorporating the various perspectives of stakeholders by means of agent-based social simulation. The agent-based model applied in the Meuse case study is based on a complex, cognitive agent approach developed by social psychologists and integrated assessors (Krywkow et al., 2002). Agents represent stakeholders, referred to as actors, with their particular world views and actions within the modelled target system.
The internal structure of a cognitive agent consists of goals, beliefs, norms and constraints (Conte and Castelfranchi, 1995). The agent may be seen as an independent subprogram capable of reflecting on its own goals and beliefs by comparing them to the changing environment at different functional scale levels. The goals and beliefs can adapt to a changing world as well as to the changing behaviour of other agents. Adjustments are triggered when threshold values are reached, such as the height of dykes, the area of nature development, the amount of gravel to be extracted, the costs of measures, etc. The Integrated Assessment model portrays the relevant processes related to the management of the Meuse. It is structured according to the concept of Pressure-State-Impact-Response (PSIR) (Rotmans et al., 1997). The simulation model includes simple hydrological modules to calculate the effects of various river engineering alternatives of the Maaswerken project on the state of the water balance in the province of Limburg. Impact modules relate these results to consequences for river functions such as safety, shipping and nature. Input to the IA-model is derived from a set of perspective-based scenarios that sketch possible changes in climate and socio-economic boundary conditions in a consistent manner.
The Integrated Assessment model and the agent-based model have been coupled in the form of a prototype. The prototype is a highly simplified form of the conceptual model, and thus of reality, but it allows some straightforward experiments that shed light on the complex interactions between the agents' world and the physical world. In this way we are able to simulate and analyse two types of processes: (i) agent-environment interaction: responding to changing river bed geometry, nature development, floods, pollution, side-effects of measures, etc.; and (ii) agent-agent interaction: communication about planned measures, negotiation according to the goals and beliefs of the agents, coalition forming, etc. Figure 14.3 gives a representation of an institutional agent as part of the agent-based IA-model. The figure shows how the different attributes of the institutional agent (goals, beliefs, social norms and constraints) are coupled to different functional scales. Whereas the goals and beliefs are influenced by trends and developments at the macro-level, social norms are more determined by regime developments at the meso-level, and constraints are set by niche developments at the local level. So while an institutional agent as a whole operates at the meso-level, the other functional scale levels do influence the attributes of the agent.
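The coupling of an environment model with threshold-driven agents can be illustrated with a toy loop. Everything here is a hypothetical sketch: the variable names, the threshold value and the simple linear "hydrology" are our own inventions, not the FIRMA prototype:

```python
def update_environment(state, measures):
    """Toy 'hydrological module': river engineering measures change the state."""
    state["flood_risk"] -= 0.1 * measures.get("deepening", 0)
    state["nature_area"] += measures.get("nature_development", 0)
    return state

def agent_step(agent, state):
    """Agent-environment interaction: goals adjust when a threshold is crossed."""
    if state["flood_risk"] > agent["risk_threshold"]:
        agent["goals"].append("demand stronger flood protection")
    return agent

state = {"flood_risk": 0.7, "nature_area": 120.0}
water_manager = {"risk_threshold": 0.5, "goals": []}

for year in range(3):  # three simulated planning rounds
    state = update_environment(state, {"deepening": 1})
    water_manager = agent_step(water_manager, state)
```

Agent-agent interaction (negotiation, coalition forming) would add a second loop in which agents compare and adjust goals among themselves; the sketch above covers only the agent-environment side.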
Figure 14.2: Different scale levels of agent representation. Macro-level (landscape): transnational authorities; meso-level (regimes): institutions and organisations; micro-level (niches): individual agents.
Figure 14.3: Multiple-scale representation of an institutional agent
In conclusion, the incorporation of agents in IA-models is still in its infancy. Representing agents at different functional scale levels is a bridge too far at this point in time. Conceptually, we can use a discretized multi-scale level concept and link different functional scale levels to different types of agents, but no operational IA-model has yet implemented such a multi-functional scale concept. Some prototype versions are emerging, however, for example within the FIRMA-project, which offers some insights for the further development of multiple-scale agent-based IA-models.
Uncertainty and scale
The relationship between uncertainty and scale is hardly addressed in the IA-literature (Rotmans, 1998). In 1999 an EFIEA workshop on uncertainty was organised in Baden bei Wien, but the scaling issue was largely ignored there. Nevertheless, there is a natural relationship between uncertainty and scale. When we scale processes up or down, we automatically introduce new errors and thus uncertainties. By using up- and down-scaling techniques we disaggregate or aggregate in time and/or space. For instance, we can use statistical techniques or metamodels to disaggregate model outcomes in time
Figure 14.4: Typology of sources of uncertainty. The typology distinguishes uncertainty due to variability (natural randomness, value diversity, behavioural variability, societal randomness, technological surprise) from uncertainty due to lack of knowledge, the latter comprising unreliability (inexactness, lack of observations/measurements, practically immeasurable) and structural uncertainty (conflicting evidence, reducible ignorance, indeterminacy, irreducible ignorance).
from monthly estimates to daily estimates, or disaggregate in space from a 5 × 5 grid cell pattern to a 0.5 × 0.5 grid cell pattern. By using these statistical techniques and metamodels we introduce new errors, and thus new sources of uncertainty. However, these uncertainties often do not appear in the results presented. A commonality between uncertainty and scale is that both issues are often treated by IA-modellers as technical problems that can be "solved" by analytical techniques rather than subjected to profound analysis. Based on lessons from IA-research over the last decades, however, current insights indicate that these issues need to be addressed in a broader, multi- and inter-disciplinary context, and preferably in a trans-disciplinary context involving a broad range of stakeholders. Uncertainty is a many-headed monster and therefore difficult to define. Specifying different types and sources of uncertainty helps to clarify its relations with scales. One way of doing this is to use a typology of uncertainties that takes account of different sources of uncertainty. We use here a typology developed by van Asselt (2000) (see Figure 14.4), which enables analysts to differentiate between uncertainties and to communicate about them in a more constructive manner. The taxonomy is meant to be generic, i.e. applicable to all contexts. This implies that it should be possible to trace revealed uncertainties back to one or more sources in the taxonomy. At the highest aggregation level, the taxonomy distinguishes between two major sources of uncertainty: that due to variability, and that due to limited knowledge. Uncertainty due to variability reflects the fact that the system or process under consideration can behave in different ways or is valued differently; variability is thus an attribute of reality (ontological).
As indicated in Figure 14.4, the sub-sources considered are natural randomness, value diversity, behavioural variability, societal randomness and technological surprise.
Uncertainty due to limited knowledge refers to the limited state of our current knowledge and to the limited knowledge of the analysts performing a study (epistemological). Sub-sources considered for this source are unreliability (inexactness, lack of observations/measurements, practically immeasurable) and structural uncertainty (conflicting evidence, reducible ignorance, indeterminacy and irreducible ignorance). In further exploring the relationship between uncertainty and scale, we have to specify the nature of the uncertainty in terms of these various sources. The continuum of uncertainty thus ranges from unreliability on the one hand to more fundamental uncertainty, also referred to as structural uncertainty, on the other. Uncertainties in the category of unreliability are usually measurable or can be calculated, in the sense that they stem from well-understood systems or processes. This implies that, in principle, either margins or patterns can be established, so that the uncertainty can usually be described quantitatively (either in terms of a domain or a stochastic equation). At the other end of the continuum are fundamental uncertainties, which can at best be roughly estimated. Such fundamental uncertainty generally arises from conflicting evidence, ignorance and indeterminacy. From analysing the different sources of uncertainty it becomes obvious that the structural uncertainties are fundamental by nature and scale-independent: relating these uncertainties to various scales, either temporal or spatial, will not change their nature. In the case of unreliability as a source of uncertainty, the coupling with various scales can make a difference. In particular for uncertainty sources such as inexactness and lack of data/measurements, the relation with the scale level is of vital importance. This means that we first have to identify the nature of the uncertainties and their underlying sources before we can couple uncertainties to scale levels.
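The taxonomy itself is a small tree, and encoding it as a data structure makes the requirement of "tracing a revealed uncertainty back to a source" concrete. The dictionary layout and function below are merely an illustrative convenience, not part of van Asselt's own presentation:

```python
# Van Asselt's (2000) typology of uncertainty sources (Figure 14.4).
TAXONOMY = {
    "variability": [            # ontological: an attribute of reality
        "natural randomness",
        "value diversity",
        "behavioural variability",
        "societal randomness",
        "technological surprise",
    ],
    "lack of knowledge": {      # epistemological: limits of our knowledge
        "unreliability": [
            "inexactness",
            "lack of observations/measurements",
            "practically immeasurable",
        ],
        "structural uncertainty": [
            "conflicting evidence",
            "reducible ignorance",
            "indeterminacy",
            "irreducible ignorance",
        ],
    },
}

def classify(source):
    """Trace a revealed uncertainty back to its branch of the taxonomy."""
    if source in TAXONOMY["variability"]:
        return "variability"
    for branch, subsources in TAXONOMY["lack of knowledge"].items():
        if source in subsources:
            return f"lack of knowledge / {branch}"
    raise ValueError(f"unknown uncertainty source: {source}")
```

Such a mapping also makes the distinction drawn in the text operational: sources under "structural uncertainty" are fundamental and scale-independent, while those under "unreliability" are the ones whose coupling with scale levels matters.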
In studying the relations between uncertainty and scales we distinguish between the coupling of temporal scale and uncertainty vis-à-vis the coupling of spatial scale and uncertainty. Here we do not take into account the third dimension of scale, the functional scale. Regarding the linkage of temporal scale and uncertainty, a key issue is whether the uncertainty changes if the temporal scale changes. Many systems show variability over shorter time scales (e.g. daily rainfall) that often averages out over longer time periods (e.g. monthly rainfall). We have seen that variability is a key source of uncertainty, which implies that the uncertainty will necessarily increase if we try to model processes at a finer temporal resolution. This means that downscaling in time, i.e. moving from a coarser to a finer temporal resolution, will add a new source of uncertainty (and thus error): higher temporal variability. On the other hand, a coarser temporal resolution implies higher unreliability as a source of uncertainty: the greater the time horizon, the greater the unreliability, due to the uncertain knowledge of future (or past) political, social-cultural, economic, environmental and institutional change, and the lack of data and observations.
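The averaging-out of short-term variability can be made concrete with synthetic data. The numbers below are invented for illustration; the point is only that the relative spread (coefficient of variation) of monthly means is smaller than that of the underlying daily series:

```python
import random

random.seed(42)  # reproducible synthetic "rainfall"

# 360 days of hypothetical daily rainfall with high day-to-day variability.
daily = [max(0.0, random.gauss(5.0, 4.0)) for _ in range(360)]

# Aggregate to twelve 30-day monthly means.
monthly = [sum(daily[m * 30:(m + 1) * 30]) / 30 for m in range(12)]

def coefficient_of_variation(xs):
    mean = sum(xs) / len(xs)
    variance = sum((x - mean) ** 2 for x in xs) / len(xs)
    return variance ** 0.5 / mean

cv_daily = coefficient_of_variation(daily)      # large relative spread
cv_monthly = coefficient_of_variation(monthly)  # variability averages out
```

Going the other way, disaggregating the monthly series back to daily values, requires re-introducing this variability, which is exactly the new source of uncertainty (and error) that downscaling in time adds.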
The same line of reasoning holds for the linkage between spatial scale and uncertainty. Many systems show variability on a smaller spatial scale (e.g. the local scale) that often averages out on a larger spatial scale (e.g. the national scale). Given that variability is a key source of uncertainty, this implies that the uncertainty will necessarily increase if we try to model processes at a finer spatial resolution. This means that downscaling in space, i.e. moving from a larger to a smaller spatial scale, will add a new source of uncertainty: higher spatial variability. On the other hand, a larger spatial scale may imply higher unreliability, because it may be harder to obtain data and observations on a larger scale. Apart from these single-scale uncertainty relations, a multiple-scale spatial analysis induces more uncertainty than a single-scale analysis. A serious problem here is the linkage of the scale levels, which is itself a large source of uncertainty because of our fundamental lack of knowledge of the interlinkages between scale levels. The overall picture of uncertainty in relation to scaling is thus quite ambiguous. We need to dive into the sources of uncertainty before we can further specify these relations, and even then the picture is mixed. In general, multiple-scale analysis induces more uncertainty than single-scale analysis. With regard to temporal scales, variability as a source of uncertainty increases as the temporal resolution becomes finer, but unreliability as a source of uncertainty usually decreases. Regarding spatial scales a similar picture unfolds: the smaller the spatial scale level, the higher the variability as a source of uncertainty, but the lower the unreliability, because more reliable data and observations are usually available.
Is there a solution?
If no unifying theory exists, how can we address the scaling problem in Integrated Assessment? Without offering the ultimate solution, we present three possible 'escapes', all of them heuristics. The first is using up- and down-scaling techniques. Downing et al. (this volume) present a survey of statistical up- and down-scaling techniques developed during the past decades. They present five different upscaling techniques for moving from the site level to the regional level, although these terms are not precisely defined. In the field of climate change research these upscaling techniques are used to upscale climate impacts from the local to the regional level, whereas downscaling techniques are used to downscale rough climate patterns from General Circulation Models (GCMs) to more local levels. In applying these up- and down-scaling techniques (both statistical and non-statistical), however, we must be careful. From complex systems theory we know that up- and down-scaling techniques fail in many cases for various reasons (Peterson, 2000). Major reasons are that different processes dominate at different scale levels, and that in complex systems various processes are usually non-linearly linked to each other and embedded in spatial heterogeneity,
that these processes at different scales do not function independently of one another, and that the pace of these processes may differ between scale levels. In other words, while heuristic up- and down-scaling methods assume homogeneity and linearity, complex systems behave in a highly heterogeneous and non-linear way. In practice, this means that only a few characteristics of the system under consideration are up- or down-scaled, while the other characteristics remain constant. For example, in upscaling human-induced climate impacts for the agricultural sector from the site to the regional level, soil and weather characteristics are usually scaled up, while water and nutrient availability, the effects of diseases and pests, and the management type remain constant. With regard to downscaling techniques, there is still a gap between the quantitative results obtained with these techniques and the overall qualitative assessments that make use of these results. As the IPCC (2000) states: "While a large variety of downscaling techniques have been developed in the past decade, they have not yet provided climate impact research with the required robust estimates of plausible regional and local climate change scenarios, mainly because global climate change models have not yet provided sufficiently converged consistent large-scale information to be processed through downscaling. However, the gap might be filled within a few years."

A second possibility is to use heuristic concepts rooted in complex systems theory. An example is the concept of a hierarchy, defined as a causally linked system for grouping phenomena along an analytical scale (Gibson et al., 2000).
Hierarchy theory supposes that a phenomenon at any scale level (level n) is the synergistic result of the faster dynamics among components at the next lower scale level (level n-1), and is simultaneously constrained or controlled by the slower dynamics of components at the next higher scale level (level n+1) (Cash and Moser, 2000). The starting point of hierarchy theory is thus to dissect any complex system into a series of hierarchical entities. This is a useful theory, but it is far from comprehensive, and it does not resolve the real scaling problem, namely which processes at scale level n contribute to the dynamics at scale level n+1. What hierarchy theory delivers is a procedure to convert all processes at scale level n to scale level n+1 by means of an extensive parameterisation procedure, but without a selection mechanism for the most determining processes. What could be useful, however, is to borrow concepts from this theory, such as that of 'emergent properties'. In particular hierarchies, the so-called constitutive nested hierarchies (under which most complex systems fall), processes grouped together at a lower scale level can cluster into a new group of processes with new properties or functions. This means that in constitutive nested hierarchies a group of processes can show different properties and behaviour at a higher scale level than the individual processes at a lower scale level. We call this new collective behaviour at a higher scale level an emergent property. For example, consciousness is not a property of
348 SCALING IN INTEGRATED ASSESSMENT: PROBLEM OR CHALLENGE?
individual neurones, but a natural emergent property of the neurones of the nervous system. Neurones have their own structure, but as a whole they have a property that none of the individual neurones has, namely consciousness, which can only exist through the co-operation of individual neurones. Hence, looking only at the scale of individual neurones, the system as a whole can never be understood properly. In IA terms, an emergent property can be defined as a characteristic of the system under consideration that is only recognisable when different domains and different scale levels are analysed or modelled. Studying emergent properties therefore requires an integrated assessment, i.e. a multi-domain and multi-scale approach. Easterling and Kok (this volume) relate the concept of an emergent property to surprises and counter-intuitive results. In detecting emergent properties by studying multiple scales and domains, the nature of the problem may change entirely. Emergent properties are therefore of vital importance in IA-modelling, because the dynamics of the system underlying the IA-model also depend on them. In IA-modelling terms, this means that emergent properties may appear when experimenting with the IA-model as a whole, but may not be recognised at the submodel (module) level; they may arise from the interaction among submodels of the IA-model. Most emergent properties are related to uncertainty due to the natural variability in the system under consideration. They may occur at every possible scale, both in time and space, and can be spotted by detecting so-called 'weak signals' (van Notten, 2002) which may become 'strong signals' after a while. For a diversity of examples of emergent properties, from biological to socio-economic, the reader is referred to Easterling and Kok (this volume).
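How an emergent property can arise from the interaction among submodels can be sketched with a toy pair of coupled modules; the Lotka-Volterra-style cross-terms below are assumed purely for illustration and are not taken from any IA-model discussed here:

```python
# Two coupled "submodels": economic activity and environmental pressure.
# Neither equation contains an oscillatory term on its own; the cycling
# emerges only from the (assumed, illustrative) cross-terms.
def coupled_step(activity, pressure, dt=0.01):
    d_act = activity * (1.0 - pressure)   # pressure above 1 damps activity
    d_prs = pressure * (activity - 1.0)   # activity above 1 builds pressure
    return activity + d_act * dt, pressure + d_prs * dt

activity, pressure = 1.5, 0.8
peaks, prev, rising = 0, activity, True
for _ in range(20_000):                   # integrate for ~200 time units
    activity, pressure = coupled_step(activity, pressure)
    if rising and activity < prev:
        peaks += 1                        # local maximum: one cycle completed
        rising = False
    elif not rising and activity > prev:
        rising = True
    prev = activity

# The coupled system swings repeatedly around its equilibrium at (1, 1).
print("oscillation peaks observed:", peaks)
```

The sustained cycling is a property of the coupled model as a whole: it is invisible when either module is inspected in isolation, which is exactly the situation described above for submodels of an IA-model.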
We present here only a very simple example of an emergent property, taken from the article of Root and Schneider (this volume), in which the DICE integrated climate assessment model is extended with a simple two-box ocean model that enables a parametrised representation of the thermohaline circulation, in order to simulate the socio-economic damage resulting from an emergent property: the possible but hypothetical reversal of the thermohaline circulation (or reversal of the Gulf Stream). Replacing a fully parametrised ocean representation by a simple two-box ocean model introduces a different scale, which allows for the emergent property of the reversal of the thermohaline circulation.

Another possibility is to use cross-scaling concepts or methods, i.e. concepts or methods that cut across various scales and are not fundamentally scale-dependent. An example is the Strategic Cyclical Scaling (SCS) method (Root and Schneider, 1995). This method involves continuous cycling between large- and small-scale assessments. In modelling or scenario terms, such an iterative scaling procedure implies that a specific global model or scenario is disaggregated and adjusted to a specific region, country or river basin. The new insights are then used to improve the global version, after which implementation for another region, country or river basin follows. The SCS
[Figure 14.5: Four phases of the transition curve. An S-shaped indicator of social development over time passes through the predevelopment, take-off, acceleration and stabilization phases.]
method can be used for conceptual validation of models and scenarios. In Root and Schneider (this volume), the SCS method is used more specifically within the context of Integrated Assessment and IA-models. An overall problem that remains, however, is that there is no specific strategy for how to treat the dissimilar socio-economic, ecological and institutional processes in the continuous cycling procedure. So far, the SCS method is directed more towards ecological up- and down-scaling, whereas it needs to be tailored more to the specific multi-domain characteristics of IA-models.

Another example of a cross-scaling method is the transition concept. The heuristic concept of a transition was developed to describe and explain long-term transformation processes in which society or a subsystem changes in a fundamental way over a period of one or two generations (i.e. 25 years or more) (Rotmans et al., 2001). The term transition refers to a change from one dynamic equilibrium to another, represented by an S-shaped curve as depicted in Figure 14.5, which denotes the speed, magnitude and time period of change. Transitions are interesting from a sustainability point of view because they constitute possible routes to sustainability goals. The transition concept is built around two pillars: the multi-phase concept and the multi-level concept. The multi-phase concept distinguishes four phases: the predevelopment phase, the take-off phase, the acceleration phase and the stabilization phase. It describes the non-linear pattern that arises from the interference of short-term fluctuations and long-term waves, with alternating periods of rapid change, when processes reinforce each other (in the take-off and acceleration phases), and periods of slow change (in the predevelopment and stabilization phases).
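The four phases can be made concrete with a logistic (S-shaped) indicator and heuristic phase boundaries; the midpoint, rate and thresholds below are assumptions chosen for illustration, not calibrated values:

```python
import math

def transition_indicator(t, mid=25.0, rate=0.3):
    # Hypothetical S-shaped transition over roughly two generations:
    # an indicator of social development rising from 0 to 1.
    return 1.0 / (1.0 + math.exp(-rate * (t - mid)))

def phase(t):
    level = transition_indicator(t)
    speed = transition_indicator(t + 1) - transition_indicator(t)
    if level < 0.1:
        return "predevelopment"   # little visible change yet
    if level < 0.5:
        return "take-off"         # the process of change gets under way
    if speed > 0.02:
        return "acceleration"     # reinforcing processes, rapid change
    return "stabilization"        # approaching the new dynamic equilibrium

for year in (5, 20, 27, 45):
    print(year, phase(year))
# prints: 5 predevelopment / 20 take-off / 27 acceleration / 45 stabilization
```

The slow-fast-slow pattern falls out of the curve itself: the same logistic function is nearly flat in the predevelopment and stabilization phases and steepest in the acceleration phase.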
The second pillar of the transition concept is the multi-level concept. This concerns three levels, based on Geels and Kemp (2000): the macro-level, which describes changes in the landscape, determined by slow changes in political culture, worldviews and social values; the meso-level, at which regimes of institutions and organisations determine dominant rules and practices; and the micro-level, at which alternative ideas, technologies and initiatives are developed by individuals or small groups in so-called niches. An essential feature of a transition is the spiralling effect, due to multiple causality and the co-evolution of interdependent economic, socio-cultural, technological, environmental and institutional developments. This spiralling effect can only happen if developments, trends and policies at the macro-, meso- and micro-levels reinforce each other and work in the same direction. Transitions are not a law of nature; they do not determine what eventually must happen, but what might happen. Transitions are development pathways that have been experienced on a certain scale and may happen on other scales as well. The scale division into macro-, meso- and micro-levels is a relative notion: it does not necessarily refer to spatial scale levels, but may also refer to functional scale levels as discussed above. The transition concept is meant to be generic, meaning that it can potentially be applied on various scales, both geographically and functionally. Thus a transition that happens at a lower (higher) scale level implies a certain dynamic pathway that might also take place at a higher (lower) scale level. For example, an economic or demographic transition which occurred at a regional scale level might happen at a continental or global level as well.
This is also the power of the transition concept: it may serve as a reference framework for a development path at a certain scale level that can be translated to a higher or lower scale level. The overall conclusion must be that, in the absence of a unifying scaling theory, we are groping in the dark when doing Integrated Assessment research, but there are some candles that shed a little light. Heuristic methods can be used as a provisional way out: statistical up- and down-scaling techniques, concepts based on scale-related theories, or cross-scaling methods. Almost all heuristic methods are typical examples of trial and error, but they are nevertheless useful in unruly practice. Analysing these multi-scale methods, the conclusion must be that the bulk of them are top-down rather than bottom-up in nature. On the other hand, it should be noted that there is a growing interest in bottom-up approaches.
Conclusions and Recommendations

An overall lesson to be learned is that the scaling problem is much more than a technical problem, and should therefore not be treated as such. Next to the common physical notion of scale, there is a socio-cultural and institutional value component. Thus, in addition to the geographical dimensions of scale,
time and space, we need a third dimension, the so-called functional dimension. This dimension indicates the functional relations between agents, both individual and collective. How to represent this functional scale is not yet entirely clear, but one way of representing different functional scale levels for agents is to use a discretised multi-scale concept, which distinguishes between the macro-, meso- and micro-levels. At the macro-level transnational agencies operate, at the meso-level institutions and organisations, and at the micro-level individual agents. Because an overarching theory of how to deal with the three dimensions of scale is lacking, heuristic concepts and methods will continue to be used. Generally, we can divide these heuristics into statistical (and non-statistical) up- and down-scaling techniques, concepts derived from complex systems theory, and cross-scaling concepts. Each has its pros and cons, but further experimentation will hopefully shed light on better practices. We have discussed these heuristic methods as applied to IA-models and IA-scenarios. IA-models are structured along the lines of vertical and horizontal integration, not along scaling structures. We all know that nature does not organise itself around grid-cell patterns, but most IA-models still use grid cells as an organising principle. In general, IA-modellers do not devote a substantial amount of time to multiple scaling. And when they do, they usually pick from the three heuristic approaches available for dealing with multiple scales in IA-modelling: grid-cell based IA-models, cellular automata models and multiple-scale regression models. Unfortunately, these have been developed and applied in isolation from each other, representing different schools that hardly communicate with each other.
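As a minimal illustration of the cellular-automata family, in the spirit of Engelen et al. (1995), the toy model below uses an assumed majority rule on a hypothetical land-use grid; the point is that purely local interactions can generate larger-scale spatial structure:

```python
import random

random.seed(1)
N = 30

# Hypothetical land-use grid on a torus: False = rural, True = urban,
# initially assigned at random with no spatial structure.
grid = [[random.random() < 0.5 for _ in range(N)] for _ in range(N)]

def urban_neighbours(g, r, c):
    return sum(
        g[(r + dr) % N][(c + dc) % N]
        for dr in (-1, 0, 1) for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )

def step(g):
    # Toy majority rule: a cell adopts the state of the majority of its
    # 3 x 3 neighbourhood (itself plus its eight neighbours).
    return [
        [g[r][c] + urban_neighbours(g, r, c) >= 5 for c in range(N)]
        for r in range(N)
    ]

def like_neighbour_share(g):
    # Fraction of adjacent cell pairs that share the same land use.
    total = like = 0
    for r in range(N):
        for c in range(N):
            for dr, dc in ((0, 1), (1, 0)):
                total += 1
                like += g[r][c] == g[(r + dr) % N][(c + dc) % N]
    return like / total

before = like_neighbour_share(grid)
for _ in range(10):
    grid = step(grid)
after = like_neighbour_share(grid)

# Purely local voting turns a random map into spatially clustered patterns.
print(f"like-neighbour share: {before:.2f} -> {after:.2f}")
```

The rise in the like-neighbour share shows regional clustering emerging from cell-level rules, which is precisely the cross-scale behaviour that makes this model family attractive for IA.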
But blending these heuristics, for instance combining cellular automata models with multiple-scale regression models, which implies replacing correlation patterns by causal patterns in the latter, would already be a significant step forward. With regard to IA-scenarios, the conclusion is that scaling is an underrated issue in IA-scenario development. The vast majority of the scenarios that we screened operated on just one scale level. Two exceptions to this rule are the GEO-3 scenarios and the VISIONS scenarios. The IPCC SRES scenarios operate at both the global and the regional scale, but the connection between these scale levels is rather loose and rudimentary. The VISIONS project resulted in European visions, achieved by the integration of scenarios across the European and regional scale levels. The integration of European and regional scenarios was based on a pairwise intercomparison of driving forces, actors/sectors/factors, management styles and future outlooks. In addition to these examples, we definitely need more scenario exercises in which multiple temporal and spatial scales are the starting point of the scenario analysis. The relation between uncertainty and scale is a largely uncultivated area at the frontier of IA knowledge. From our preliminary analysis it follows that identifying the nature of the uncertainty and its underlying sources is a prerequisite for analysing in more detail the coupling with
scales. Whereas in the case of structural uncertainties the linkage with scaling is only of secondary importance, in the case of uncertainty due to unreliability the relation with scaling is more obviously important. In general terms, the scaling issue is of vital importance for Integrated Assessment. An indication of this is that the nature of an IA-problem may change when it is considered from a different scale level. Our scaling analysis also shows that emergent properties are of vital importance for IA-models, because they may arise from the interaction of submodels (modules) of the IA-model. Nevertheless, the attention paid to scaling in Integrated Assessment seems inversely proportional to its importance. The time is therefore ripe to develop a research agenda around the issue of scaling in science, and in particular in Integrated Assessment. In this agenda, both fundamental, theoretical scaling work, directed towards new theories or the transformation of existing ones, and practical, technical work, applying existing methods and concepts, deserve a place. The improvement of existing tools and methods should go hand in hand with the development of new theories and methods. There is also a need for a common, cross-disciplinary language. We have found different notions, definitions and interpretations of scaling in different disciplines: economists have a different scaling language than social geographers, and IA-modellers have a different interpretation of scaling than participatory IA-researchers. Overall, the added value of putting scaling issues high on the IA-research agenda is that it allows us to leave behind the paradigm that scale is merely a technical construct, and to realise that scale has meaning for people and society. In this sense the third dimension of scale, the functional one, is important for underlining the relevance of specifying relations between human beings and institutions.
Taking this new scaling paradigm into account, every IA-study should, both implicitly and explicitly, pay attention to this broader interpretation of scale and its implications.
References

1. Alcamo, J. and Leemans, R., et al. (eds), 1998. Global Change Scenarios of the 21st Century: Results from the IMAGE 2.1 Model. London: Elsevier Science.
2. Cash, D.W. and Moser, S.C., 2000. "Information and Decision-Making Systems for the Effective Management of Cross-Scale Environmental Problems." Paper presented at the workshop Local Response to Global Change: Strategies of Information Transfer and Decision-Making for Cross-Scale Environmental Risks. Harvard University, U.S.A.
3. Conte, R. and Castelfranchi, C., 1999. "From conventions to prescriptions: towards an integrated view of norms." Artificial Intelligence and Law, 7: 323–340.
4. Conte, R. and Castelfranchi, C., 1995. Cognitive and Social Action. London, UK: UCL Press.
5. Cosgrove, W.J. and Rijsberman, F.R., 2000. World Water Vision: Making Water Everybody's Business. London: Earthscan Publications.
6. Engelen, G., White, R., Uljee, I. and Drazan, P., 1995. "Using cellular automata for integrated modelling of socio-environmental systems." Environmental Monitoring and Assessment, 34: 203–214.
7. Geels, F.W. and Kemp, R., 2000. "Transitions from a sociotechnical perspective." Report for the Ministry of the Environment. University of Twente & MERIT, University of Maastricht.
8. Gibson, C.C., Ostrom, E. and Ahn, T.K., 2000. "The concept of scale and the human dimensions of global change: a survey." Ecological Economics, 32: 217–239.
9. IPCC (Intergovernmental Panel on Climate Change), 2000. Emission Scenarios. Cambridge, UK: Cambridge University Press.
10. Jaeger, C., Renn, O., Rosa, E.A. and Webler, T., 1998. "Decision analysis and rational action." In Rayner, S. and Malone, E. (eds), Human Choice and Climate Change, Vol. 3: Tools for Policy Analysis. U.S.A.: Battelle Press.
11. Jaeger, J. (ed.), 2000. "The EFIEA Workshop on Uncertainty." Workshop organised by the European Forum on Integrated Environmental Assessment (EFIEA), Baden bei Wien, Austria, July 10–18, 1999.
12. Krywkow, J., Valkering, P., Rotmans, J. and van der Veen, A., 2002. "Agent-based and Integrated Assessment modelling for incorporating social dynamics in the management of the Meuse in the Dutch province of Limburg." In Rizzoli, A.E. and Jakeman, J. (eds), Integrated Assessment and Decision Support: Proceedings of the 1st Biennial Meeting of the International Environmental Modelling and Software Society, Vol. 2: 263–268. iEMSs, 24–27 June 2002, Lugano, Switzerland.
13. Martens, W.J.M. and Rotmans, J. (eds), 2002. Transitions in a Globalising World. Lisse, the Netherlands: Swets & Zeitlinger Publishers.
14. Ostrom, E., Gibson, C. and Ahn, T.K., 2000. "The concept of scale and the human dimensions of global scale: a survey." Ecological Economics, 32: 217–239.
15. Peterson, G.D., 2000. "Scaling ecological dynamics: self-organisation, hierarchical structure and ecological resilience." Climatic Change, 44: 291–309.
16. Raskin, P., Gallopin, G., Hammond, A. and Swart, R., 1998. Bending the Curve: Toward Global Sustainability. Stockholm: Stockholm Environment Institute.
17. Rizzoli, A.E. and Jakeman, J. (eds), 2002. Integrated Assessment and Decision Support: Proceedings of the 1st Biennial Meeting of the International Environmental Modelling and Software Society, Vol. 1. iEMSs, 24–27 June 2002, Lugano, Switzerland.
18. Root, T.R. and Schneider, S.H., 1995. "Ecology and climate: research strategies and implications." Science, 269: 334–341.
19. Rotmans, J. and de Vries, H.J.M., 1997. Perspectives on Global Change: The TARGETS Approach. Cambridge, UK: Cambridge University Press.
20. Rotmans, J., 1998. "Methods for Integrated Assessment: the challenges and opportunities ahead." Environmental Modelling and Assessment, 3: 155–179.
21. Rotmans, J., Anastasi, C., van Asselt, M.B.A., Greeuw, S., Mellors, J., Peters, S. and Rothman, D., 2000. "VISIONS for a Sustainable Europe." Futures, 32: 809–831.
22. Rotmans, J., Kemp, R. and van Asselt, M.B.A., 2001. "More evolution than revolution: transition management in public policy." Foresight, 3: 15–32.
23. Rotmans, J. and van Asselt, M.B.A., 2001. "Uncertainty management in Integrated Assessment modelling: towards a pluralistic approach." Environmental Monitoring and Assessment, 69: 101–130.
24. Rotmans, J., van Asselt, M.B.A. and Rothman, D., 2003. VISIONS on the Future of Europe. Lisse, the Netherlands: Swets & Zeitlinger Publishers.
25. UNEP (United Nations Environment Programme), 2002. Global Environmental Outlook 3. London, UK: Earthscan Publications.
26. Van Asselt, M.B.A., 2000. Perspectives on Uncertainty and Risk: The PRIMA Approach to Decision Support. Dordrecht, the Netherlands: Kluwer Academic Publishers.
27. Van Asselt, M.B.A. and Rotmans, J., 2000. "Uncertainty in Integrated Assessment modelling: from positivism to pluralism." Climatic Change, 54: 75–102.
28. Van Notten, P. and Rotmans, J., 2001. "The future of scenarios." Scenario & Strategic Planning, 3: 4–8.
29. Van Notten, P., forthcoming. Early Detection of Upcoming Disruptions. Copenhagen: European Environment Agency.
30. Van der Veen, A., 1999. "Paradise Regained: about environmental and spatial quality." Inaugural speech, University of Twente, January 14, 1999.
31. Van der Veen, A. and Rotmans, J., 2001. "Dutch perspectives on agents, regions and land use change." Environmental Modeling and Assessment, 6: 83–86.
32. Verburg, P.H., 2000. Exploring the Spatial and Temporal Dynamics of Land Use: With Special Reference to China. PhD dissertation, Wageningen University.
INTEGRATED ASSESSMENT STUDIES

1. Transitions in a Globalising World. Edited by Pim Martens and Jan Rotmans, 2002. ISBN 90 265 1921 4 (hardbound)
2. Scaling in Integrated Assessment. Edited by Jan Rotmans and Dale S. Rothman, 2003. ISBN 90 265 1947 8 (hardbound)