Operational Oceanography in the 21st Century
Andreas Schiller • Gary B. Brassington, Editors
Editors
Dr. Andreas Schiller, Centre for Australian Weather and Climate Research, CSIRO, GPO Box 1538, Hobart 7001, Tasmania, Australia, e-mail: [email protected]
Dr. Gary B. Brassington, Centre for Australian Weather and Climate Research, Bureau of Meteorology, PO Box 1289, Melbourne 3001, Victoria, Australia, e-mail: [email protected]
ISBN 978-94-007-0331-5    e-ISBN 978-94-007-0332-2
DOI 10.1007/978-94-007-0332-2
Springer Dordrecht Heidelberg London New York

Library of Congress Control Number: 2011925930

All Rights Reserved for Chapters 21 and 22
© Springer Science+Business Media B.V. 2011
No part of this work may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, microfilming, recording or otherwise, without written permission from the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the work.

Cover illustration: Daily mean sea surface height anomaly for 28 July 2009 as estimated by the operational BLUElink ocean prediction system. Red colors indicate positive height anomalies whilst blue colors represent negative height anomalies. Anomalies represent the estimated sea surface height relative to the model's dynamic topography, defined as the mean of a multi-year integration forced by reanalysis winds. Shown in the image is the east Indian Ocean, with anticyclonic and cyclonic eddies in the mid-latitudes and the South Equatorial Current in the tropics, which derives volume flux from the western Pacific warm pool via the Indonesian Throughflow. Image produced by Dr. Justin Freeman.

Cover design: deblik, Berlin

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)
Preface
In the mid-1990s the research community and operational agencies saw an emerging opportunity for near-real-time ocean forecasts similar to those produced in numerical weather prediction: combining numerical models and observations via data assimilation in order to provide ocean prediction products on various space and time scales. This development was facilitated through an international framework provided by the Global Ocean Data Assimilation Experiment (GODAE). GODAE aimed at advancing ocean data assimilation by synthesizing satellite and in situ observations with state-of-the-art models of the global ocean circulation. In the past few years ocean forecasting has matured to a stage where many nations have implemented global and basin-scale ocean analyses and short-term forecast systems that provide routine products to the oceanographic community, serving a variety of applications in areas such as marine environmental monitoring and management, ocean climate, defense and industry.

The authors within this book provide an up-to-date description of the major components of ocean analysis and forecasting systems. The chapters cover a wide range of topics including, but not limited to, scientific advances and challenges in ocean forecasting, the associated descriptions of the forecasting systems, and end-user applications. This integrated view of ocean forecasting is the end result of an International Summer School for Observing, Assimilating and Forecasting the Ocean held in Perth, Australia, in January 2010.

The flow diagram (Fig. 1) captures the main functional components and sources of inputs implemented under GODAE and required by any ocean forecasting system. These are: the data and product servers, the assimilation centres and the users of the outputs. It also captures many of the interactions required to ensure or enhance the quality of the systems and their outputs (Bell et al. 2009).

Fig. 1  Functional components of operational ocean forecasting systems developed during GODAE

The measurement network and the data assembly and processing centres provide the main inputs to the assimilation centres (top centre and right of Fig. 1). In this book Le Traon (2011), Josey (2011), Ravichandran (2011) and Oke and O'Kane (2011) provide concise overviews of the in situ and satellite components of the current global observing system and discuss the continuing work required to sustain and optimise it. The GODAE-sponsored Global High Resolution Sea Surface Temperature (GHRSST) project has resulted in a coordinated network of centres disseminating SST data in real time, in a common format and to agreed standards, from a wide range of microwave and infra-red instruments on polar-orbiting and geostationary satellites. Cummings (2011) summarizes the substantial achievements during GODAE in the quality control of observational data and the joint use of in situ and satellite data.

Dombrowsky (2011) provides an overview of progress in the capabilities of ocean prediction systems and of data and product servers (see left of middle row of Fig. 1). Brassington (2011) examines key properties of the real-time system and their impact on operational system design. Together these chapters provide an overview of the underpinning concepts and technologies which enable the observed data to be discovered, visualised, downloaded, intercompared and analysed all over the world.

Progress in ocean data assimilation (the central item in Fig. 1) is described in a number of chapters (Zaron 2011; Moore 2011; Brasseur 2011). The tables and descriptions in Dombrowsky (2011) and Zhu (2011) provide a useful overview of the present modelling and assimilation components of the major systems involved in coastal and basin-scale ocean forecasting. Most centres now operate systems with 1/10° or finer horizontal grid spacing; have a global capability; make use of community ocean models (e.g. HYCOM, MOM4 or NEMO; see Barnier et al. 2011; Chassignet 2011; Hurlburt et al. 2011 and Matear 2011); and assimilate in situ profile data, altimeter data and some form of surface temperature data. Martin (2011) illustrates the skill of the high-resolution systems in forecasting sea surface currents and sea surface temperature.
Product assessments and interactions with research users (lower right area of Fig. 1) have been key activities since the inception of operational ocean forecasting systems. Hernandez (2011) describes the procedures developed to intercompare forecasts produced by different centres and illustrates the insights these can give into the performance of the systems. Alves et al. (2011) describe some examples of how systems developed for ocean state estimation have been used for climate variability and seasonal forecasting research, and how intercomparisons of results from the systems are being used to assess the consistency and uncertainty of the state estimates. Oke and O'Kane (2011) summarise results gathered by observing system design research and outline the exciting prospects for future work. They illustrate the complementarity of the SST, altimeter and profile data for mesoscale prediction, and present statistics on the dependence of the accuracy of 7-day forecasts, real-time analyses and delayed-mode analyses on the availability of altimeter data. Wilkin et al. (2011) summarise the wide-ranging coastal applications of ocean forecasting. Finally, Matear and Jones (2011) outline a number of categories of potential ecological and biogeochemical applications and discuss the challenges this area poses to the fidelity of the physical models and assimilation schemes and to the measurement technologies.

The lower left part of Fig. 1 depicts the information flows to application centres (also known as downstream services) and users. Barras (2011) and King et al. (2011) describe the legal framework and use of ocean forecasting outputs in the monitoring and prediction of marine pollution (such as oil spills) and the value of GODAE forecasts for the safety and effectiveness of operations at sea. Woodham (2011) provides examples of the wide variety of information and tactical decision aids generated using GODAE products to assist naval operations. Ivey (2011) summarises the current operational use of upper-ocean heat content information to forecast the intensity of tropical cyclones, and current research in this area. Fundamentals and applications of sea-level variability, surface waves and tsunamis are discussed by Pattiaratchi (2011) and Greenslade and Tolman (2011). Huckerby (2011) and Mann (2011) provide an introduction to and overview of the emerging field of ocean renewable energy and the corresponding need for ocean state information to determine the available energy resources as well as the impact of ocean renewable energy on the physical environment.

The editors gratefully acknowledge the students and lecturers listed below who actively contributed to the success of the summer school as well as to the first round of reviews of the draft manuscripts. Primary support for this summer school was provided by the National Oceanic and Atmospheric Administration (NOAA), USA, the Bureau of Meteorology, Australia, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Australia. This support is gratefully acknowledged. The editors would like to thank the speakers for their contributions during the summer school and for providing their manuscripts within a tight time frame. We also express our appreciation to the GODAE OceanView Science Team, who contributed in numerous ways to the success of the summer school and this book. We
thank Charitha Pattiaratchi, Diana Greenslade, Tim Pugh, Roger Proctor, Bernard Barnier, Clothilde Langlais, Fabrice Hernandez, Marie Drevillon and Andy Taylor for preparing and conducting an excellent set of student exercises. We thank all the attendees (see list in Appendix) for participating actively in the lectures and in the lecture review process. Finally, our thanks go to Val Jemmeson, Nick D'Adamo and Charitha Pattiaratchi, who spent considerable time on the logistics of the summer school. A special thank-you goes to Denise McMullen for her help in coordinating the editorial process of the manuscripts.

Australia, 28 May 2010
Andreas Schiller Gary B. Brassington
List of Lecturers and Students
Lecturers
Dr. Oscar Alves, CAWCR, Australia, e-mail: [email protected]
Dr. Bernard Barnier, HMG, France, e-mail: [email protected]
Ms. Kathryn Barras, Minter Ellison, Australia, e-mail: [email protected]
Dr. Pierre Brasseur, HMG, France, e-mail: [email protected]
Dr. Gary B. Brassington, CAWCR, Australia, e-mail: [email protected]
Prof. Eric P. Chassignet, FSU, USA, e-mail: [email protected]
Dr. James A. Cummings, NRL, USA, e-mail: [email protected]
Dr. Eric Dombrowsky, Mercator, France, e-mail: [email protected]
Dr. Maria Drevillon, Mercator, France, e-mail: [email protected]
Dr. Diana Greenslade, CAWCR, Australia, e-mail: [email protected]
Dr. Fabrice Hernandez, Mercator, France, e-mail: [email protected]
Dr. John Huckerby, New Zealand, e-mail: [email protected]
Dr. Harley E. Hurlburt, NRL, USA, e-mail: [email protected]
Dr. Gregory N. Ivey, UWA, Australia, e-mail: [email protected]
Dr. Simon A. Josey, NOC, United Kingdom, e-mail: [email protected]
Dr. Brian King, APASA, Australia, e-mail: [email protected]
Dr. Clothilde Langlais, CSIRO, Australia, e-mail: [email protected]
Dr. Pierre-Yves Le Traon, IFREMER, France, e-mail: [email protected]
Dr. Laurence D. Mann, Carnegie, Australia, e-mail: [email protected]
Dr. Matthew Martin, UK Met Office, United Kingdom, e-mail: [email protected]
Dr. Richard J. Matear, CAWCR, Australia, e-mail: [email protected]
Prof. Andrew M. Moore, UCSC, USA, e-mail: [email protected]
Prof. Charitha Pattiaratchi, UWA, Australia, e-mail: [email protected]
Dr. Roger Proctor, UTAS, Australia, e-mail: [email protected]
Mr. Tim Pugh, CAWCR, Australia, e-mail: [email protected]
Dr. Muthalagu Ravichandran, INCOIS, India, e-mail: [email protected]
Dr. Andreas Schiller, CSIRO, Australia, e-mail: [email protected]
Dr. Hendrik Tolman, NOAA, USA, e-mail: [email protected]
Mr. Geoff Wake, Woodside, Australia, e-mail: [email protected]
Assoc. Prof. John L. Wilkin, Rutgers University, USA, e-mail: [email protected]
Comm. Robert Woodham, RAN, Australia, e-mail: [email protected]
Dr. Edward D. Zaron, Portland, USA, e-mail: [email protected]
Dr. Jiang Zhu, IAP, China, e-mail: [email protected]
Students
Amjadali Amanda, Australia, e-mail: [email protected]
Andutta Fernando, Australia, e-mail: [email protected]
Backeberg Bjorn, South Africa/Germany, e-mail: [email protected]
Ban Natalie, Australia, e-mail: [email protected]
Ban Stephen, Australia, e-mail: [email protected]
Bean Richard, Australia, e-mail: [email protected]
Beck Elise, USA, e-mail: [email protected]
Bluteau Cynthia, Australia, e-mail: [email protected]
Brushett Ben, Australia, e-mail: [email protected]
Cheah Wee, Australia, e-mail: [email protected]
Choi Byoung, Korea, e-mail: [email protected]
Choukroun Severine, Australia, e-mail: [email protected]
Desportes Charles, France, e-mail: [email protected]
Divakaran Prasanth, Australia, e-mail: [email protected]
Downes Stephanie, USA, e-mail: [email protected]
Duchez Aurelle, France, e-mail: [email protected]
Durrant Tom, Australia, e-mail: [email protected]
Exarchou Eleftheria, Germany, e-mail: [email protected]
Fernandez Mariana, Uruguay, e-mail: [email protected]
Ford David, England, e-mail: [email protected]
Furner Rachel, England, e-mail: [email protected]
Garvey Michael, Australia, e-mail: [email protected]
Gasparin Florent, France/New Cal., e-mail: [email protected]
Geard Simon, Australia
Hanson Christine, Australia, e-mail: [email protected]
Hertzel Yasha, Australia, e-mail: [email protected]
He Zhongjie, China, e-mail: [email protected]
Ishizaki Shiro, Japan, e-mail: [email protected]
Joseph Sudheer, India, e-mail: [email protected]
Law Chune Stephan, France, e-mail: [email protected]
Lesser Giles, Australia, e-mail: [email protected]
Luz-Clara Moira, Argentina, e-mail: [email protected]
Macdonald Helen, Australia, e-mail: [email protected]
Meinvielle Marion, France, e-mail: [email protected]
Monteiro Igor, Brazil, e-mail: [email protected]
Morales Ruben, Mexico, e-mail: [email protected]
Mulet Sandrine, France, e-mail: [email protected]
O'Callaghan Joanne, New Zealand, e-mail: [email protected]
O'Loughlin Julian, Australia, e-mail: [email protected]
Prakya Shreeram, India, e-mail: [email protected]
Prandi Pierre, France, e-mail: [email protected]
Rahaman Hasibur, India, e-mail: [email protected]
Rayson Matthew, Australia, e-mail: [email protected]
Rozman Polona, Germany, e-mail: [email protected]
Rousseaux Cecile, Australia, e-mail: [email protected]
Shimizu Kenji, Australia, e-mail: [email protected]
Shu Yeqiang, China, e-mail: [email protected]
Stevenson Kate, Australia, e-mail: [email protected]
Subramanian Aneesh, USA, e-mail: [email protected]
Summons Nicholas, Australia, e-mail: [email protected]
Swart Neil, South Africa, e-mail: [email protected]
Swart Sebastian, South Africa, e-mail: [email protected]
Taebi Sohelia, Australia, e-mail: [email protected]
Tanajura Clemente, Brazil, e-mail: [email protected]
Taylor Andy, Australia, e-mail: [email protected]
Teixeira Carlos, Australia, e-mail: [email protected]
Tonbol Kareem, Egypt, e-mail: [email protected]
Usui Norihisa, Japan, e-mail: [email protected]
Volkov Denis, Canada, e-mail: [email protected]
Wakamatsu Tsuyoshi, Canada, e-mail: [email protected]
Wedd Robin, Australia, e-mail: [email protected]
Welhena Thisara, Australia, e-mail: [email protected]
Weller Evan, Australia, e-mail: [email protected]
Wood Julie, Australia, e-mail: [email protected]
Xie Jiping, China, e-mail: [email protected]
Zheng Fei, China, e-mail: [email protected]
Zhou Wei, China, e-mail: [email protected]
Contents
Part I  Introduction

1  Ocean Forecasting in the 21st Century .... 3
   Andreas Schiller

Part II  Oceanographic Observing System

2  Satellites and Operational Oceanography .... 29
   Pierre-Yves Le Traon

3  In-Situ Ocean Observing System .... 55
   Muthalagu Ravichandran

4  Ocean Data Quality Control .... 91
   James A. Cummings

5  Observing System Design and Assessment .... 123
   Peter R. Oke and Terence J. O'Kane

Part III  Atmospheric Forcing and Waves

6  Air-Sea Fluxes of Heat, Freshwater and Momentum .... 155
   Simon A. Josey

7  Coastal Tide Gauge Observations: Dynamic Processes Present in the Fremantle Record .... 185
   Charitha Pattiaratchi

8  Surface Waves .... 203
   Diana Greenslade and Hendrik Tolman

9  Tides and Internal Waves on the Continental Shelf .... 225
   Gregory N. Ivey

Part IV  Modelling

10  Eddying vs. Laminar Ocean Circulation Models and Their Applications .... 239
    Bernard Barnier, Thierry Penduff and Clothilde Langlais

11  Isopycnic and Hybrid Ocean Modeling in the Context of GODAE .... 263
    Eric P. Chassignet

12  Marine Biogeochemical Modelling and Data Assimilation .... 295
    Richard J. Matear and E. Jones

Part V  Data Assimilation

13  Introduction to Ocean Data Assimilation .... 321
    Edward D. Zaron

14  Adjoint Data Assimilation Methods .... 351
    Andrew M. Moore

15  Ensemble-Based Data Assimilation Methods .... 381
    Pierre Brasseur

Part VI  Systems

16  Overview Global Operational Oceanography Systems .... 397
    Eric Dombrowsky

17  Overview of Regional and Coastal Systems .... 413
    Jiang Zhu

18  System Design for Operational Ocean Forecasting .... 441
    Gary B. Brassington

19  Integrating Coastal Models and Observations for Studies of Ocean Dynamics, Observing Systems and Forecasting .... 487
    John L. Wilkin, Weifeng G. Zhang, Bronwyn E. Cahill and Robert C. Chant

20  Seasonal and Decadal Prediction .... 513
    Oscar Alves, Debra Hudson, Magdalena Balmaseda and Li Shi

Part VII  Evaluation

21  Dynamical Evaluation of Ocean Models Using the Gulf Stream as an Example .... 545
    Harley E. Hurlburt, E. Joseph Metzger, James G. Richman, Eric P. Chassignet, Yann Drillet, Matthew W. Hecht, Olivier Le Galloudec, Jay F. Shriver, Xiaobiao Xu and Luis Zamudio

22  Ocean Forecasting Systems: Product Evaluation and Skill .... 611
    Matthew Martin

23  Performance of Ocean Forecasting Systems—Intercomparison Projects .... 633
    Fabrice Hernandez

Part VIII  Applications, Policies and Legal Frameworks

24  Defence Applications of Operational Oceanography .... 659
    Robert Woodham

25  Applications for Metocean Forecast Data—Maritime Transport, Safety and Pollution .... 681
    Brian King, Ben Brushett, Trevor Gilbert and Charles Lemckert

26  Marine Energy: Resources, Technologies, Research and Policies .... 695
    John Huckerby

27  Application of Ocean Observations & Analysis: The CETO Wave Energy Project .... 721
    Laurence D. Mann

28  International Marine Environmental Law (Oil Pollution) .... 731
    Kathryn Barras

Index .... 741
Part I
Introduction
Chapter 1
Ocean Forecasting in the 21st Century: From the Early Days to Tomorrow's Challenges

Andreas Schiller

Centre for Australian Weather and Climate Research—A partnership between CSIRO and the Bureau of Meteorology; CSIRO Wealth from Oceans National Research Flagship, Hobart, Tasmania, Australia
CSIRO Marine and Atmospheric Research, Castray Esplanade, GPO Box 1538, Hobart 7001, Tasmania, Australia, e-mail: [email protected]
Abstract  This article provides a brief introduction to the history of oceanography with a focus on elements that laid the scientific foundation of ocean forecasting, i.e. ocean observations, ocean general circulation models and data assimilation tools. It then describes the scientific achievements of the first phase of internationally coordinated efforts in the development of global and basin-scale operational ocean forecasting systems during the Global Ocean Data Assimilation Experiment (1997–2008). This is followed by a description of the challenges in ocean forecasting in the twenty-first century and a summary and conclusion. This article represents an introduction to the modelling, data assimilation and observing system topics discussed in more detail in the subsequent chapters of this book.
1.1 Brief History of Oceanography

The focus of this article and this book is on the two branches of oceanography that deal with:

• Physical oceanography, or marine physics, which studies the ocean's physical attributes, including temperature-salinity structure, mixing, waves, internal waves, surface tides, internal tides and currents, as well as acoustical and optical oceanography;
• and, to some extent, biogeochemical oceanography, which involves the scientific study of the chemical, physical, geological and biological processes and reactions that govern the composition of the natural environment, and the cycles of
matter and energy that transport the Earth's chemical components in time and space. Biogeochemical oceanography focuses on chemical cycles which are either driven by or have an impact on biological activity, such as the carbon, nitrogen and phosphorus cycles.

We first briefly describe the general history that laid the foundation of ocean forecasting. The aim here is not a comprehensive description of the whole science of oceanography but a focus on those components that underpin today's ocean forecasting systems, in particular the development of an ocean observing system and hydrodynamic numerical modelling.

Man first began to acquire knowledge of the waves, tides and currents of the seas and oceans in prehistoric times. During the Age of Discovery (approximately the late 1400s to the early 1700s) exploration of the oceans was primarily for cartography and mainly limited to their surfaces, although depth soundings were taken by lead line. At the beginning of the era of scientific voyages (late 1700s to the twentieth century), Benjamin Franklin published one of the earliest maps of the Gulf Stream in 1769 (Fig. 1.1).
Fig. 1.1  Map of the Gulf Stream created by Benjamin Franklin. The Gulf Stream is depicted as the dark gray swath that runs along the east coast of what is now the United States. (Franklin 1769, courtesy NOAA Photo Library)
One of the most famous voyages of discovery of this time began in 1768 when HMS Endeavour left Portsmouth, England, under the command of Captain James Cook. Over 10 years Cook led three world-encircling expeditions and mapped many countries, including Australia, New Zealand and the Hawaiian Islands. He was an expert seaman, navigator and scientist who made observations wherever he went.

James Rennell and John Purdy wrote the first scientific textbooks about currents in the Atlantic and Indian oceans during the late eighteenth and at the beginning of the nineteenth century (e.g. Rennell and Purdy 1832). The steep slope beyond the continental shelves was not discovered until 1849. Matthew Fontaine Maury's Physical Geography of the Sea (Fig. 1.2) was the first textbook of oceanography, based on his work as superintendent of the Depot of Charts and Instruments of the Navy Department in Washington, D.C. (Maury 1855).
Fig. 1.2  Matthew Maury: "The Physical Geography of the Sea," which is credited as "the first textbook of modern oceanography." (Maury 1855)
Fig. 1.3  Ocean surface currents around Australia from Black and Hall's Atlas of the World, published by A. & C. Black, Edinburgh (1865)
The first comprehensive maps that showed the surface circulation of the global oceans with reasonable accuracy were published by A. and C. Black in 1865 (Fig. 1.3).

In 1871, under the recommendations of the Royal Society of London, the British government sponsored an expedition to explore the world's oceans and conduct scientific investigations. Modern oceanography began with the resulting Challenger Expedition (1872–1876), launched by Charles Wyville Thompson and Sir John Murray. It was the first expedition organized specifically to gather data on a wide range of ocean features, including ocean temperatures, seawater chemistry, currents, marine life, and the geology of the seafloor. The expedition took water samples and temperature measurements, recorded currents and barometric pressures and collected bottom samples. The results were published in 50 volumes covering biological, physical and geological aspects (Thompson et al. 1880–1895).

In 1893 the Norwegian scientist Fridtjof Nansen allowed his ship Fram to be frozen into the Arctic ice. As a result he was able to collect valuable oceanographic, magnetic, and meteorological information in the Arctic. The rest of his career was equally distinguished, including the invention of a water-sampling bottle that permitted
isolation of water samples from various depths to measure temperature, salinity and other parameters. Other European and American nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, the Albatross, was built in 1882. The four-month 1910 North Atlantic expedition headed by Sir John Murray and Johan Hjort was at that time the most ambitious oceanographic and marine zoological research project ever undertaken, and led to the classic book The Depths of the Ocean (Murray and Hjort 1912).

At the beginning of the Age of Modern Oceanography (1900s to the mid twentieth century) the first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the Meteor expedition surveyed the Mid-Atlantic Ridge and gathered 70,000 ocean depth measurements using an echo sounder. Virtually all civilian ocean research ceased in 1939 with the outbreak of World War II, when scientific resources were mobilised. However, many advances were made in instrumentation, and our understanding of the ocean was greatly improved. For example, there were major advances in predicting wave conditions (important for amphibious invasions), and the mapping of ocean basin features was greatly expanded to improve the ability to detect submarines. In 1942, Sverdrup et al. published The Oceans, which was a major landmark in oceanography.

The nineteenth and twentieth centuries also saw major progress towards quantitative descriptions of observed phenomena. Examples of key areas of progress (some of which are tightly linked to progress in meteorology) are:

• the rotation of the Earth and its associated impact on ocean currents (Coriolis 1835);
• the effect of winds on the ocean-atmosphere interface (Ekman 1905); and
• the development of vorticity theories and theorems for the ocean as an extension of Newton's laws to a rotating fluid (Ertel 1942; Sverdrup 1947).

This enhanced capability to describe the ocean within a mathematical framework allowed the development of numerical models. Consequently, from the 1970s onwards there has been increasing emphasis on the application of computers in oceanography to allow numerical simulations and predictions of the state of the ocean.

The Mid-Ocean Dynamics Experiment (MODE) was one of the first large-scale and extensively instrumented field experiments carried out by physical oceanographers. Conducted in two phases between November 1971 and July 1973, the experiment explored the role of mesoscale eddy motions in the dynamics of the general oceanic circulation (mesoscale eddies are at the centre of attention in today's large-scale ocean forecasting systems).

The 1970s and 1980s also saw the development and first applications of so-called inverse methods applied to oceanographic data (e.g. Wunsch 1978). These methods can be interpreted as simple data assimilation tools that paved the way for the development of the more complex data assimilation and model initialization tools used nowadays in ocean forecasting systems, often derived from numerical weather prediction applications.
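To illustrate the basic idea behind such inverse and early assimilation methods, the short sketch below combines a model prior with a single observation through a least-squares (optimal-interpolation-style) update. It is a minimal, illustrative example only; the grid, values and error covariances are invented for the illustration and are not taken from any system described in this book.

```python
import numpy as np

# Minimal illustration of the least-squares update at the heart of many simple
# inverse/assimilation methods: combine a model prior (background) with an
# observation, weighting each by its error covariance.

# Background (prior) state: e.g. temperature at three grid points (illustrative values)
xb = np.array([18.0, 17.5, 16.8])          # background estimate
B = np.diag([0.5**2, 0.5**2, 0.5**2])      # background error covariance

# One observation located at the second grid point
H = np.array([[0.0, 1.0, 0.0]])            # observation operator
y = np.array([18.2])                       # observed value
R = np.array([[0.2**2]])                   # observation error covariance

# Least-squares (Kalman/OI) gain and resulting analysis
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + (K @ (y - H @ xb)).ravel()

print("background:", xb)
print("analysis:  ", xa)   # the analysis is drawn towards the observation at point 2
```

The same weighted-combination logic, generalised to very large state vectors and many observation types, underlies the assimilation schemes discussed in later chapters.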
The Tropical-Ocean-Global-Atmosphere (TOGA) Program began in 1985 as a ten-year research effort to investigate the global atmospheric response to coupled ocean-atmosphere forcing from the tropical regions. It was among the first large-scale programs that addressed the predictability of the coupled tropical oceans and global atmosphere by drawing on observations and by recognizing the key role of models for understanding tropical air-sea interactions as a prerequisite for launching successful climate predictions into the future.

In the 1980s the TAO/TRITON oceanographic buoy array was established in the Pacific to allow monitoring and ultimately prediction of El Niño events (http://www.pmel.noaa.gov/tao/proj_over/taohis.html). Enhancements to the in situ and satellite observing system, together with evolving models, led to the first successful ENSO prediction (Zebiak and Cane 1987).

1990 saw the start of the World Ocean Circulation Experiment (WOCE), which continued until 2002. WOCE was a component of the international World Climate Research Program and aimed to establish the role of the world ocean in the Earth's climate system. The WOCE field phase ran between 1990 and 1998 (Fig. 1.4) and was followed by an analysis and modelling phase that ran until 2002. The results are summarised in "Ocean Circulation and Climate: Observing and Modelling the Global Ocean" (Siedler et al. 2001).

Fig. 1.4  WOCE Hydrographic Program One-Time Survey (1990–1998) (http://woce.nodc.noaa.gov/wdiu/diu_summaries/whp/figures/whpot.htm)

Before the 1980s, when satellites became more commonly available, oceanographers were "data poor". Since then, significant technological and scientific advances in satellite remote sensing have provided near-real-time measurements of sea surface height anomalies, SST and ocean colour. These key observations have, for the first time, enabled ocean forecasting applications (Fu and Cazenave 2001). The realisation of a network of 3,000 Argo profiling floats freely reporting temperature and salinity profiles to 2,000 m depth in a timely fashion has transformed the in situ ocean measurement network in the new millennium (Fig. 1.5). This allows, for the first time, continuous monitoring of the temperature, salinity and velocity of the upper ocean, with all data being relayed and made publicly available within hours of collection.

Fig. 1.5  Status of the global Argo float array, December 2009 (http://www.argo.ucsd.edu/)

Based on significant advances in supercomputing technologies, the 1990s saw the emergence of the first large-scale eddy-resolving models (Semtner and Chervin 1992) and the first coupled ocean-atmosphere climate change projections (see, e.g., the IPCC First Assessment Report 1990).

More detailed accounts of the history of oceanography can be found in the published literature and, for example, at http://core.ecu.edu/geology/woods/HISTOCEA.htm.
1.2 The Achievements of GODAE (1997–2008)

As described in the previous paragraphs, over the last 20 years the global ocean observing system (in situ and remote sensing) has been progressively implemented and has led to a revolution in the amount of data available for research and forecasting applications. The ocean observing system, primarily designed to serve climate
research, is used as a backbone for most operational oceanography applications. Although significant progress has been made, sustaining the global ocean observing system remains a challenging task (Clark et al. 2009). This recent progress in the global ocean observing system has been complemented by advances in supercomputing technology, allowing the development and operational implementation of eddy-resolving (~10 km) basin-scale ocean circulation models.

The Global Ocean Data Assimilation Experiment (GODAE) was set up in 1997 with the aims of (i) demonstrating the feasibility and utility of global ocean monitoring and forecasting on daily to weekly time scales and on eddy-resolving spatial scales, and (ii) assisting in building the infrastructure for global operational oceanography (Smith and Lefebvre 1997; GODAE Strategic Plan 2000; Bell et al. 2009). From its inception in 1997 to its conclusion in 2008, GODAE had a major impact on the development of global operational oceanography capability. Global modelling and data assimilation systems have been progressively developed, implemented and intercompared (Dombrowsky et al. 2009; Cummings et al. 2009; Hernandez et al. 2009). In situ and remote sensing data are now routinely assimilated in global and regional ocean models to provide an integrated description of the ocean state. Observation, analysis and forecast products are readily accessible through major data and product servers (Blower et al. 2009). There has been increased attention to the development of products and services and to the demonstration of their utility for applications such as marine environment monitoring, weather forecasting, seasonal and climate prediction, ocean research, maritime safety and pollution forecasting, national security, the oil and gas industry, fisheries management and coastal and shelf-sea forecasting (Davidson et al. 2009; Hackett et al. 2009; Jacobs et al. 2009).

GODAE as an experiment ended in 2008 having achieved most of its goals. It has been demonstrated that global ocean data assimilation is feasible, and GODAE has made important contributions to the establishment of an effective and efficient infrastructure for global operational oceanography that includes the required observing systems, data assembly and processing centres, modelling and data assimilation centres, and data and product servers.
1.3 Key Future Research Priorities in Ocean Forecasting

Although there are still major challenges to face (completing and sustaining the global ocean observing system being an obvious one), global operational oceanography now needs to transition from a demonstration to a permanent and sustained capability. Operational¹ data and products are needed for most applications as well as for climate research. This is critical for applications which cannot develop without operational services. In parallel, continuous improvements of operational oceanography systems are needed to satisfy new requirements (e.g. for coastal zone and ecosystem monitoring and forecasting, and climate monitoring).

¹ Following the GODAE Strategic Plan (2000), "operational" is used here "whenever the processing is done in a routine and regular way, with a pre-determined systematic approach and constant monitoring of performance. With this terminology, regular re-analyses may be considered as operational systems, as may be organized analyses and assessment of climate data".
1.3.1 The Challenges for the Next Decade

Most national forecasting centres have transitioned, or are now transitioning, towards operational or pre-operational status. Ocean forecasting systems are also evolving to satisfy the new requirements just mentioned and must benefit from scientific advances in ocean modelling and data assimilation. International collaboration and coordination of both operational and research activities related to ocean analysis and forecasting must continue during this sustained operational phase. The challenges and expectations are very demanding and can only be met through international collaboration. The main challenges and opportunities for the next decade are summarised below.

During the last decade, new pressing societal issues to which ocean analysis and forecasting can make substantial contributions have emerged. They are now quite diverse and are not limited to open ocean forecasts (although open ocean forecasts will continue to serve major application areas). The most important are:

• The use of data assimilation to provide integrated descriptions of the global ocean state (reanalyses) and to characterise and detect climate change in the ocean;
• The application of ocean prediction techniques to the prediction of climate change (so-called decadal prediction);
• The assessment and characterisation of specific sources of uncertainty in downscaling of climate and climate-change scenario simulations and predictions in studies of the impact of climate change in coastal regions (e.g. extreme events, flooding, ecosystems);
• The development of improved atmospheric and climate forecasts (near coasts, hurricanes/tropical cyclones, monsoons, seasonal);
• Real-time forecasting in near-shore/coastal waters (physics, biogeochemistry and ecosystems) and coupling between the open ocean and coastal areas;
• Ecosystem modelling and the development of ecosystem-based management of marine resources (influence of physical transports and processes on marine life, modelling up to high trophic levels);
• Marine environment monitoring in support of policies (e.g. the European Marine Strategy).

Continuous improvement of operational oceanography systems and the development of new capability are needed to address these new societal needs. This demands state-of-the-art research leadership and calls for dedicated cooperation with international research programs such as CLIVAR, GEOTRACES, SOLAS and IMBER².

² CLIVAR = World Climate Research Program (WCRP) project that addresses Climate Variability and Predictability; IMBER = Integrated Marine Biogeochemistry and Ecosystem Research; GEOTRACES = international study of the global marine biogeochemical cycles of trace elements and their isotopes; SOLAS = Surface Ocean Lower Atmosphere Study.

In the following paragraphs we address some of the main research topics that operational oceanography faces: high-resolution physical modelling, downscaling, biogeochemical and ecosystem modelling, ocean-wave-atmosphere coupling, data assimilation and coupled data assimilation, error estimates, long-term reanalyses and the use of new observations. Major developments in the coming decade will see a maturing of eddy-resolving data-assimilating models and a stronger integration into coupled numerical weather prediction and climate modelling.
1.3.2 Ocean Modelling

The science of turbulent closure schemes is now fairly mature, but there may still be surprises associated with subtle aspects of vertical mixing in the deep ocean that may have important consequences on long time scales. Vertical mixing is also critically important for biogeochemical cycles, because it controls the return of nutrients to the surface euphotic zone, and therefore the magnitude of primary production.

Another area where there is still room for improvement concerns the exchanges of heat, momentum and freshwater across the ocean surface. Accuracy, resolution, and extent (in time ahead) of wind forecasts are the primary limiting factors for sea-state and surge forecasting. Likewise, sea surface heat exchange is clearly a determining factor in forecasting ocean mixed-layer depth and ice formation. In both cases, dynamically coupled ocean-wave-ice-atmosphere models are an essential element in improving atmospheric forcing.

Coastal ocean modelling and forecasting is a major challenge for the scientific community because of the specific and rich dynamics of those regions, and because of the various couplings with the lower atmosphere and exchanges with the near-shore and offshore regions. These issues, needs and challenges have led to the development of a wide range of models of various types. Phenomena of interest include coastal current interactions, the coastal mesoscale, tides and storm surges, tsunamis, shoreline change, coastal upwelling, river plumes and regions of freshwater influence, atmosphere-driven processes, surface waves, and sea ice (Fig. 1.6).

Fig. 1.6  Application of the two-way grid-refinement software AGRIF to the Bay of Biscay, tested in the framework of the MERSEA project (Cailleau et al. 2008). The large-scale model is a 1/3° (Mercator grid) North Atlantic configuration of the NEMO ocean general circulation model. The fine-scale model is a regional configuration of NEMO at a resolution of 1/15° (Mercator grid). Both models are run simultaneously and interactively for years of simulation on either vector or massively parallel supercomputers. The computational surcharge induced by the two-way coupling of the grids is very small (just a few percent). The regional model benefits from the smooth and regular behaviour of the large-scale model at its open boundaries. On longer time scales, the large-scale model benefits from the local improvements brought by the high resolution to the representation of the dynamics in the Bay of Biscay, especially the slope current. The figure displays a sea surface temperature snapshot on 22 March 1996; note the fine scales and intense eddy field of the fine-grid model, but also the continuity at the limit between the two grids.
Coastal ocean systems can have very high spatial gradients in both the vertical and the horizontal, especially near river mouths, requiring the use in models of sophisticated mixing schemes and high-order numerics. The key constraints on the accuracy of these models now lie with the specification of input data (bathymetry, bottom roughness, lateral and surface forcing). In these shallow systems, and especially along exposed shorelines, wave-current interactions play an important role. Measuring and predicting exchanges between the underlying sediment and the water column is critical for coastal biogeochemistry, and is still a key challenge. Sediment models attempt to represent the effects of re-suspension and deposition of particulate material, and their interaction with the circulation, on suspended concentrations (turbidity, important for optical properties and hence primary production) and on bed thickness and composition (geomorphology). Models of these processes are still under active development.

With respect to biological processes, we are faced with the general problem of biogeochemical and ecosystem modelling (used here synonymously): namely, choosing the right level of abstraction and approximation in describing and predicting the structure and function of a complex system with many nested levels of complexity.
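As a deliberately simplified illustration of this abstraction problem, the sketch below integrates a classical nutrient-phytoplankton-zooplankton (NPZ) box model, one of the lowest levels of abstraction used in biogeochemical and ecosystem modelling. The functional forms and parameter values are generic textbook choices, not those of any operational system described in this book.

```python
import numpy as np

# A minimal NPZ (nutrient-phytoplankton-zooplankton) box model.
# Units and parameter values are illustrative only (per-day rates).

def npz_tendencies(N, P, Z, mu=1.0, kN=0.5, g=0.4, kP=0.6, mP=0.05, mZ=0.05):
    """Return d(N, P, Z)/dt for a simple NPZ model."""
    uptake = mu * N / (kN + N) * P          # nutrient-limited phytoplankton growth
    grazing = g * P / (kP + P) * Z          # zooplankton grazing on phytoplankton
    dP = uptake - grazing - mP * P
    dZ = grazing - mZ * Z
    dN = -uptake + mP * P + mZ * Z          # losses are remineralised back to nutrient
    return dN, dP, dZ

# Forward-Euler integration for 60 days with a small time step
N, P, Z = 4.0, 0.1, 0.05                    # initial concentrations (e.g. mmol N m^-3)
dt, ndays = 0.05, 60
for step in range(int(ndays / dt)):
    dN, dP, dZ = npz_tendencies(N, P, Z)
    N, P, Z = N + dt * dN, P + dt * dP, Z + dt * dZ

print(f"After {ndays} days: N={N:.2f}, P={P:.2f}, Z={Z:.2f}")
```

Even at this low level of complexity, the choice of grazing and mortality formulations strongly affects the simulated behaviour, which is one reason why developing and validating biogeochemical models for operational use remains difficult.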
1.3.3 Initialisation and Forecasting

There are still significant challenges in the data-assimilation techniques themselves, and one can expect to see significant improvement there. The assimilation of observations into present-day ocean models is still far from optimal. Improved estimates of the state of the physical ocean, marine ecosystems and ocean-atmosphere interactions will rely upon new cross-cutting research directions in terms of both methods and operational implementations.

In meteorology (the history of which predates the evolution of ocean forecasting), the implementation of data assimilation methodology has followed a progressive path, starting with optimal interpolation and followed by sequential approaches; today most larger NWP centres are increasingly investing in 4D-VAR variational approaches, with a noticeable increase in interest in ensemble approaches. Operational oceanography is today at the stage of applying sequential approaches, but variational methodologies are on the verge of being used, at least for seasonal forecast applications. Because of the specificities of oceanography (e.g. mesoscale non-linearities) it is still unclear whether 4D-VAR is fully applicable (Luong et al. 1998) and further research must be undertaken in this direction. A promising way forward might be the hybridisation of variational and sequential approaches, thus combining the advantages of both methodologies (Robert et al. 2006). However, 4D-VAR systems have not been comprehensively tested for highly non-linear applications. For instance, as we move to higher resolution and longer predictive time scales, the assumptions that underpin VAR systems (e.g. linearity in tangent-linear models) become less valid.

The development of data assimilation for physical coastal ocean models has lagged behind its development for basin-scale models, and is still in its infancy. Current methods need to be tested and enhanced for coastal applications. Data assimilation in coastal models has a vital role to play, not only as a tool to provide short-term forecasts, but more importantly for the rigour it brings to the analysis of model error and to the design of observing systems (see the CSSWG White Paper, De Mey et al. 2007, for a detailed account).
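For orientation, the standard textbook forms of the two families of methods discussed above are recalled below in generic notation: x^b is the background (prior) state, B and R are the background and observation error covariances, y_i the observations, H_i the observation operators and M the model propagator. These are the usual strong-constraint 4D-VAR cost function and the sequential (Kalman-type) analysis step; they are not specific to any system described in this book.

```latex
% Strong-constraint 4D-VAR: find the initial state x_0 that minimises
\begin{equation}
J(\mathbf{x}_0) = \tfrac{1}{2}(\mathbf{x}_0-\mathbf{x}^b)^{\mathrm T}\mathbf{B}^{-1}(\mathbf{x}_0-\mathbf{x}^b)
 + \tfrac{1}{2}\sum_{i=0}^{N}\bigl(H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr)^{\mathrm T}\mathbf{R}_i^{-1}\bigl(H_i(\mathbf{x}_i)-\mathbf{y}_i\bigr),
\qquad \mathbf{x}_i = M_{0\rightarrow i}(\mathbf{x}_0).
\end{equation}

% Sequential (Kalman-type) analysis step used in OI/EnKF-style schemes:
\begin{equation}
\mathbf{x}^a = \mathbf{x}^b + \mathbf{K}\bigl(\mathbf{y}-H(\mathbf{x}^b)\bigr),
\qquad
\mathbf{K} = \mathbf{B}\mathbf{H}^{\mathrm T}\bigl(\mathbf{H}\mathbf{B}\mathbf{H}^{\mathrm T}+\mathbf{R}\bigr)^{-1}.
\end{equation}
```

Hybrid approaches of the kind mentioned above typically replace or augment the static B in these expressions with an ensemble-derived, flow-dependent estimate.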
Biogeochemical modelling and data assimilation are much less mature than physical modelling. Consequently, there is a strong need for both ongoing development and validation of biogeochemical and, ultimately, ecosystem models. The impact of the physical models on the accuracy of the ecosystem models is of particular importance (e.g. Berline et al. 2007). High horizontal- and vertical-resolution physical models are required to resolve the physical features that are critical to the ecosystem. Errors in physical models are problematic and can render outputs from ecosystem models meaningless. Vertical velocities are a particular example, as they are critical for nutrient transport. In coastal areas the correct representation of optical depth is also critical for primary production (Fig. 1.7); this requires accurate suspended sediment concentrations. These requirements for accuracy present a challenge for physical models.

Fig. 1.7  MODIS image of ocean colour off the Australian NW shelf. The figure illustrates the complex processes acting in the coastal zone due to the blending of different time/space scales (e.g. ocean-shelf topographic interaction). Forecasting systems operating in such complex environments require sophisticated multi-scale (nested) models and scale-sensitive observing systems for accurate initialization. (Courtesy: CSIRO Marine and Atmospheric Research)

A major trend in environmental research in the coming decade will be the development of the next generation of weather, climate and Earth system monitoring, assessment, data-assimilation and prediction systems (Shapiro et al. 2008). These systems will no longer focus on individual components of the Earth system (such as the oceans) but aim at treating the complex physical and biogeochemical components as one system. Coupled data assimilation means that observations in one medium impact the state of the other medium; in 4D-VAR, fully coupled assimilation means simultaneous minimization of the cost functions of the component models, e.g. atmosphere and ocean. An example of a less complex system is coupled ocean-atmosphere modelling. Ultimately, truly coupled physical-biogeochemical initialization systems need to be developed, whereby the ocean, sea ice, land surface and atmosphere are initialized in unison. Consequently, a key challenge in data assimilation over the next decade will be the development of data assimilation techniques for Earth system modelling that are fit-for-purpose for a wide range of applications, including ocean-atmosphere weather forecasting, seasonal-to-decadal and climate change prediction.
1.3.4 The Global Ocean Observing System

Over the last 10 years, a global ocean observing system (in situ and remote sensing) has been progressively implemented. The system, primarily designed to serve climate research, is used as a backbone for most operational oceanography applications. Although significant progress has been made (e.g. Argo and Jason are outstanding successes), sustaining the global ocean observing system remains a challenging task (Freeland et al. 2010; Wilson et al. 2010). There is also a pressing need to develop further regional and coastal components and, as discussed above, to extend the measurement capabilities to biogeochemical parameters. This endeavour is clearly beyond the scope of ocean analysis and forecasting teams and involves major international programs or intergovernmental organizations (e.g. WMO and IOC through JCOMM, GOOS and GCOS, GEOSS, CEOS) and research programs (e.g. WCRP, IGBP and SOLAS)³.

³ WMO = World Meteorological Organisation; IOC = Intergovernmental Oceanographic Commission; JCOMM = Joint Technical Commission for Oceanography and Marine Meteorology; GOOS = Global Ocean Observing System; GCOS = Global Climate Observing System; GEOSS = Global Earth Observation System of Systems; CEOS = Committee on Earth Observation Satellites; WCRP = World Climate Research Program; IGBP = International Geosphere-Biosphere Program.

Nowadays, use is made of observations from satellites, autonomous floats, onshore devices (radar, tide gauges etc.), offshore moorings, aircraft, AUVs (Autonomous Underwater Vehicles), VOS (Voluntary Observing Ships) and more. Especially in the coastal zone, more and better observational data, extending over longer periods, are essential if modelling accuracy and capabilities are to be enhanced (Malone et al. 2010). International collaboration is an obvious and valuable means of achieving this goal. While international funding supports some satellite programs (although most of these are still regarded as non-operational), synergistic in situ monitoring presently relies on national funding. Examples are the Argo profiling
floats, the TAO/TRITON array in the Pacific (USA and Japan), the PIRATA array in the Atlantic (France, USA and Brazil) and the IndOOS array in the Indian Ocean (India, USA and Japan). These basin-scale observing systems are subject to international coordination, whereas the design and implementation of coastal ocean observing systems are largely the responsibility of individual national efforts (Fig. 1.8).

Fig. 1.8  Liverpool Bay Coastal Observatory in the Irish Sea, indicating simultaneous multi-parameter measurements and satellite AVHRR sea surface temperatures. (Courtesy Roger Proctor, Proudman Oceanographic Laboratory, UK)

Despite the limited progress in implementing ocean biogeochemical observing systems, there is an increasing user pull for enhanced ocean forecasting capability that includes information about physics, biogeochemistry and, ultimately, ecosystem components. The biogeochemical and physical systems interact through a variety of processes and scales. Most notable is the impact of biology and the associated attenuation depth of light on solar shortwave penetration and thus mixed-layer depth, and, as a corollary, the impact of suspended material on light scattering and penetration and hence on biological production. Consequently, joint assimilation of physical and ecosystem observations is likely to benefit both components, though the challenges involved are manifold.
1.3.5 Observing System Design and Adaptive Sampling

Ocean analysis and forecasting systems are an appropriate and powerful means to assess the impact of the observing system, to identify gaps and to improve the efficiency and effectiveness of the observing system. An enhanced focus on observing system design and adaptive sampling in data-assimilating systems will allow
assessments of individual components of the observing system and provide scientific guidance for improved design and implementation of the ocean observing system. OSEs (Observing System Evaluations) assess the impact of existing individual components of the observing system on forecast skill, whereas OSSEs (Observing System Simulation Experiments) are tools for planning new observing systems.

OSEs undertaken during GODAE demonstrate that global and regional forecast systems strongly depend on the availability of high-resolution altimeter data (e.g. Pascual et al. 2006). Significant degradation of the performance of these forecasting systems (e.g. forecast skill) and applications (e.g. the offshore industry in the Gulf of Mexico) was thus observed when the number of available altimeters was reduced from three to two due to the unavailability of ENVISAT data. OSSEs in the Indian Ocean have provided an estimate of the respective contributions of Argo, XBTs and moorings to the observing system in that basin (e.g. Sakov and Oke 2008). These are extremely valuable tools to develop an improved understanding of the ocean and to help the design of global and regional observing systems.

While OSEs and OSSEs provide an integrated, but methodology-dependent, performance assessment of an observational array, recently proposed approaches based on the representer matrix spectrum (e.g. Hénaff et al. 2008) focus on the capacity of a given array to detect model errors. This can be achieved independently of any data assimilation method, e.g. from stochastic modelling, or as part of an Ensemble Kalman Filter.

An evolving method for optimising observing arrays is adaptive sampling (e.g. Wilkin et al. 2005). The key idea of adaptive sampling is that an initial estimate or observation can detect correlations in the environment, providing information about the number of future observing platforms needed, or about the frequency and spatial distribution required, to sample certain features in the environment (e.g. eddies, fronts). Thus, adaptive sampling can save costs compared to dense, non-adaptive sampling and, simultaneously, provide high-resolution information where needed.
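The toy experiment below illustrates the OSE/OSSE logic on synthetic data: a simple optimal-interpolation analysis of a known "truth" field is computed with and without a second set of observation locations, and the resulting analysis errors are compared. The field, platforms and error statistics are entirely artificial and chosen only for illustration.

```python
import numpy as np

# Toy OSE/OSSE: sample a synthetic "truth" at two sets of observation locations
# ("platforms") and compare the analysis error of a simple optimal-interpolation
# analysis with and without the second platform.

rng = np.random.default_rng(0)
n = 50                                         # grid points
x = np.linspace(0.0, 1.0, n)
truth = np.sin(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)

# Background = zero field; Gaussian-shaped background error covariance
L = 0.1
B = np.exp(-0.5 * ((x[:, None] - x[None, :]) / L) ** 2)

def analysis(obs_idx, obs_err=0.05):
    """OI analysis of noisy observations of the truth at the given grid indices."""
    H = np.zeros((len(obs_idx), n))
    H[np.arange(len(obs_idx)), obs_idx] = 1.0
    y = truth[obs_idx] + obs_err * rng.standard_normal(len(obs_idx))
    R = obs_err**2 * np.eye(len(obs_idx))
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
    return K @ y                               # background is zero everywhere

platform_a = np.arange(0, n, 10)               # sparse "platform A"
platform_b = np.arange(5, n, 10)               # additional "platform B"

err_a = np.sqrt(np.mean((analysis(platform_a) - truth) ** 2))
err_ab = np.sqrt(np.mean((analysis(np.concatenate([platform_a, platform_b])) - truth) ** 2))

print(f"RMS analysis error, platform A only:  {err_a:.3f}")
print(f"RMS analysis error, platforms A + B:  {err_ab:.3f}")   # expect a smaller error
```

Real OSEs and OSSEs follow the same pattern but use full forecasting systems and realistic observation networks, which is what makes them expensive and, as noted above, methodology-dependent.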
1.4 Scientific Objectives of GODAE OceanView

The GODAE OceanView Science Team (GOVST) was established in 2008 with the mission to define, monitor and promote actions aimed at coordinating and integrating research associated with multi-scale and multidisciplinary ocean analysis and forecasting systems, thus enhancing the value of GODAE OceanView outputs for research and applications. Over the next decade, the science team will provide international coordination and leadership in:

• The consolidation and improvement of global and regional analysis and physical forecasting systems.
• The progressive development and scientific testing of the next generation of ocean analysis and forecasting systems, covering biogeochemical and
ecosystems as well as physical oceanography, and extending from the open ocean into the shelf seas and coastal waters.
• The exploitation of this capability in other applications (weather forecasting, seasonal and decadal prediction, climate change detection and its coastal impacts).
• The assessment of the contribution of the various components of the observing system and scientific guidance for improved design and implementation of the ocean observing system.

Members of GODAE OceanView, as representatives of national ocean forecasting systems, adhere to the same principles of free, open and timely exchange of data and products, sharing of scientific results, and experience in developing applications which were important factors in the success of GODAE. The societal benefits from these systems will only be realised through joint work with other teams of experts. Potential benefits include improvements in the day-to-day management of coastal waters, the management of marine ecosystems, weather prediction from hours to decades ahead, and assessment of the expected impacts of climate change on the oceans and coastal waters. The GOVST develops linkages with other groups and reports on its progress, achievements and recommendations.

As GODAE prototype systems transition to operational systems, international collaboration on product standardization and interoperability between systems must be maintained and developed. The WMO/IOC Joint Technical Commission for Oceanography and Marine Meteorology (JCOMM) provides an appropriate intergovernmental mechanism for this coordinating role and has recently established an Expert Team on Operational Oceanographic Forecasting Systems (ET-OOFS) within its Services Program Area for this purpose. GODAE OceanView informally reports to JCOMM and has strong links with the JCOMM ET-OOFS.

GODAE OceanView coordinates the development of new capabilities, in cooperation with other relevant international research programs, through a number of task teams. The initial list of GODAE OceanView Task Teams includes:

• Intercomparison and Validation Task Team: The team pursues activities developed during GODAE. It coordinates and promotes the development of scientific validation and intercomparison of operational oceanography systems. Activities include the definition of metrics to assess the quality of analyses and forecasts (e.g. forecast skill), both for physical and biogeochemical parameters, and the setting up of specific global and regional intercomparison experiments. Metrics related to specific applications are also defined. The team liaises with the JCOMM ET-OOFS team for operational implementation.
• Observing System Evaluation Task Team: One of the aims of GODAE OceanView is to formulate more specific requirements for observations on the basis of an improved understanding of data utility. The team is jointly formed by GODAE OceanView and the GOOS Ocean Observations Panel for Climate (OOPC). Through the task team, GODAE OceanView
20
•
•
A. Schiller
and OOPC partners get organized at the international level to provide consistent and scientifically justified responses to agencies and organizations in charge of sustaining the global and regional ocean observing systems used for ocean monitoring and forecasting at short-range, seasonal and decadal time-scales. This activity requires harmonized protocols for observation impact assessment (e.g. OSEs and OSSEs), tools for routine production of appropriate diagnostics using NWP-derived methods, common sets of metrics for intercomparison of results, and objective methodologies which can be used to provide recommendations to the appropriate agencies and organizations. In the longer term consideration will need to be given to an evaluation strategy for identifying observing system requirements for different, possibly user-specific, applications. Coastal Ocean and Shelf Seas Task Team: This task team deals with scientific issues in support of multidisciplinary analysis and forecasting of the coastal transition zone and shelf/open ocean exchanges in relation with the larger-scale efforts. The specific objectives include: (1) discuss and promote the uses of GODAE OceanView products and results for coastal ocean forecasting systems and for coastal applications in a wider community; (2) discuss and foster integration of the varied routine sources of information in coastal ocean forecasting systems: large-scale forecasts, satellite observations, coastal observatories, etc.; discuss and support the development of coastal observing systems in terms of science and technology; (3) discuss the key physical and biogeochemical processes which have the greatest impact on modeling and forecasting quality and their utility for applications; this includes validation and forecast verification; (4) discuss and promote state-of-the-art methodology such as two-way coupling, unstructured-grid modeling, downscaling, data assimilation and array design. Marine Ecosystem Monitoring and Prediction Task Team: The integration of new models and assimilation components for ocean biogeochemistry and marine ecosystem monitoring and prediction will be required to bridge the gap between the current status and new applications in areas such as fisheries management, marine pollution and carbon cycle monitoring. The Task Team has been set up with the goal to define, promote and coordinate actions between developers of operational systems and ecosystem modelling experts, in tight connection with IMBER. The objectives of the task team are (1) to design appropriate ecosystem modelling and assimilation strategies that will be compatible with the functionalities of operational systems; (2) to develop numerical experiments aimed at improving, assessing and demonstrating the value of operational products for marine ecosystem monitoring and prediction; (3) to expand the concept of the “GODAE metrics” to biogeochemical variables and to coordinate intercomparison exercises across international groups to assess implementation progress and performances; (4) to identify the essential sets of physical and biogeochemical observations required to constrain the coupled models and to formulate relevant recommendations to further develop the global ocean observing system; (5) to promote and organise educational activities (summer schools, training workshops, etc.) aimed at sharing experience between young
1â•… Ocean Forecasting in the 21st Century
21
scientists, operational oceanographers and marine ecosystem experts. In addition to the link with IMBER, the task team articulates its activities with other relevant international programs such as GEOTRACES and SOLAS.
1.5  Summary and Conclusions

Over the past 40 years, numerical modelling has developed rapidly in scope (from hydrodynamics to ecology) and resolution (from one-dimensional models with 10² elements to three-dimensional models with 10⁸ elements), exploiting the contemporaneous development of computing power. Although we have made significant progress with the implementation of the global ocean observing system, concurrent development in observational capabilities has not yet been achieved in areas demanding high spatial resolution, such as coastal domains (despite exciting advances in areas such as remote sensing and sensor technologies). Nowadays, diverse applications involving ocean forecasting systems range from short-term prediction of the three-dimensional circulation and density fields, waves, tides and storm surges to coupled ocean-atmosphere-land scenario forecasting of the effects of global climate change on terrestrial, fluvial and ecological systems over millennia. The accuracy of model simulations depends on the availability and suitability (accuracy, resolution and duration) of both observational and linked meteorological, oceanic and hydrological model data to set up, force and assess calculations. Modelling is at a stage where major and sustained investments are required in infrastructure and organisation, e.g. access to supercomputers, software maintenance and data exchange (Shapiro et al. 2008).

Many research approaches developed under GODAE are just at their beginning and will require ongoing international research collaboration and coordination. There are still many challenges related to the development of services and links with end users (which are beyond the scope of this chapter). On the scientific side, many of the fundamental modelling issues that were evoked in the book edited by Chassignet and Verron (1998) are still unresolved. They represent new challenges and require step changes to our current efforts. An incomplete list of scientific challenges follows:
• Ocean modelling (for a more comprehensive list of ocean modelling issues see Griffies et al. 2010):
− Mesoscale eddying models can exhibit numerical diapycnal diffusion far larger than is observed. Spurious diapycnal mixing originating from numerical advection remains an issue, with the consequences of variable and/or eddy-resolving resolutions and dynamical meshes largely unexplored. Reducing the level of spurious diapycnal mixing in models facilitates collaborative efforts to incorporate mixing theories into simulations, which in turn helps to focus observational efforts to measure mixing and determine its impact on ocean circulation. Progress has been made to rectify this problem through improvements to tracer advection schemes, but further work is needed to quantify these advances.
− Largely unexplored areas of research involve the local scaling of viscosity and diffusivity coefficients. Lateral viscous friction remains the default approach for closing the momentum equation in ocean models. However, the large levels of lateral viscous dissipation used by models do not mimic energy dissipation in the real ocean.
− The ocean floor should be represented continuously across finely resolved mesh regions to faithfully simulate topographically influenced flows. This property is routinely achieved with terrain-following vertical coordinates, yet optimal strategies for unstructured mesh models remain under investigation.
− Large-scale ocean-waves-atmosphere coupling remains an area of active research. While wind-induced surface waves contribute primarily to mixing through generation of internal waves at the ocean surface, geostrophic motions may also sustain wave-induced interior mixing. In addition, tidal waves can affect the whole water column.
− Submesoscale fronts and related instabilities are ubiquitous, and those active in the upper ocean provide a relatively rapid restratification mechanism that should be parameterized in ocean simulations, even those resolving the mesoscale.
− The coupling between physical, biogeochemical and ecosystem models in terms of consistency of scales, processes resolved and consistent parameterisations requires further research.
• Observing systems:
− The exploration of the impact of new types of observations on forecasting systems (e.g. remotely sensed sea surface salinity, high resolution wide-swath altimetry) requires dedicated efforts and resources.
− In collaboration with international programs such as IMBER and SOLAS, research is under way on the implementation of real-time biogeochemical and ecosystem ocean observing systems, e.g. cost-effective sensor technologies.
− An enhanced focus on observing system design and its analogue of adaptive sampling will allow assessments of individual components of the observing system and provide scientific guidance for improved design and implementation of the ocean observing system.
• Data assimilation:
− The development of data assimilation tools such as coupled atmosphere-ocean initialisation techniques that are fit for purpose for a wide range of applications, including short-range, seasonal-to-decadal and climate change prediction (in collaboration with WMO programs), is work in progress.
− Efficient data assimilation techniques for biogeochemical and ecosystem modules of ocean circulation models are being developed that are fit for operational purposes.
− Another research focus is the representation of model and data errors using ensemble methods based on various forecasting systems, thus delivering more accurate background error estimates.
− Multi-scale data assimilation and the joint estimation of interior and open boundary solutions in nested systems remain largely unresolved.
• Coastal ocean:
− Users increasingly demand an extension of the critical path of routinely available global information (satellite and in-situ observations, nowcasts and forecasts) to coastal and littoral applications.
− A prerequisite for an enhanced user uptake of coastal ocean forecasts is the enhancement of existing systems and the development of new coastal ocean forecasting systems that downscale (and upscale, i.e. two-way coupling) the global basin-wide model estimates as part of the local data assimilation problem, resolving the rich scale interactions, tides and high frequencies, and experimenting with novel approaches such as coupled modelling and unstructured grid modelling.
− These forecasting tools will need to be able to contribute to the objective design of observing systems for the coastal ocean (such as new satellite sensors, coastal observatories, etc.), to the use of such observations in the local forecasting systems, and to the upscaling of the information to the basin-scale systems.

Consequently, ocean forecasting in the twenty-first century still faces many challenges, with time scales ranging from weather to climate. It is inherently an international issue, requiring broad collaboration to span the global oceans; it is beyond the capability of any one country. Over the past decade, GODAE, through its International GODAE Steering Team (IGST), has coordinated and facilitated the development of global and regional ocean forecasting systems and has made excellent progress. GODAE as an experiment ended in 2008. The next decade will spawn new research activities in ocean forecasting under the auspices of the GODAE OceanView Science Team that will build on the success of GODAE. GODAE OceanView will promote the development of ocean modelling and assimilation in a consistent framework to optimize mutual progress and benefit. It will promote the associated utilization of improved ocean analyses and forecasts and will provide a means to assess the relative contributions of and requirements for observing systems, and their respective priorities. The GODAE OceanView programme will result in the long-term international collaboration and cooperation that is required for the next, sustained, phase of operational oceanography in the twenty-first century. The grand vision and key research challenge is to develop coupled initialisation systems for numerical weather prediction and eddy-resolving ocean models. These systems will contribute to and benefit from recent progress in Earth system modelling. With increasing computing resources, the next decade is also likely to see an even stronger emphasis on "seamless" integrations across time and space scales,
covering global, regional and coastal/near-shore ocean prediction systems and addressing an increasing number of user applications.

Acknowledgements  This paper was written with inputs from the former members of the GODAE International Science Team and, more recently, the members of the GODAE OceanView Science Team and their Patrons groups. The author would particularly like to thank Pierre-Yves Le Traon, Mike Bell, Eric Dombrowsky, Kirsten Wilmer-Becker, Pierre Brasseur, Pierre De Mey, Roger Proctor, Jacques Verron, Peter Oke and John Parslow for their contributions through many discussions on issues of relevance to this paper.
References

Bell MJ, Lefèbvre M, Le Traon P-Y, Smith N, Wilmer-Becker K (2009) GODAE: the global ocean data assimilation experiment. Oceanogr Mag 22(3):14–21 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Berline L, Brankart JM, Brasseur P, Ourmières Y, Verron J (2007) Improving the physics of a coupled physical–biogeochemical model of the North Atlantic through data assimilation: impact on the ecosystem. J Mar Syst 64(1–4):153–172
Black A, Hall S (1865) Black's general atlas of the world. A&C Black, Edinburgh
Blower JD, Blanc F, Clancy M, Cornillon P, Donlon C, Hacker P, Haines K, Hankin SC, Loubrieu T, Pouliquen S, Price M, Pugh TF, Srinivasan A (2009) Serving GODAE data and products to the ocean community. Oceanogr Mag 22(3):70–79 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Cailleau S, Fedorenko V, Barnier B, Blayo E, Debreu L (2008) Comparison of different numerical methods used to handle the open boundary of a regional ocean circulation model of the Bay of Biscay. Ocean Model 25(1–2):1–16. doi:10.1016/j.ocemod.2008.05.009
Chassignet EP, Verron J (1998) Ocean modeling and parameterization. In: Chassignet EP, Verron J (eds) Proceedings of the NATO advanced study institute on ocean modeling and parameterization, Kluwer Academic, Dordrecht, p 451. Les Houches, France, 20–30 Jan 1998 (NATO ASI Series C, 516)
Clark C, In Situ Observing System Authors, Wilson S, Satellite Observing System Authors (2009) An overview of global observing systems relevant to GODAE. Oceanogr Mag 22(3):22–33 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Coriolis G (1835) Mémoire sur les équations du mouvement relatif des systèmes de corps. J de l'École Royale Polytechnique 15:142
Cummings J, Bertino L, Brasseur P, Fukumori I, Kamachi M, Martin MJ, Mogensen K, Oke P, Testut CE, Verron J, Weaver A (2009) Ocean data assimilation systems for GODAE. Oceanogr Mag 22(3):96–109 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Davidson FJM, Allen A, Brassington GB, Breivik Ø, Daniel P, Kamachi M, Sato S, King B, Lefevre F, Sutton M, Kaneko H (2009) Applications of GODAE ocean current forecasts to search and rescue and ship routing. Oceanogr Mag 22(3):176–181 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
De Mey P, Craig P, Kindle J, Ishikawa Y, Proctor R, Thompson K, Zhu J (2007) Towards the assessment and demonstration of the value of GODAE results for coastal and shelf seas and forecasting systems. GODAE White Paper, GODAE Coastal and Shelf Seas Working Group (CSSWG), 2nd edn, p 79
Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanogr Mag 22(3):80–95 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Ekman VW (1905) On the influence of the earth's rotation on ocean currents. Arkiv för Matematik, Astronomi och Fysik 2(11):52
Ertel H (1942) Ein neuer hydrodynamischer Erhaltungssatz. Naturwissenschaften 30:543–544
Franklin B (1786) A letter from Dr. Benjamin Franklin, to Mr. Alphonsus le Roy, member of several academies at Paris. Containing sundry maritime observations. At sea, on board the London packet, Capt. Truxton, August 1785. Transactions of the American Philosophical Society, held at Philadelphia, for Promoting Useful Knowledge II:294–329. Includes chart and diagrams. Held by NOAA Central Library, Silver Spring, MD
Freeland H et al (2010) Argo—a decade of progress. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2, Venice, Italy, 21–25 Sept 2009. ESA Publication WPP-306
Fu LL, Cazenave A (2001) Satellite altimetry and earth sciences. A handbook of techniques and applications. Academic, San Diego
Griffies S et al (2010) Problems and prospects in large-scale ocean circulation models. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2, Venice, Italy, 21–25 Sept 2009. ESA Publication WPP-306
Hackett B, Comerma E, Daniel P, Ichikawa H (2009) Marine oil pollution prediction. Oceanogr Mag 22(3):168–175 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Hernandez F, Bertino L, Brassington G, Chassignet E, Cummings J, Davidson F, Drévillon M, Garric G, Kamachi M, Lellouche J-M, Mahdon R, Martin MJ, Ratsimandresy A, Regnier C (2009) Validation and intercomparison studies with GODAE. Oceanogr Mag 22(3):128–143 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
International GODAE Steering Team (2000) The Global Ocean Data Assimilation Experiment strategic plan. GODAE Report No. 6
IPCC First Assessment Report (1990) Scientific assessment of climate change—report of Working Group I, vol 1. Houghton JT, Jenkins GJ, Ephraums JJ (eds) Cambridge University Press, UK, p 365
Jacobs GA, Woodham R, Jourdan D, Braithwaite J (2009) GODAE applications useful to navies throughout the world. Oceanogr Mag 22(3):182–189 (Special issue on the Revolution of Global Ocean Forecasting—GODAE: 10 years of achievement)
Le Hénaff M, De Mey P, Marsaleix P (2008) Assessment of observational networks with the Representer Matrix Spectra method—application to a 3-D coastal model of the Bay of Biscay. Ocean Dyn 59(1):3–20 (Special issue, 2007 GODAE Coastal and Shelf Seas Workshop, Liverpool, UK)
Luong B, Blum J, Verron J (1998) A variational method for the resolution of a data assimilation problem in oceanography. Inverse Probl 14:979–997
Malone T, DiGiacomo P, Muelbert J, Parslow J, Sweijd N, Yanagi T, Yap H, Blanke B (2010) Building a global system of systems for the coastal ocean. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2, Venice, Italy, 21–25 Sept 2009. ESA Publication WPP-306
Maury MF (1855) Physical geography of the sea. Harper & Brothers, New York
Murray JS, Hjort J (1912) The depths of the ocean: a general account of the modern science of oceanography based largely on the scientific researches of the Norwegian steamer Michael Sars in the North Atlantic. Macmillan, London
Pascual A, Faugere Y, Larnicol G, Le Traon P-Y (2006) Improved description of the ocean mesoscale by combining four satellite altimeters. Geophys Res Lett 33:L02611. doi:10.1029/2005GL024633
Rennell J, Purdy J (1832) An investigation of the currents of the Atlantic Ocean, and of those which prevail between the Indian Ocean and the Atlantic. In: Purdy J (ed) Nabu Press, London
Robert C, Blayo E, Verron J (2006) Comparison of reduced-order sequential, variational and hybrid data assimilation methods in the context of a tropical Pacific ocean model. Ocean Dyn 56(5–6):624–633
Sakov P, Oke PR (2008) Objective array design: application to the tropical Indian Ocean. J Atmos Ocean Technol 25:794–807
Semtner AJ, Chervin RM (1992) Ocean general circulation from a global eddy resolving model. J Geophys Res 97:5493–5550
Shapiro M, Shukla J, Hoskins B, Church J, Trenberth K, Béland M, Brasseur G, Wallace M, McBean G, Caughey J, Rogers D, Brunet G, Barrie L, Henderson-Sellers A, Burridge D, Nakazawa T, Miller M, Bougeault P, Anthes R, Toth Z, Palmer T (2008) The socioeconomic and environmental benefits of a revolution in weather, climate and earth-system prediction: a weather, climate and earth-system prediction project for the 21st century. Group on Earth Observations, Tudor Rose, Geneva, pp 136–138
Siedler G, Church J, Gould J (eds) (2001) Ocean circulation and climate: observing and modelling the global ocean. Academic Press, San Diego
Smith N, Lefebvre M (1997) Monitoring the oceans in the 2000s: an integrated approach. The Global Ocean Data Assimilation Experiment (GODAE). International Symposium, Biarritz
Sverdrup HU, Johnson MW, Fleming RH (1942) The oceans: their physics, chemistry, and general biology. Prentice-Hall, Englewood Cliffs, p 1087
Sverdrup HU (1947) Wind-driven currents in a baroclinic ocean; with application to the equatorial currents of the eastern Pacific. Proc Natl Acad Sci U S A 33:318–326
Thomson CW, Murray J, Nares GS, Thomson FT (1880–1895) Report on the scientific results of the voyage of H.M.S. Challenger during the years 1873–76 under the command of Captain George S. Nares, R.N., F.R.S. and the late Captain Frank Tourle Thomson, R.N. Prepared under the superintendence of the late Sir C. Wyville Thomson and now of John Murray; published by order of Her Majesty's Government. H.M. Stationery Office
Wilkin JL, Arango HG, Haidvogel DB, Lichtenwalner CS, Glenn SM, Hedstrom KS (2005) A regional ocean modeling system for the long-term ecosystem observatory. J Geophys Res 110:C06S91. doi:10.1029/2003JC002218
Wilson S et al (2010) Ocean surface topography constellation: the next 15 years in satellite altimetry. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2, Venice, Italy, 21–25 Sept 2009. ESA Publication WPP-306
Wunsch C (1978) The North Atlantic general circulation west of 50°W determined by inverse methods. Rev Geophys Space Phys 16(4):583–620
Zebiak SE, Cane MA (1987) A model of El Niño–Southern Oscillation. Mon Weather Rev 115:2262–2278
Part II
Oceanographic Observing System
Chapter 2
Satellites and Operational Oceanography

Pierre-Yves Le Traon
Abstract  The chapter starts with an overview of satellite oceanography and its role and use for operational oceanography. The main principles of satellite oceanography techniques are then summarized. We then describe the key techniques of radar altimetry, sea surface temperature and ocean colour satellite measurements. This includes measurement principles, data processing issues and the use of these data for operational oceanography. SAR, scatterometry, sea ice and sea surface salinity measurements are also briefly described. The main prospects are given in the conclusion.
2.1  Introduction

There are very strong links between satellite oceanography and operational oceanography. The development of operational oceanography has been mainly driven by the development of satellite oceanography capabilities. The ability to observe the global ocean in near real time at high space and time resolution is indeed a prerequisite to the development of global operational oceanography and its applications. The first ocean parameter to be globally monitored from space was the sea surface temperature, from radiometers on board meteorological satellites in the late 1970s. It is, however, the advent of satellite altimetry in the late 1980s that led to the development of ocean data assimilation and global operational oceanography. In addition to providing all-weather observations, sea level from satellite altimetry is an integral of the ocean interior and provides a strong constraint on the 4D ocean state estimation. The satellite altimetry community was also keen to develop further the use of altimetry, and this required an integrated approach merging satellite and in-situ observations with models. The GODAE demonstration was thus phased with the Jason-1 and ENVISAT altimeter missions (Smith and Lefebvre 1997). Satellite oceanography is now a major component of operational oceanography. Data are usually assimilated in ocean models but they can also be used directly for
applications. An overview of satellite oceanography will be given here, focusing on the most relevant issues for operational oceanography. The chapter is organized as follows. Section 2.2 provides an overview of satellite oceanography and its role and use for operational oceanography. The main operational oceanography requirements are summarized. The complementary role of in-situ observations is also emphasized. The main principles of satellite oceanography and general data processing issues are described in Sect. 2.3. We then detail the key techniques of radar altimetry and gravimetry, sea surface temperature and ocean colour satellite measurements in Sects. 2.4, 2.5 and 2.6. This includes measurement principles, data processing issues and the use of these data for operational oceanography. SAR, scatterometry, sea ice and the new sea surface salinity measurements are briefly described in Sect. 2.7. The main prospects are given in the conclusion.
2.2  Role of Satellites for Operational Oceanography

2.2.1  The Global Ocean Observing System and Operational Oceanography

Operational oceanography critically depends on the near real time availability of high quality in-situ and remote sensing data with a sufficiently dense space and time sampling. The quantity, quality and availability of data sets directly impact the quality of ocean analyses and forecasts and associated services. Observations are required to constrain ocean models through data assimilation and also to validate them. Products derived from the data themselves can also be directly used for applications (e.g. in the case of a parameter observed from space at high resolution). This requires an adequate and sustained global ocean observing system. Climate and operational oceanography applications share the same backbone system (GOOS, GCOS, JCOMM). Operational oceanography has, however, specific requirements for high resolution measurements. Operational oceanography requirements have been presented in the GODAE strategic plan and in Le Traon et al. (2001). They have been refined and detailed in Clark and Wilson (2009) and Oke et al. (2009).
2.2.2  The Unique Contribution of Satellite Observations

Satellites provide long-term, continuous, global, high space and time resolution data for key ocean parameters: sea level and ocean circulation, sea surface temperature (SST), ocean colour, sea ice, waves and winds. These are the core observations required to constrain global, regional and coastal ocean monitoring
and forecasting systems. They are also needed to validate them. Only satellite measurements can, in particular, provide observations at high space and time resolution to partly resolve the mesoscale variability and coastal variability. Satellite data can also be directly used for applications (e.g. SAR for sea ice and oil pollution monitoring, ocean colour for water quality monitoring). Sea surface salinity is a new and important parameter that could be operationally monitored from space; the demonstration is underway with the European Space Agency SMOS mission (and later on with the NASA/CONAE Aquarius mission).
2.2.3  Main Requirements

The main requirement for operational oceanography is to have long-term, continuous and near real time access to the core operational satellite observations of sea level, SST, ocean colour, sea ice, waves and winds. For a given parameter, this generally requires several satellites flying simultaneously to get sufficient space and time resolution. The main requirements can be summarized as follows (e.g. Le Traon et al. 2006; Clark and Wilson 2009):
• In addition to meteorological satellites, a high precision (AATSR-class) SST satellite is needed to give the highest absolute SST accuracy. A microwave mission is also needed to provide an all-weather global coverage.
• At least three or four altimeters are required to observe the mesoscale circulation. This is also useful for significant wave height measurements. A long-term series of a high accuracy altimeter system (the Jason satellites) is needed to serve as a reference for the other missions and for the monitoring of climate signals.
• Ocean colour is increasingly important, in particular in coastal areas. At least two satellites are required.
• Two scatterometers are required to globally monitor the wind field at high spatial resolution.
• Two SAR satellites are required for waves, sea-ice characteristics and oil slick monitoring.
These minimum requirements have been only partly met over the past ten years. Long-term continuity and the transition from research to operational mode remain a major challenge (e.g. Clark and Wilson 2009). Specific requirements for altimetry, SST and ocean colour are discussed in the following sections.
2.2.4  Role of In-Situ Data

Satellite observations need to be complemented by in-situ observations. First, in-situ data are needed to calibrate satellite observations. Most algorithms used
to transform satellite observations (e.g. brightness temperatures) into geophysical quantities are partly based on in-situ/satellite match-up databases. In-situ data are then used to validate satellite observations and to monitor the long-term stability of satellite observations. The stability of the different altimeter missions is, for example, commonly assessed by comparing the altimeter sea surface height measurements with those from tide gauges (Mitchum 2000). Other examples include the validation of altimeter velocity products with drifter data (e.g. Pascual et al. 2009), the systematic validation of satellite SST with in-situ SST from drifting buoys and the use of dedicated ship-mounted radiometers to quantify the accuracy of satellite SST (Donlon et al. 2008). The comparison of in-situ and satellite data can also provide useful indications on the quality of in-situ data (e.g. Guinehut et al. 2008). The comparison of in-situ and satellite data is also useful to check the consistency between the different data sets before they are assimilated in an ocean model (e.g. Guinehut et al. 2006). In-situ data are also (and mainly) mandatory to complement satellite observations and to provide measurements of the ocean interior. Only the joint use of high resolution satellite data with precise (but sparse) in-situ observations of the ocean interior has the potential to provide a high resolution description and forecast of the ocean state.
2.2.5  Data Processing Issues

Satellite data processing includes different steps: level 0 and level 1 (from telemetry to calibrated sensor measurements), level 2 (from sensor measurements to geophysical variables), level 3 (space/time composites of level 2 data) and level 4 (merging of different sensors, data assimilation). Processing from level 0 to level 2 is generally carried out as part of the satellite ground segments. Assembly of level 2 data from different sensors, intercalibration of level 2 products, and higher level data processing is usually done by specific data processing centers or thematic assembly centers. The role of these data processing centers is to provide modelling and data assimilation centers with the real time and delayed mode data sets required for validation and data assimilation. This also includes uncertainty estimates that are critical to an effective use of data in modelling and data assimilation systems. Links with data assimilation centers are needed, in particular, to organize feedback on the quality control performed at the level of data assimilation centers (e.g. comparing an observation with a model forecast), on the impact of data sets and data products in the assimilation systems and on new or future requirements. High level data products (level 3 and 4) are also needed for applications (e.g. a merged altimeter surface current product for marine safety or offshore applications) and can be used to validate data assimilation systems (e.g. statistical versus dynamical interpolation) and complement products derived through modelling and data as-
similation systems. It is important, however, to be fully aware of limitations of high level satellite products (e.g. gridded SST or sea level data sets) when using them.
2.2.6  Use of Satellite Data for Assimilation into Ocean Models

This is discussed at length in other chapters. Three important issues are emphasized here:
1. There can be large differences in data quality between real time and delayed mode (reprocessed) data sets. Depending on applications, trade-offs between time delay and accuracy often need to be considered.
2. Error characterisation is mandatory for data assimilation, and a proper characterisation of error covariances can be quite complex for satellite observations. Data error covariances should always be tested and checked as part of the data assimilation systems.
3. It is much better in theory, and for advanced assimilation schemes, to use raw data (level 2, or in some cases level 1 when the model can provide the data needed for level 1 processing). The data error structure is generally more easily defined. The model and the assimilation scheme should also do a better high level processing (e.g. a model forecast should provide a better background than climatology or persistence). However, in practice, this is not always true. Some high level data processing (e.g. correcting biases or large scale errors, intercalibration) is often needed as it cannot be easily done within the assimilation systems.
2.3  Overview of Satellite Oceanography Techniques

2.3.1  Passive/Active Techniques and Choice of Frequencies

There are two main types of satellite techniques to observe the ocean.¹ Passive techniques measure the natural radiation emitted from the sea or reflected solar radiation. Active or radar techniques send a signal and measure the signal received after its reflection at the sea surface. In both cases, the propagation of the signal through the atmosphere and the emission from the atmosphere itself must be taken into account to isolate the sea surface signal. The intensity and frequency distribution of the radiation that is emitted or reflected from the ocean surface allows the inference of its properties. The polarization of the radiation is also often used in microwave remote sensing.

¹ Gravimetry satellites (e.g. GRACE, GOCE), which measure the earth's gravity field and its variations, do not fall into these two categories.
Satellite systems operate at different frequencies depending on the signal to be derived. Visible (400–700 nm) and infra-red (0.7–20 μm) frequencies are used for ocean colour and SST measurements. Passive (radiometry) microwave systems (1–30 cm) are used for SST in cloudy situations, wind, sea ice and sea surface salinity retrievals. Radars operate in the microwave bands and provide measurements of sea surface height, wind speed and direction, wave spectra, sea ice cover and types, and surface roughness. Radar pulses are emitted obliquely (15°–60°) (SAR, scatterometer) or vertically (altimetry). The choice of frequencies is limited by other usages (e.g. radio, cellular phones, military and civilian radars, satellite communications). These constraints are particularly important at microwave frequencies in the range 1–10 GHz, which puts strong pressure on the frequencies used for earth remote sensing. The atmosphere also greatly affects the transmission of radiation between the ocean surface and the satellite sensors. The presence of fixed concentrations of atmospheric gases (e.g. O2, CO2, O3) and of water vapor means that only a limited number of windows exist in the visible, infra-red and microwave bands for ocean remote sensing. Even at these frequencies, the propagation effects through the atmosphere must be taken into account and corrected for. Propagation effects through the ionosphere must also be taken into account. Clouds are a strong limitation for visible and infrared measurements. There are also technological constraints on the choice of frequencies. The resolution of a given sensor is generally related to the ratio between the observed wavelength (λ) and the antenna diameter (D). For antenna diameters of a few meters, the typical resolution around 1 GHz (wavelength of 30 cm) is about 100 km, while at 30 GHz (wavelength of 1 cm) the resolution is about 10 km. Radar altimeters use pulse-limited techniques (which are much less sensitive to mispointing errors). Their footprint size is related to the pulse duration and is much smaller than for a beam-limited sensor. Synthetic Aperture Radar uses the motion of the satellite to generate a very long synthetic antenna (e.g. 20 km for ASAR) and thus to provide very high resolution measurements (up to a few meters).
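The λ/D scaling quoted above can be illustrated with a rough, order-of-magnitude calculation. The short sketch below is only indicative; the orbit altitude and antenna diameter are illustrative assumptions rather than the parameters of any particular mission.

```python
# Rough diffraction-limited footprint of a real-aperture (beam-limited) sensor:
# footprint ~ H * wavelength / antenna_diameter. Orbit height and antenna size
# are illustrative assumptions, not parameters of any specific mission.
C = 3.0e8  # speed of light (m/s)

def footprint_km(freq_hz, antenna_d_m, orbit_h_km=800.0):
    """Approximate ground footprint (km) of a beam-limited sensor."""
    wavelength_m = C / freq_hz
    return orbit_h_km * wavelength_m / antenna_d_m

for freq_ghz in (1.0, 30.0):
    print(f"{freq_ghz:4.0f} GHz, 2.5 m antenna: "
          f"~{footprint_km(freq_ghz * 1e9, 2.5):.0f} km footprint")
```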
2.3.2  Satellite Orbits and Measurement Characteristics

Orbits for ocean satellites are geostationary, polar or inclined orbits. A geostationary orbit is one in which the satellite is always in the same position with respect to the rotating Earth. The satellite orbits at an elevation of approximately 36,000 km because that produces an orbital period equal to the period of rotation of the Earth. By orbiting at the same rate, in the same direction as the Earth, the satellite appears stationary. Geostationary satellites provide a large field of view (up to 120°) at very high frequency, enabling coverage of weather events. Because of the high altitude, the spatial resolution is of a few km, while it is 1 km or less for polar orbiting satellites. Because a geostationary orbit must be in the same plane as the Earth's rotation, that is the equatorial plane, it provides distorted images of the polar regions. Five or six geostationary meteorological satellites can provide a global coverage of the earth (for latitudes below 60°).
Polar-orbiting satellites provide a more global view of the Earth by passing from pole to pole, observing a different portion of the Earth with each orbit due to the Earth's own rotation. Orbiting at an altitude of 700–800 km, these satellites have an orbital period of approximately 100 min. These satellites usually operate in a sun-synchronous orbit: the satellite passes the equator and any given latitude at the same local solar time each day. Inclined orbits have an inclination between 0° (equatorial orbit) and 90° (polar orbit). They are used, in particular, to observe tropical regions (e.g. TMI on the TRMM mission). High accuracy altimeter satellites such as TOPEX/Poseidon and Jason use higher altitude and non-sun-synchronous orbits to reduce atmospheric drag and (mainly) to avoid aliasing of the main tidal signals. Depending on the instrument type (along-track, imaging or swath), frequencies and antennas (see above), the sampling pattern of a given satellite will be different. In addition, in the visible and infrared frequencies, cloud cover can strongly reduce the effective sampling.
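As a simple consistency check on the altitudes and periods quoted in this subsection, the period of a circular orbit follows from Kepler's third law. The sketch below uses standard values of the Earth's gravitational parameter and radius; it reproduces the roughly 100-minute period of a 700–800 km orbit and the 24-h period at geostationary altitude.

```python
# Period of a circular orbit from Kepler's third law, T = 2*pi*sqrt(a^3/GM),
# with a measured from the Earth's centre. GM and the Earth radius are standard values.
import math

GM = 3.986004e14     # Earth's gravitational parameter (m^3 s^-2)
R_EARTH = 6.371e6    # mean Earth radius (m)

def orbital_period_s(altitude_km):
    a = R_EARTH + altitude_km * 1e3
    return 2.0 * math.pi * math.sqrt(a**3 / GM)

print(f"750 km altitude:    {orbital_period_s(750) / 60:6.1f} min")    # ~100 min
print(f"35,786 km altitude: {orbital_period_s(35786) / 3600:6.1f} h")  # ~23.9 h (geostationary)
```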
2.3.3  Radiation Laws and Emissivity

2.3.3.1  Radiation from a Blackbody

Planck's law describes the rate of energy emitted by a blackbody as a function of frequency or wavelength. A blackbody absorbs all the radiation it receives and emits radiation at a maximum rate for its given temperature. Planck's law gives the intensity of radiation L_λ emitted per unit surface area into a fixed direction (solid angle) from the blackbody as a function of wavelength (or frequency). The Planck law can be expressed through the following equation:

L_\lambda = \frac{2hc^2}{\lambda^5\left[\exp\!\left(hc/\lambda kT\right) - 1\right]}

where T is the temperature, c the speed of light (3.00·10⁸ m s⁻¹), h Planck's constant (6.63·10⁻³⁴ J s), k Boltzmann's constant (1.38·10⁻²³ J K⁻¹) and L_λ the spectral radiance per unit of wavelength and solid angle in W m⁻³ sr⁻¹. The Planck law gives a distribution that peaks at a certain wavelength; the peak shifts to shorter wavelengths for higher temperatures. The Wien displacement law and the Stefan-Boltzmann law are two other useful radiation laws that can be derived from the Planck law. The Wien law gives the wavelength of the peak of the radiation distribution (λ_max ≈ 2898/T μm, with T in kelvin), while the Stefan-Boltzmann law gives the total energy E emitted at all wavelengths by the blackbody (E = σT⁴, with σ the Stefan-Boltzmann constant). Thus, the Wien law explains the shift of the peak to shorter wavelengths as the temperature increases, while the Stefan-Boltzmann law explains the growth in the height of the curve as the temperature increases. Notice that this growth is very abrupt, since it varies as the fourth power of the temperature. The Rayleigh-Jeans approximation (L_λ = 2kcT/λ⁴) holds for wavelengths much greater than the wavelength of the peak of the blackbody radiation. This approximation is valid over the microwave band.
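As a brief numerical illustration of these radiation laws, the sketch below evaluates Planck's law at a typical sea surface temperature, checks the position of the Wien peak, and compares the Planck and Rayleigh-Jeans radiances in the infrared and microwave; the constants are standard values and the chosen wavelengths are illustrative.

```python
# Planck spectral radiance, the Wien peak and the Rayleigh-Jeans approximation,
# evaluated at a typical sea surface temperature. Constants are standard values.
import math

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m/s)
K = 1.381e-23   # Boltzmann constant (J/K)

def planck(wavelength_m, temp_k):
    """Blackbody spectral radiance L_lambda (W m^-3 sr^-1)."""
    return 2.0 * H * C**2 / (wavelength_m**5 *
                             (math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0))

def rayleigh_jeans(wavelength_m, temp_k):
    """Long-wavelength approximation, accurate in the microwave band."""
    return 2.0 * K * C * temp_k / wavelength_m**4

T = 290.0                                            # typical sea surface temperature (K)
print("Wien peak: %.1f um" % (2.898e-3 / T * 1e6))   # ~10 um, in the thermal infrared
for wavelength in (11e-6, 0.03):                     # 11 um (infrared) and 3 cm (microwave)
    print("lambda = %-8g m   Planck = %.3e   Rayleigh-Jeans = %.3e"
          % (wavelength, planck(wavelength, T), rayleigh_jeans(wavelength, T)))
```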
2.3.3.2  Graybodies and Emissivity

Most bodies radiate less efficiently than a blackbody. The emissivity e is defined as the ratio of the graybody radiance to that of the blackbody. It is non-dimensional and lies between 0 and 1. The emissivity generally depends on wavelength (λ) and polarization and has a directional dependence. e can be considered a physical surface property and is a key quantity for ocean remote sensing. A blackbody absorbs all the energy it receives. A graybody absorbs only part of it, and the remaining part is reflected and/or transmitted. The absorptivity is equal to the emissivity, as a surface in equilibrium must absorb and emit energy at the same rate (Kirchhoff's law). Similarly, the reflectivity is equal to 1 − e. The brightness temperature (BT) is defined as BT = e·T, where T is the (physical) temperature. In the microwave band, it is proportional to the radiance L_λ.

2.3.3.3  Retrieval of Geophysical Parameters for Microwave Radiometers

The brightness temperature is an integrated measurement that includes all surface and atmosphere emitted power. Depending on frequency, it is more sensitive to a given parameter. Physical retrieval algorithms for geophysical parameters, such as the sea surface temperature, sea surface wind speed, sea ice or sea surface salinity, are derived from a radiative transfer model (RTM), which computes the brightness temperatures that are measured by the satellite as a function of these variables. The RTM is based on a model for the sea surface emissivity and a model of microwave absorption in the Earth's atmosphere. The ocean surface emissivity (or reflectivity, see above) depends on the dielectric constant ε (which is a function of frequency, water temperature and salinity), small-scale sea surface roughness and foam, as well as viewing geometry and polarization. The retrieval of a given parameter is possible through the inversion of a set of brightness temperatures measured at different frequencies and/or at different incidence angles. Inversion methods minimize the difference between measured and simulated (through an RTM) brightness temperatures. Statistical or empirical inversions are also often used, given uncertainties in RTMs. They use a regression formalism (e.g. parametric, neural network) to find the best relation between the brightness temperatures and the geophysical parameter to be retrieved.
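The retrieval principle described above can be sketched schematically: a forward model maps the geophysical state to brightness temperatures at several channels, and the retrieval adjusts the state to minimise the misfit with the measured brightness temperatures. In the sketch below, the linear "radiative transfer model", its channel sensitivities and the noise level are all invented for illustration; an operational retrieval would use a full RTM (or a trained statistical model) and proper error weighting.

```python
# Schematic retrieval: a toy forward model maps (SST, wind speed) to brightness
# temperatures at three channels; Gauss-Newton iterations minimise the misfit with
# the "measured" BTs. The channel sensitivities and noise level are invented for
# illustration and do not represent a real radiative transfer model.
import numpy as np

def toy_rtm(sst_k, wind_ms):
    """Toy forward model: brightness temperatures (K) for three channels."""
    emissivity = np.array([0.55, 0.60, 0.65]) + np.array([0.002, 0.004, 0.001]) * wind_ms
    return emissivity * sst_k

def retrieve(bt_obs, first_guess=(285.0, 5.0), n_iter=10):
    """Adjust (SST, wind) to minimise the squared brightness-temperature misfit."""
    x = np.array(first_guess, dtype=float)
    for _ in range(n_iter):
        f0 = toy_rtm(*x)
        # Finite-difference Jacobian of the forward model (3 channels x 2 parameters).
        jac = np.column_stack([(toy_rtm(x[0] + 0.1, x[1]) - f0) / 0.1,
                               (toy_rtm(x[0], x[1] + 0.1) - f0) / 0.1])
        dx, *_ = np.linalg.lstsq(jac, bt_obs - f0, rcond=None)
        x += dx
    return x

truth = (290.0, 8.0)                                   # "true" SST (K) and wind speed (m/s)
bt_measured = toy_rtm(*truth) + np.random.default_rng(0).normal(0.0, 0.1, 3)
print("retrieved SST (K), wind (m/s):", np.round(retrieve(bt_measured), 2))
```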
2.4  Altimetry

2.4.1  Overview

Satellite altimetry is the most essential observing system required for global operational oceanography. It provides global, real time, all-weather sea level measurements (SSH) with high space and time resolution. Sea level is directly related to the ocean circulation through the geostrophic approximation (see Sect. 2.4.5). Sea level is also an integral of the ocean interior and is a strong constraint for inferring the
4D ocean circulation through data assimilation. Altimeters also measure significant wave height, which is essential for operational wave forecasting. High resolution from multiple altimeters is required to adequately represent ocean eddies and associated currents (the “ocean weather”) in models. Only altimetry can constrain the 4D mesoscale circulation in ocean models which is required for most operational oceanography applications.
2.4.2  Measurement Principles

An altimeter is an active radar that sends a microwave pulse towards the ocean surface. A precise clock on board measures the return time t of the pulse, from which the distance or range d between the satellite and the sea surface is derived (d = ct/2). The range precision is of a few centimeters for a distance of 800–1,300 km. The altimeter also measures the backscatter power (related to surface roughness and wind) and the significant wave height. An altimeter mission generally includes a bi-frequency radar altimeter (usually in Ku and C or S band) for ionospheric corrections, a microwave radiometer for the water vapor correction, and a tracking system for precise orbit determination (laser, GPS, DORIS) that provides the orbit altitude relative to a given earth ellipsoid. Altimeter missions provide along-track measurements every 7 km along repetitive tracks (e.g. every 10 days for the TOPEX/Poseidon and Jason series and every 35 days for ERS and ENVISAT). The distance between tracks is inversely proportional to the repeat period (e.g. about 315 km at the equator for TOPEX/Poseidon and 90 km for ERS/ENVISAT). The main measurement for a radar altimeter is the sea surface height (SSH) relative to a given earth ellipsoid. The SSH is derived as the difference between the orbit altitude and the range measurement. The SSH precision depends on orbit and range errors. Altimeter range measurements are affected by a large number of errors (propagation effects in the troposphere and ionosphere, electromagnetic bias, errors due to inaccurate ocean and terrestrial tide models, the inverse barometer effect, residual geoid errors). Some of these errors can be corrected with dedicated instrumentation (e.g. the dual-frequency altimeter, the radiometer). For a comprehensive description of altimeter measurement principles, the reader is referred to Chelton et al. (2001).
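The basic altimetric bookkeeping described above (range from the two-way travel time, SSH as orbit altitude minus the corrected range) can be written down in a few lines; the travel time, orbit altitude and lumped correction used below are illustrative values only, not data from any mission.

```python
# Altimetric bookkeeping: range from the two-way travel time of the pulse (d = c*t/2)
# and SSH as orbit altitude minus the corrected range. All numbers are illustrative.
C = 2.998e8  # speed of light (m/s)

def altimeter_range_m(two_way_travel_time_s):
    return C * two_way_travel_time_s / 2.0

def sea_surface_height_m(orbit_altitude_m, range_m, corrections_m=0.0):
    # corrections_m lumps together propagation and geophysical corrections
    # (troposphere, ionosphere, sea-state bias, tides, inverse barometer, ...).
    return orbit_altitude_m - (range_m + corrections_m)

t = 2.0 * 1.336e6 / C          # travel time for a ~1336 km altitude (Jason-like orbit)
r = altimeter_range_m(t)
print(f"range = {r / 1e3:.1f} km, "
      f"SSH = {sea_surface_height_m(1.336e6 + 30.0, r, 2.3):.1f} m")
```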
2.4.3  Geoid and Repeat-Track Analysis

The sea surface height SSH(x, t) measured by altimetry can be described by:

SSH(x, t) = N(x) + \eta(x, t) + \varepsilon(x, t)
where N is the geoid, η the dynamic topography and ε the measurement error. The quantity of interest for oceanographers is the dynamic topography (see next section).
Present geoids are generally not accurate enough to estimate the absolute dynamic topography globally, except at long wavelengths. The variable part of the dynamic topography, η′ = η − <η> (or SLA, for sea level anomaly), is, however, easily extracted using the so-called repeat-track method. For a given track, η′ is obtained by removing the mean profile over several cycles, which contains the geoid N and the mean dynamic topography <η>:

SLA(x, t) = SSH(x, t) − <SSH(x)>_t = \eta(x, t) − <\eta(x)>_t + \varepsilon(x, t)
To get the absolute signal, one thus has to use a climatology, or to use existing geoids together with an altimetric Mean Sea Surface (MSS), or both. One can also rely on a model mean. Gravimetric missions (CHAMP, GRACE) are now providing much more accurate geoids, and GOCE should almost "solve" the problem. Even with GOCE, however, repeat-track analysis will still be needed because the small scales of the geoid (below 50–100 km) will not be precisely known. GOCE will be used with an altimetric MSS to derive <η>_t, which can then be added to η′.
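A minimal sketch of the repeat-track computation follows: SSH profiles from successive cycles of the same track are stacked, the temporal mean profile (which contains the geoid and the mean dynamic topography) is removed at each along-track point, and the residual is the SLA. The synthetic "geoid", eddy signal and noise amplitudes below are assumptions chosen only to make the example self-contained.

```python
# Repeat-track method on synthetic data: remove the temporal mean SSH profile
# (geoid + mean dynamic topography) to isolate the sea level anomaly (SLA).
import numpy as np

rng = np.random.default_rng(1)
n_cycles, n_points = 30, 200                     # repeat cycles x along-track points

geoid_plus_mdt = 40.0 * np.sin(np.linspace(0, 2 * np.pi, n_points))   # static part (m)
eddy_signal = 0.15 * rng.standard_normal((n_cycles, n_points))        # time-variable part (m)
noise = 0.03 * rng.standard_normal((n_cycles, n_points))              # measurement error (m)

ssh = geoid_plus_mdt[None, :] + eddy_signal + noise   # SSH(x, t) along the repeat track

mean_profile = ssh.mean(axis=0)                  # <SSH(x)>_t, contains N(x) + <eta(x)>
sla = ssh - mean_profile                         # SLA(x, t) = eta'(x, t) + noise

print("std of raw SSH: %.2f m" % ssh.std())
print("std of SLA:     %.2f m" % sla.std())
```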
2.4.4  High Level Data Processing Issues and Products

The SSALTO/DUACS system is the main multi-mission altimeter data center used today for operational oceanography. It aims to provide directly usable, high quality near real time and delayed mode (for reanalyses and research users) altimeter products to the main operational oceanography and climate centers in Europe and worldwide. The main processing steps are product homogenization, data editing, orbit error correction, reduction of long wavelength errors, and production of along-track data and maps of sea level anomalies. Major progress has been made on higher level processing issues such as orbit error reduction (e.g. Le Traon and Ogor 1998) and the intercalibration and merging of altimeter missions (e.g. Le Traon et al. 1998; Ducet et al. 2000; Pascual et al. 2006). The SSALTO/DUACS weekly production moved to a daily production in 2007 to improve the timeliness of data sets and products. A new real time product was also developed for specific real time mesoscale applications. The mean dynamic topography (MDT) is an essential reference surface for altimetry. Added to the sea level anomalies, it provides the absolute sea level and ocean circulation (see previous section). After a preliminary MDT computed in 2003, a new MDT, called RIO-05, was computed in 2005. It is based on the combination of GRACE data, drifting buoy velocities, in-situ T, S profiles and altimeter measurements. The MDT was tested and is now used by several GODAE modelling and forecasting centers. It has a positive impact on ocean analysis quality and forecast skill. An updated version was recently delivered (CNES-CLS09). Major improvement is expected soon with the use of data from the GOCE mission.
2.4.5  Sea Level Measurement Content

Satellite altimetry provides measurements of the dynamic topography (i.e. sea level relative to the geoid). Assuming geostrophy and hydrostatic balance, one has:

f v = \frac{1}{\rho_0}\frac{\partial P}{\partial x}    (2.1)

-f u = \frac{1}{\rho_0}\frac{\partial P}{\partial y}    (2.2)

\frac{\partial P}{\partial z} = -\rho g    (2.3)

with u, v the zonal and meridional currents, P the pressure, ρ the density, g gravity and f = 2Ω sin θ the Coriolis parameter. At the surface P = ρ g η (η = sea surface topography relative to the geoid), thus there is a direct relationship between the dynamic topography and the surface (geostrophic) current:

f v = g\frac{\partial \eta}{\partial x}, \qquad -f u = g\frac{\partial \eta}{\partial y}    (2.4)

Taking the vertical derivative of (2.1) and using the hydrostatic balance (2.3), one gets the thermal wind equation. It means that horizontal density variations are associated with vertical shear (baroclinic motions):

f\frac{\partial v}{\partial z} = -\frac{g}{\rho_0}\frac{\partial \rho}{\partial x}    (2.5)

The integration of (2.5) from z_0 to z_1 yields:

v(z_1) = v(z_0) - \frac{g}{f}\int_{z_0}^{z_1}\frac{1}{\rho_0}\frac{\partial \rho}{\partial x}\,dz
\quad\text{or}\quad
v(z_1) = v(z_0) + \frac{g}{f}\frac{\partial \eta_s}{\partial x}
\quad\text{with}\quad
\eta_s(z_0, z_1) = -\int_{z_0}^{z_1}\frac{\rho}{\rho_0}\,dz    (2.6)

η_s is the steric height; it is generally defined as η_s(bottom, surface). At the surface, one has:

v(z_0) + \frac{g}{f}\frac{\partial \eta_s}{\partial x} = \frac{g}{f}\frac{\partial \eta}{\partial x}
\;\Rightarrow\;
\eta = \eta_s + \frac{P_{z_0}}{\rho_0\, g}    (2.7)

\text{with}\quad v(z_0) = \frac{1}{f \rho_0}\frac{\partial P_{z_0}}{\partial x}    (2.8)
The dynamic topography (measured by altimetry) is thus the sum of a steric height term (the integral of density anomalies, generally referred to as the baroclinic component) and a bottom pressure term (the barotropic component). Sea level is thus more than a "surface" measurement. It corresponds to a signal over the full depth of the ocean and provides a strong constraint for inferring (together with in-situ measurements) the 4D ocean structure through data assimilation.
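Equation (2.4) is the relation used in practice to turn gridded maps of dynamic topography into surface geostrophic currents. The sketch below applies it with centred finite differences to a synthetic Gaussian eddy; the grid spacing, latitude and eddy amplitude are illustrative assumptions, not values from any altimeter product.

```python
# Surface geostrophic velocities from a gridded dynamic topography, Eq. (2.4),
# using centred finite differences on a synthetic (illustrative) Gaussian eddy.
import numpy as np

G = 9.81                      # gravity (m/s^2)
OMEGA = 7.2921e-5             # Earth's rotation rate (rad/s)

def surface_geostrophic_velocity(eta, lat_deg, dx_m, dy_m):
    """u, v (m/s) from dynamic topography eta (m) on a regular grid."""
    f = 2.0 * OMEGA * np.sin(np.deg2rad(lat_deg))
    deta_dy, deta_dx = np.gradient(eta, dy_m, dx_m)   # gradients along y (rows) and x (cols)
    u = -(G / f) * deta_dy    # -f u = g d(eta)/dy
    v = (G / f) * deta_dx     #  f v = g d(eta)/dx
    return u, v

# Synthetic eddy: 20 cm amplitude, ~100 km radius, on a 10 km grid at 35 degrees S.
x = np.arange(-300e3, 300e3, 10e3)
xx, yy = np.meshgrid(x, x)
eta = 0.20 * np.exp(-(xx**2 + yy**2) / (100e3)**2)

u, v = surface_geostrophic_velocity(eta, lat_deg=-35.0, dx_m=10e3, dy_m=10e3)
print("max swirl speed: %.2f m/s" % np.hypot(u, v).max())
```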
2.4.6  Operational Oceanography Requirements

Le Traon et al. (2006) have defined the main priorities for altimeter missions in the context of the European GMES (Global Monitoring for Environment and Security) Marine Core Service. Tables 2.1 and 2.2 give the requirements for different applications of altimetry and the characteristics of altimeter missions.

Table 2.1  User requirements for different applications of altimetry
Application area | Accuracy^a (cm) | Spatial resolution (km) | Revisit time (days) | Priority
1. Climate applications and reference mission | 1 | 300–500 | 10–20 | High
2. Ocean nowcasting/forecasting for mesoscale applications | 3 | 50–100 | 7–15 | High
3. Coastal/local | 3 | 10 | 1 | Low^b
^a For the given resolution. ^b Limited by feasibility.

Table 2.2  Altimeter mission characteristics
Class | Orbit | Mission characteristics | Revisit interval (days) | Track separation at the equator (km)
A | Non-sun-synchronous | High accuracy for climate applications and to reference other missions | 10–20 | 150–300
B | Polar | Medium-class accuracy | 20–35 | 80–150

The main operational oceanography requirements for satellite altimetry can be summarized as follows:
1. There is a need to maintain a long time series of a high accuracy altimeter system (the Jason series) to serve as a reference mission and for climate applications. This requires one class A altimeter with an overlap between successive missions of at least 6 months.
2. The main requirement for medium to high resolution altimetry would be to fly three class B altimeters in addition to the Jason series (class A). Most operational oceanography applications (e.g. marine security, pollution monitoring) require high resolution surface currents that cannot be adequately reproduced without a high resolution altimeter system. Recent studies (e.g. Pascual et al. 2006) show
that at least three, but preferably four, altimeter missions are needed for monitoring the mesoscale circulation. This is particularly needed for real time nowcasting and forecasting. Pascual et al. (2009) showed that four altimeters in real time provide results similar to those of two altimeters in delayed mode. Such a scenario would also provide improved operational reliability. Moreover, it would enhance the spatial and temporal sampling for monitoring and forecasting significant wave height. In parallel, there is a need to develop and test innovative instrumentation (e.g. wide-swath altimetry with the NASA SWOT mission) to better answer existing and future operational oceanography requirements for high to very high resolution (e.g. mesoscale/submesoscale and coastal dynamics). There is also a need to improve nadir altimetry technology (resolution, noise) and to develop smaller and cheaper instruments that could be embarked on a constellation of small satellites. The use of the Ka band (35 GHz) allows, in particular, a major reduction in the size and weight of the altimeter. It will be tested for the first time with the CNES/ISRO SARAL satellite scheduled for launch in late 2011.
2.5  Sea Surface Temperature

2.5.1  Sea Surface Temperature Measurements and Operational Oceanography

Sea surface temperature (SST) is a key variable for operational oceanography and for assimilation into ocean dynamical models. SST is strongly related to air-sea interaction processes and provides a means to correct for errors in forcing fields (heat fluxes, wind). It also characterizes the mesoscale variability of the upper ocean (eddies, frontal structures) at very high resolution (a few km). SST data are often directly used for operational oceanography applications. They provide useful indices (e.g. climate change, upwelling, thresholds). SST data can also be used to derive high resolution velocity fields (e.g. Bowen et al. 2002). Accurate, stable, well resolved maps of SST are essential for climate monitoring and climate change detection. They are also central to Numerical Weather Prediction, for which the role of high resolution SST measurements has recently been demonstrated (e.g. Chelton 2005).
2.5.2  Measurement Principles

Infrared radiometers operate in wavebands around 3.7, 10.5 and 11.5 μm, where the atmosphere is almost transparent. The brightness temperature measured by infrared radiometers differs from the actual temperature of the observed surface because of the non-unit emissivity and the effect of the atmosphere. Emissivity at IR frequencies is between 0.98 and 0.99 (close to a black body). Atmospheric correction is based on a multispectral approach, whereby the differences between brightness temperatures measured at different wavelengths are used to estimate the contribution of the atmosphere to the signal. At 10 μm, the solar irradiance reaching the top of the atmosphere is about 1/300 of the sea surface emittance. At 3.7 μm, the incoming solar irradiance is of the same order as the surface emittance; as a result, this wavelength can be used during nighttime only. Different algorithms are thus used for nighttime and daytime. There is no infrared way of measuring SST below cloud. The first priority is thus to detect cloud through a variety of methods. For cloud detection, thermal and near-infrared waveband thresholds are used, as well as different spatial coherency tests. The consequences of poor cloud detection are low biases in SST climatic averages and "false hits" of cloud that can hide frontal and other dynamical structures. Geostationary infra-red sensors can see whenever the cloud breaks. Microwave sensors operate at several frequencies. Retrieval of SST is done at 7 and/or 11 GHz. Higher frequency channels (19–37 GHz) are used to precisely estimate the attenuation due to oxygen, water vapor and clouds. The polarization ratio (horizontal versus vertical) of the measurements is used to correct for sea surface roughness effects. The great advantage of microwave measurements compared to infra-red ones is that SST can be retrieved even through non-precipitating clouds, which is very beneficial in terms of geographical coverage.
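The multispectral (split-window) correction described above can be illustrated with a toy two-channel estimator: the 11–12 μm brightness temperature difference, which increases with atmospheric water vapour, is used to correct the atmospheric attenuation of the surface signal. The coefficients below are placeholders of realistic magnitude only; operational coefficients are regressed against matched in-situ (e.g. drifting buoy) observations.

```python
# Toy split-window SST estimate from two infrared brightness temperatures.
# Coefficients are illustrative placeholders, not operational values.
def split_window_sst_c(bt11_k, bt12_k, a0=-283.2, a1=1.035, a2=2.58):
    """Toy multichannel SST estimate (deg C) from ~11 and ~12 um brightness temperatures."""
    return a0 + a1 * bt11_k + a2 * (bt11_k - bt12_k)

# A moist-atmosphere example: both channels read colder than the sea surface, the
# 12 um channel more so because water vapour absorbs more strongly there.
print("SST estimate: %.2f degC" % split_window_sst_c(bt11_k=288.5, bt12_k=287.2))
```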
2.5.3 SST Infra-Red and Microwave Sensors

Infra-red radiometers such as the Advanced Very High Resolution Radiometer (AVHRR) on board operational meteorological polar orbiting satellites offer good horizontal resolution (1 km) and potentially global coverage, with the important exception of cloudy areas. However, their accuracy (0.4–0.5 K, derived from the difference between collocated satellite and buoy measurements) is limited by the radiometric quality of the AVHRR instrument and by the correction of atmospheric effects. Geostationary satellites (e.g. the GOES and MSG series) carry radiometers with infrared window channels similar to those of the AVHRR instrument. Their horizontal resolution is coarser (3–5 km), but their great contribution comes from their high temporal sampling. Pre-operational demonstrators for advanced measurement of SST suitable for climate studies include the Along Track Scanning Radiometer ((A)ATSR) series of instruments, which have improved on-board calibration and make use of dual views at nadir and 55° incidence angle. The along track scanning measurement provides an improved atmospheric correction, leading to an accuracy of better than 0.2 K (O'Carroll et al. 2008). The main drawback of these instruments is their limited coverage, due to a much narrower swath than the AVHRR instruments. Several microwave radiometers have also been developed and flown over the last 10 years (e.g. AMSR, TMI). The horizontal resolution of these products is around 25 km and their accuracy around 0.6–0.7 K.
2.5.4 Key Developments in SST Data Processing

During the past ten years, a concerted effort to understand satellite and in situ SST observations has taken place, leading to a revolution in the way we approach the provision of SST data to the user community. GODAE, recognizing the importance of high resolution SST data sets for ocean forecasting, initiated the GODAE High Resolution SST Pilot Project (GHRSST-PP) to capitalize on these developments and develop a set of dedicated products and services. There have been key developments in the processing of SST data sets over the last 10 years and, as a result, new or improved products are now available. A full description of the GHRSST-PP is provided in Donlon et al. (2009). Data processing issues are summarized in Le Traon et al. (2009). A satellite measures the so-called skin temperature, i.e. the temperature over a depth of a few tens of microns (infra-red) up to a few mm only (microwave). Diurnal warming changes the SST over a layer of 1–10 m; the effect can be particularly large in regions of low wind speed and high solar radiation. GHRSST has defined the foundation SST as the temperature of the water column free of diurnal temperature variability. A key issue in SST data processing is to correct satellite SST measurements for skin and diurnal warming effects in order to provide precise estimates of the foundation SST. Night and day SST data from different satellites can then be merged through optimal interpolation or a data assimilation system. Several new analyzed high resolution SST products have been produced, in particular in the framework of the GHRSST-PP. These high resolution data sets are estimated by optimal interpolation methods merging SST satellite measurements from both infrared and microwave sensors. The pre-processing consists mainly of screening and quality control of the retrieved observations from each individual dataset and of constructing a coherent merged multi-sensor set of the most relevant and accurate observations (level 3). The merging of these observations requires a method for bias estimation and correction (relative to a chosen reference, currently AATSR). The gap-free foundation SST field is finally computed from the merged set of selected observations using an objective analysis method. The first guess is either a climatology or a previous map.
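The merging step can be illustrated with a toy optimal interpolation: observations minus the first guess are weighted by covariances and added back to the guess at each grid point. The Gaussian covariance, error variances and numbers below are simplifying assumptions for illustration only; operational GHRSST-type analyses use far more elaborate covariance models, bias correction and quality control.

```python
import numpy as np

def oi_analysis(x_obs, y_obs, obs, bg_obs, x_grid, y_grid, bg_grid,
                L=50.0, obs_err_var=0.04, bg_err_var=0.25):
    """Toy optimal interpolation of scattered SST observations onto grid points.

    bg_obs / bg_grid : first guess (climatology or previous map) at the
                       observation and grid locations
    L                : assumed isotropic Gaussian covariance length scale (km)
    """
    def cov(dx, dy):
        return bg_err_var * np.exp(-(dx ** 2 + dy ** 2) / (2.0 * L ** 2))

    # (B + R) in observation space, with uncorrelated observation errors
    B_oo = cov(x_obs[:, None] - x_obs[None, :],
               y_obs[:, None] - y_obs[None, :]) + obs_err_var * np.eye(obs.size)
    # Covariance between analysis grid points and observation locations
    B_go = cov(x_grid[:, None] - x_obs[None, :],
               y_grid[:, None] - y_obs[None, :])
    weights = np.linalg.solve(B_oo, obs - bg_obs)   # (B+R)^-1 (obs - guess)
    return bg_grid + B_go @ weights                 # analysis = guess + update

# Hypothetical example: three observations analysed onto two grid points (km, deg C)
x_obs, y_obs = np.array([0.0, 30.0, 80.0]), np.array([0.0, 10.0, -5.0])
obs, bg_obs = np.array([18.4, 18.9, 17.6]), np.full(3, 18.0)
x_grid, y_grid = np.array([10.0, 60.0]), np.array([5.0, 0.0])
print(oi_analysis(x_obs, y_obs, obs, bg_obs, x_grid, y_grid, np.full(2, 18.0)))
```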
2.5.5 Operational Oceanography Requirements

Table 2.3, from Le Traon et al. (2006), summarises weather, climate and operational oceanography requirements for sea surface temperature. No single sensor is adequate to meet the key requirements for SST. To remedy this, GHRSST-PP has established an internationally accepted approach to blending SST data from different sources that complement each other (see previous section). For this to work effectively, there must be an assemblage of four distinct types of satellite SST missions in place at any time, as defined in Table 2.4 (from Le Traon et al. 2006).
Table 2.3 User requirements for SST provision

Application area         Temperature accuracy (K)   Spatial resolution (km)   Revisit time   Priority
1. Weather prediction    0.2–0.5                    10–50                     6–12 h         High
2. Climate monitoring    0.1                        20–50                     8 day          High
3. Ocean forecasting     0.2                        1–10                      6–12 h         High
Table 2.4 Minimum assemblage of missions required to meet the need for operational SST

A. Two polar orbiting meteorological satellites with infra-red radiometers; generates the basic global coverage.
   Radiometer wavebands: 3 thermal IR (3.7, 11, 12 μm), 1 near-IR, 1 Vis
   Nadir resolution: ~1 km. Swath width: ~2,500 km
   Coverage/revisit: day and night global coverage by each satellite

B. Polar orbiting dual-view radiometer; SST accuracy approaching 0.1 K, used as reference standard for other types.
   Radiometer wavebands: 3 thermal IR (3.7, 11, 12 μm), 1 near-IR, 1 Vis, each with dual view
   Nadir resolution: ~1 km. Swath width: ~500 km
   Coverage/revisit: Earth coverage in ~4 days

C. Polar orbiting microwave radiometer optimised for SST retrieval; coarse resolution coverage of cloudy regions.
   Radiometer wavebands: requires channels at ~7 and ~11 GHz
   Nadir resolution: ~50 km (25 km pixels). Swath width: ~1,500 km
   Coverage/revisit: Earth coverage in 2 days

D. Infra-red radiometers on geostationary platforms, spaced around the Earth.
   Radiometer wavebands: 3 thermal IR (3.7, 11, 12 μm), 1 near-IR, 1 Vis
   Nadir resolution: 2–4 km. Coverage: Earth disk from 36,000 km altitude
   Revisit: sample interval <30 min
The priority expressed by the international SST community, through GHRSST, is to continue to provide a type B (ATSR class) sensor. Its on-board calibration system and especially its dual-view methodology allow AATSR to deliver the highest achievable absolute accuracy of SST, robustly independent of factors such as stratospheric aerosols from major volcanic eruptions or tropospheric dust, which cause significant biases in other infra-red sensors. Because its absolute calibration (for dual view) is better than 0.2 K, it is used for bias correction of the other data sources before assimilation into models or analyses. A type C sensor (microwave) is also required beyond AMSR-E on Aqua.
2.5.6 Conclusions

Satellite SST observations are essential for operational oceanography and for weather and climate forecasting.
SST data are systematically used for global and large scale climate applications and to correct for large scale biases in ocean models (due to forcing field errors). Thanks to GHRSST, major improvements in data processing and in the use of different types of sensors have occurred. New high resolution products (from level 2 to level 4) are now available and used by ocean analysis and forecasting systems. High resolution SST data provide invaluable information on mesoscale and submesoscale phenomena. Much remains to be done, however, to fully use the high resolution information content of SST observations in ocean models. This is an area of active research.
2.6 Ocean Colour

2.6.1 Ocean Colour Measurements and Operational Oceanography

Over the last decade, the applications of satellite-derived ocean colour data have made important contributions to biogeochemistry, physical oceanography, ecosystem assessment, fisheries oceanography and coastal management (IOCCG 2008). Ocean colour measurements provide global monitoring of chlorophyll (phytoplankton biomass) and associated primary production. They can be used to calibrate and validate biogeochemical, carbon and ecosystem models. Progress towards assimilation of ocean colour data is less mature than for SST or SSH, but there are already convincing examples of assimilation of Chla in ocean models. Use of K and PAR (see below) is needed to define the in-water light field that drives photosynthesis in ocean ecosystem models and that is required to model and forecast the ocean surface temperature. The data products needed to support ocean analysis and forecasting models of open ocean biogeochemical processes are the concentration of chlorophyll-a (Chla), total suspended material (TSM), the optical diffuse attenuation coefficient (K) and the photosynthetically available radiation (PAR). Ocean colour is a tracer of dynamical processes (mesoscale and submesoscale), and this is of great value for model validation. It also plays a role in air-sea CO2 exchange monitoring. At regional and coastal scales, many applications require ocean colour measurements: monitoring of water quality, measurement of suspended sediment, sediment transport models, measurement of dissolved organic material, validation of regional/coastal ecosystem models (and assimilation), detection of plankton and harmful algal blooms, monitoring of eutrophication, etc. Use of ocean colour data in coastal seas is, however, more challenging, as explained below.
2.6.2 Measurement Principles

Sunlight is not merely reflected from the sea surface.
The colour of the water surface results from sunlight that has entered the ocean, been selectively absorbed, scattered and reflected by phytoplankton and other suspended material in the upper layers, and then backscattered through the surface. The subsurface reflectance R(λ) (the ratio of subsurface upwelled or water-leaving radiance to incident irradiance), which is the ocean signal measured by a satellite, is proportional to b(λ)/[a(λ) + b(λ)] or b(λ)/a(λ), where b(λ) is the backscattering and a(λ) the absorption of the different water constituents. Sunlight backscattered by the atmosphere (aerosols and molecular/Rayleigh scattering) actually contributes more than 80% of the radiance measured by a satellite sensor at visible wavelengths. Atmospheric correction is calculated from additional measurements in the red and near-infrared spectral bands. Ocean water reflects very little radiation at these longer wavelengths (the ocean is close to a black body in the infra-red) and the radiance measured is thus due almost entirely to scattering by the atmosphere. Unlike observations at infrared or microwave frequencies, for which emission is from the sea surface only, ocean colour signals in the blue-green can come from depths as great as 50 m. Sources of ocean colour variations include:

• Phytoplankton and its pigments
• Dissolved organic material
  − Coloured Dissolved Organic Material (CDOM or yellow matter) is derived from decaying vegetable matter (land) and phytoplankton degraded by grazing or photolysis.
• Suspended particulate matter (SPM)
  − The organic particulates (detritus) consist of phytoplankton and zooplankton cell fragments and zooplankton fecal pellets.
  − The inorganic particulates consist of sand and dust created by erosion of land-based rocks and soils (from river runoff, deposition of wind-blown dust, wave or current suspension of bottom sediments).

Colour can tell us about relative and absolute concentrations of those water constituents which interact with the light. Hence we measure chlorophyll, yellow substance and sediment load. It is difficult to distinguish independently varying water constituents:

• Case 1 waters are those where the phytoplankton population dominates the optical properties (typically the open sea). Only one component modulates the radiance spectrum backscattered from the water (phytoplankton pigment). The concentration range is 0.03–30 mg m−3. In the near IR, blue water is nearly black, so atmospheric correction based on IR measurements is relatively simple. Using green/blue ratio algorithms for chlorophyll, of the form Chla = A(R550/R490), provides an accuracy for Chla of about ±30% in the open ocean (a minimal sketch of such a ratio algorithm is given at the end of this subsection).
• Case 2 waters are those where other factors (CDOM, SPM) are also present. There are multiple independent components in the water which influence the backscattered radiance spectrum. The retrieval procedure has to deal with these multiple components, even if only one is to be determined. At high total suspended matter concentrations, problems also occur with the atmospheric correction. More complex algorithms (e.g. neural networks) and more frequencies are thus required. Although this remains a challenging task, much progress has been made over the past five years. Useful estimates of Chla and SPM can thus be obtained in the coastal zone (e.g. Gohin et al. 2005).
Ocean colour can also provide information on phytoplankton functional types, as changes in phytoplankton composition can lead to changes in absorption and backscattering coefficients. This is an area of active research, but the first results are already promising.

An ocean colour satellite should have a minimum number of bands spanning 400–900 nm. The role of the various bands is:

• 413 nm: discrimination of CDOM in open sea blue water.
• 443, 490, 510, 560 nm: chlorophyll retrieval from blue-green ratio algorithms.
• 560, 620, 665 nm and others: potential to retrieve water content in turbid Case 2 waters using new red-green algorithms.
• 665, 681, 709 nm and others: use of the fluorescence peak for chlorophyll retrieval.
• 779 and 870 nm: atmospheric correction, plus another band above 1,000 nm to improve the correction over turbid water.
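As a simple illustration of the Case 1 green/blue ratio approach mentioned above, the sketch below applies a power-law band-ratio retrieval. The coefficients are placeholders chosen only to give plausible orders of magnitude; operational Case 1 algorithms are polynomial forms tuned against large in-situ bio-optical datasets.

```python
import numpy as np

def chla_band_ratio(r550, r490, a=10.0, b=3.0):
    """Illustrative green/blue reflectance-ratio chlorophyll retrieval (mg m-3).

    r550, r490 : reflectances at ~550 and ~490 nm
    a, b       : placeholder power-law coefficients (assumed here); clearer
                 (bluer) water gives a smaller ratio and hence lower Chla.
    """
    return a * (r550 / r490) ** b

# Hypothetical reflectances: oligotrophic blue water vs more productive water
print(chla_band_ratio(np.array([0.002, 0.004]), np.array([0.010, 0.006])))
```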
2.6.3 Processing Issues

The processing transforms the Level 1 data (normalized radiances observed by the ocean colour radiometer) into geophysical properties corrected for atmospheric effects. Level 2 products include water leaving radiances at different wavelengths, chlorophyll-a concentration of the surface water (usually with Case 1 and Case 2 algorithms), total suspended matter (TSM), coloured dissolved and detrital organic materials (CDOM), the diffuse attenuation coefficient (K) and PAR. Merging of several ocean colour satellites is needed to improve the daily ocean coverage. This requires combining data from individual sensors with different viewing geometries, resolutions and radiometric characteristics (Pottier et al. 2006; Mélin and Zibordi 2007; IOCCG 2007). The availability of merged datasets allows users to exploit a unique, quality-consistent time series of ocean colour observations without being concerned with the performance of individual instruments.
2.6.4 Operational Oceanography Requirements

The needs and the broad classes of colour sensor are summarised in Tables 2.5 and 2.6, from Le Traon et al. (2006). They distinguish categories of use between the needs of the open ocean forecasting models, the finer scale shelf sea and local models, and those operational end users who analyse the data directly rather than through assimilation into a model system.
Table 2.5 User requirements for ocean colour data products

1. Assimilation into operational open ocean models
   Optical class of water: Case 1
   Variables needed (accuracy %): Chlor (30), K (5), PAR (5), Lw(λ) (5)
   Spatial resolution: 2–4 km. Revisit time: 1–3 days

2. Ingestion in operational shelf sea and local models
   Optical class of water: Case 2
   Variables needed (accuracy %): K (5), PAR (5), Lw(λ) (5), Chlor (30), TSM (30), CDOM (30)
   Spatial resolution: 0.5–2 km. Revisit time: 1 day desired, but 3–5 days useful

3. Data products used directly by marine managers in shelf seas
   Optical class of water: Case 2
   Variables needed (accuracy %): K (5), PAR (5), Lw(λ) (5), Chlor (30), TSM (30), CDOM (30)
   Spatial resolution: 0.25–1 km. Revisit time: 1 day desired, but 3–5 days useful

4. Global ocean climate monitoring
   Optical class of water: Case 1
   Variables needed (accuracy %): Chlor (10–30), K (5), PAR (5)
   Spatial resolution: 5–10 km. Revisit time: 8 day average

5. Coastal ocean climate monitoring
   Optical class of water: Case 2
   Variables needed (accuracy %): Chlor (10–30), TSM (10–30), CDOM (10–30), PAR (5), K (5)
   Spatial resolution: 5 km. Revisit time: 8 day average

6. Coastal and estuarine water quality monitoring
   Optical class of water: Case 2
   Variables needed (accuracy %): Lw(λ) (5)
   Spatial resolution: 0.1–0.5 km. Revisit time: 0.5–2 h

Table 2.6 Classes of ocean colour sensor

Class   Orbit           Sensor type                                                  Revisit time   Spatial resolution   Priority
A       Polar           SeaWiFS type multispectral scanner, 5–8 Vis-NIR wavebands    3 days         1 km                 High
B       Polar           Imaging spectrometer (MERIS/MODIS type)                      3 days         0.25–1 km            High
C       Geostationary   Radiometer or spectrometer—feasibility to be determined      30 min         100 m–2 km           Medium
There is a variety of additional products desired in coastal waters depending on the local water character. These include the coloured dissolved organic material (CDOM) and the discrimination of different functional groups of phytoplankton.
Some operational users prefer to use the atmospherically corrected water-leaving radiance Lw(λ) (defined over the spectrum of given wavebands) directly, applying their own approach for deriving water quality information or for confronting a model. Climate applications (categories 4 and 5) are envisaged to be derived from the operational categories 1 and 2 respectively, trading spatial and temporal resolution for improved accuracy. Category 6 is included in Table 2.5 to represent those users needing to monitor estuarine processes in fine spatial detail and to resolve the variations within the tidal cycle. This is a much more demanding category than the others. A Class A simple SeaWiFS-like instrument with a resolution of 1 km and a set of 5 or 6 wavebands would be adequate for user categories 1 and 4, to monitor global chlorophyll for assimilation into open ocean ecosystem models and for monitoring global primary production. It would fail to meet the main requirement to monitor water quality in coastal and shelf seas represented by user categories 2 and 3; these require a Class B imaging spectrometer sensor. In order to satisfy the ocean colour measurement requirements for operational oceanography, the minimum requirement is for one Class B sensor and at least one other sensor (Class A, B or C). The Class C sensor corresponds to an imaging spectrometer on a geostationary platform. As well as uniquely serving user category 6 by resolving variability within the tidal cycle, it also serves other user categories in cloudy conditions by exploiting any available cloud windows that occur during the day.
2.6.5 Conclusions

Although ocean colour is increasingly used for operational applications (e.g. water quality), its development lags behind other remote sensing methods because it is inherently difficult to retrieve ocean variables accurately and confidently. The potential of ocean colour measurements to calibrate or improve global, regional and coastal biogeochemical models is, however, considerable, and their information content is very rich. We are only beginning to use ocean colour products in ocean models; exploiting them fully is a scientific and technical challenge and should be a high priority research topic for operational oceanography.
2.7 Other Techniques

2.7.1 Synthetic Aperture Radar

SAR is an active instrument that transmits and receives electromagnetic radiation. It operates at microwave (radar) frequencies, with wavelengths in the range of 2–30 cm corresponding to frequencies between 1 and 15 GHz.
SAR works in the presence of clouds, day and night. The synthetic aperture principle is to generate a very long antenna through the motion of the platform; for ASAR the length of the synthetic antenna is approximately 20 km. This leads to very high resolution. The surface roughness is the source of the backscatter of the SAR signal. The signal that arrives at the antenna is registered in both amplitude and phase. Although the SAR sees only the Bragg waves (λB = λ/(2 sin θ), where θ is the incidence angle, λ the radar wavelength and λB the resonant Bragg wavelength), these waves are modulated by a large number of upper ocean and atmospheric boundary layer phenomena. This is why SAR images reveal the wave field, wind field, currents, fronts, internal waves and oil spills. They also provide high resolution images of sea ice (see next section).
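The Bragg resonance condition quoted above is easy to evaluate; the example below uses a nominal C-band wavelength and incidence angle simply to show the order of magnitude of the resonant surface ripples.

```python
import numpy as np

def bragg_wavelength(radar_wavelength_m, incidence_angle_deg):
    """Resonant Bragg wavelength: lambda_B = lambda / (2 sin(theta))."""
    return radar_wavelength_m / (2.0 * np.sin(np.radians(incidence_angle_deg)))

# A ~5.6 cm (C-band) radar at 23 deg incidence resonates with ~7 cm ripples.
print(bragg_wavelength(0.056, 23.0))
```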
2.7.2 Sea Ice

Passive microwave (PM) data from the SSM/I instrument are the backbone of operational sea ice observations. Daily Arctic and Antarctic analyses of ice concentration are delivered in near real time by operational centers such as NCEP and the OSI SAF. These datasets are today assimilated in operational ocean model systems. Improved resolution and more detailed ice edge estimates are obtained by use of scatterometer data (e.g. QuikSCAT) and new PM data from AMSR-E. Ice drift information based on successive satellite passes of these instruments is also assimilated in ocean/ice models. High resolution sea ice information is derived from SAR data and from images from optical and IR instruments. Operational services for offshore industry, shipping and safety in polar regions rely on regular iceberg detection and on monitoring of sea ice type, extent and deformation at a spatial resolution (~50–100 m) that is only feasible with spaceborne SAR. Although ice coverage and ice motion are well observed, there is still a lack of regular information about variations in ice volume. The ice thickness measurements from the advanced altimeter on Cryosat-II (launched in April 2010) are therefore very welcome.
2.7.3 Satellite Winds

Scatterometers (e.g. SeaWinds/QuikSCAT, ASCAT/MetOp) are radars operating at C or Ku band. The main ocean parameters measured are wind speed and direction; they also provide useful information on sea ice roughness. The measurement principle is based on resonant Bragg scattering. For a smooth surface, oblique viewing with an active radar yields virtually no return. When the wind increases, so do the surface roughness and the signal reflected back towards the satellite sensor. The wind direction can be derived from the azimuthal dependence of the reflected signal with respect to the wind direction.
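The azimuthal dependence exploited for wind-direction retrieval is commonly written as a truncated harmonic expansion of the normalized radar cross-section. The sketch below uses an idealised model function with made-up coefficients; it is not the operational C- or Ku-band geophysical model function, only an illustration of why several azimuth looks at the same spot constrain both wind speed and direction.

```python
import numpy as np

def sigma0_model(wind_speed_ms, rel_azimuth_deg, a=0.01, b=0.3, c=0.4, gamma=1.6):
    """Idealised scatterometer model function (purely illustrative coefficients).

    Backscatter grows with wind speed and is modulated by the angle between the
    wind direction and the radar look direction via cos(phi) and cos(2*phi).
    """
    phi = np.radians(rel_azimuth_deg)
    return a * wind_speed_ms ** gamma * (1.0 + b * np.cos(phi) + c * np.cos(2.0 * phi))

# The same 8 m/s wind viewed from three azimuths yields three different sigma0
# values, which a retrieval scheme inverts for both speed and direction.
for az in (0.0, 45.0, 90.0):
    print(az, sigma0_model(8.0, az))
```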
To enhance the spatial and temporal resolution of surface winds, several attempts have been made to merge the remotely sensed data with operational NWP wind analyses over the global oceans. More details about the data and processing methods can be found in Bentamy et al. (2007).
2.7.4 A New Challenge: To Estimate Sea Surface Salinity from Space

At L-band (1.4 GHz), the brightness temperature (BT) is mainly determined by ocean surface emission (the atmosphere is almost transparent): BT = e·SST = (1 − R)·SST, where BT is the brightness temperature and e the sea surface emissivity. R(θ, SSS, SST, U, …) is the reflection coefficient (see Sect. 2.3). R depends on sea water permittivity and thus on sea surface salinity. The sensitivity is maximum at L-band; it is, however, very low (0.2–0.8 K/psu) and increases with sea surface temperature. The SMOS satellite was launched in November 2009. It is an L-band radiometer that measures brightness temperature at different incidence angles (0–60°). SMOS is a synthetic aperture radiometer which provides high spatial resolution (~40 km) with a single-retrieval precision of about 1 psu. An SSS accuracy of 0.1–0.2 psu over 200 km × 200 km boxes and 10 days is achieved through averaging of individual measurements. The Aquarius satellite will be launched in 2011. It is a conventional L-band radiometer operating at three incidence angles; Aquarius also includes an L-band scatterometer to correct for sea surface roughness effects.
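The need for heavy averaging follows directly from the weak radiometric sensitivity quoted above. Assuming independent retrieval errors (an idealisation), the standard error decreases with the square root of the number of samples:

```python
import numpy as np

def averaged_sss_error(single_retrieval_error_psu, n_samples):
    """Standard error of the mean of N independent SSS retrievals."""
    return single_retrieval_error_psu / np.sqrt(n_samples)

# With ~1 psu noise on an individual retrieval, of order 25-100 independent
# samples in a 200 km x 200 km, 10-day box reach the 0.1-0.2 psu target.
for n in (25, 100):
    print(n, averaged_sss_error(1.0, n))
```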
2.8 Concluding Remarks

This chapter provides only a very brief summary of ocean remote sensing measurement principles. More information can be found in the books by Fu and Cazenave (2001), Robinson (2004) and Martin (2004). Satellite data play a fundamental role in operational oceanography. They are mandatory to constrain ocean models through data assimilation and they provide directly usable data products for applications. Over the past 10 years, new and improved data sets and products needed by the modeling and data assimilation systems and by applications have been developed. The accuracy and timeliness of products have been improved. This has resulted in a larger and more systematic assimilation of satellite data into ocean models. Sampling and error characteristics and measurement content must be well understood, however, for proper use in ocean models. In-situ data are also mandatory to calibrate, validate and complement satellite observations. There is still a series of advances in satellite oceanography that are expected to impact operational oceanography and its applications:

• Continuous data processing improvements are needed so that data sets and products evolve according to requirements from modeling and data assimilation systems (including error characterization).
• New satellite missions for SSS (SMOS, Aquarius), gravity (GOCE) and high resolution altimetry (SWOT) will likely have a major impact on operational oceanography.
• Better management of the huge amount of data coming from various instruments is needed. We need to exploit the data in an efficient way. New tools to search, process and visualize data from different sources are required.
• We are not fully exploiting the information content of satellite observations. Most observations are not yet sufficiently explored and used in ocean models. Synergy between observations (satellite, in-situ), models and new theories should be developed further. This is needed, in particular, to better exploit the high resolution information in satellite observations (e.g. Isern-Fontanet et al. 2006).
2.9 Useful URLs

This is a non-exhaustive list of WWW sites where general information, data sets and products, software and toolboxes for satellite oceanography missions can be obtained.

Information on existing and future satellite missions:
CEOS WWW site: http://www.eohandbook.com/

Satellite altimetry:
http://www.aviso.oceanobs.com
http://topex-www.jpl.nasa.gov

Ocean colour:
http://www.ioccg.org
http://oceancolour.gsfc.nasa.gov
http://www.globcolour.info

Sea surface temperature:
http://www.ghrsst.org
http://www.remss.com

Multi-mission satellite data processing and distribution centers or facilities:
http://www.aviso.oceanobs.com/
http://podaac-www.jpl.nasa.gov/
http://www.myocean.eu.org/
http://cersat.ifremer.fr/
http://www.osi-saf.org/

Software and toolboxes:
• The European Space Agency has developed a series of toolboxes to facilitate the visualization and processing of satellite observations (ocean colour, SST, altimetry, SAR, gravimetry). http://earth.esa.int/resources/softwaretools
• SeaDAS is a comprehensive NASA image analysis package for the processing, display, analysis, and quality control of ocean colour data. http://oceancolour.gsfc.nasa.gov/seadas
• Supported by UNESCO, Bilko is a complete system for learning and teaching remote sensing. http://www.noc.soton.ac.uk/bilko
References

Bentamy A, Ayina H, Queffeulou P, Croize-Fillon D, Kerbaol V (2007) Improved near real time surface wind resolution over the Mediterranean sea. Ocean Sci 3(2):259–271
Bowen M, Emery WJ, Wilkin J, Tildesley P, Barton I, Knewtson R (2002) Extracting multi-year surface currents from sequential thermal imagery using the maximum cross correlation technique. J Atmos Ocean Technol 19:1665–1676
Chelton DB (2005) The impact of SST specification on ECMWF surface wind stress fields in the eastern tropical Pacific. J Clim 18:530–550
Chelton DB, Ries JC, Haines BJ, Fu LL, Callahan P (2001) Satellite altimetry. In: Fu LL, Cazenave A (eds) Satellite altimetry and earth sciences. Academic Press, San Diego
Clark C, Wilson W (2009) An overview of global observing systems relevant to GODAE. Oceanogr Mag 22(3):22–33 (Special issue on the revolution of global ocean forecasting—GODAE: ten years of achievement)
Donlon C, Robinson IS, Reynolds M, Wimmer W, Fisher G, Edwards R, Nightingale TJ (2008) An infrared sea surface temperature autonomous radiometer (ISAR) for deployment aboard Volunteer Observing Ships (VOS). J Atmos Ocean Technol 25:93–113
Donlon CJ, Casey KS, Robinson IS, Gentemann CL, Reynolds RW, Barton I, Arino O, Stark J, Rayner N, LeBorgne P, Poulter D, Vazquez-Cuervo J, Armstrong E, Beggs H, Llewellyn-Jones D, Minnett PJ, Merchant CJ, Evans R (2009) The GODAE high-resolution sea surface temperature pilot project. Oceanogr 22(3):34–45
Ducet N, Le Traon PY, Reverdin G (2000) Global high resolution mapping of ocean circulation from the combination of TOPEX/POSEIDON and ERS-1/2. J Geophys Res 105(C8):19477–19498
Fu LL, Cazenave A (2001) Satellite altimetry and earth sciences. Academic Press, San Diego
Gohin F, Loyer S, Lunven M, Labry C, Froidefond JM, Delmas D, Huret M, Herbland A (2005) Satellite-derived parameters for biological modelling in coastal waters: illustration over the eastern continental shelf of the Bay of Biscay. Remote Sens Environ 95(1):29–46
Guinehut S, Le Traon PY, Larnicol G (2006) What can we learn from global altimetry/hydrography comparisons? Geophys Res Lett 33:L10604. doi:10.1029/2005GL025551
Guinehut S, Coatanoan C, Dhomps A-L, Le Traon PY, Larnicol G (2008) On the use of satellite altimeter data in Argo quality control. J Atmos Ocean Technol 26(2):395–402
IOCCG (2007) Ocean colour data merging. In: Gregg WW (ed), with contributions by Gregg W, Aiken J, Kwiatkowska E, Maritorena S, Mélin F, Murakami H, Pinnock S, Pottier C. IOCCG monograph series, report no. 6, p 68
IOCCG (2008) Why ocean colour? The societal benefits of ocean-colour technology. In: Platt T, Hoepffner N, Stuart V, Brown C (eds) Reports of the International Ocean-Colour Coordinating Group, No. 7. IOCCG, Dartmouth, p 141
Isern-Fontanet J, Chapron B, Lapeyre G, Klein P (2006) Potential use of microwave sea surface temperatures for the estimation of ocean currents. Geophys Res Lett 33:L24608. doi:10.1029/2006GL027801
Le Traon PY, Ogor F (1998) ERS-1/2 orbit improvement using TOPEX/POSEIDON: the 2 cm challenge. J Geophys Res 103:8045–8057
Le Traon PY, Nadal F, Ducet N (1998) An improved mapping method of multisatellite altimeter data. J Atmos Ocean Technol 15:522–533
Le Traon PY, Rienecker M, Smith N, Bahurel P, Bell M, Hurlburt H, Dandin P (2001) Operational oceanography and prediction—a GODAE perspective. In: Koblinsky CJ, Smith NR (eds) Observing the oceans in the 21st century. GODAE project office, Bureau of Meteorology, Melbourne, pp 529–545
Le Traon PY, Johannessen J, Robinson I, Trieschmann O (2006) Report from the Working Group on space infrastructure for the GMES marine core service. GMES Fast Track Marine Core Service Strategic Implementation Plan, Final Version, 24/04/2007
Le Traon PY, Larnicol G, Guinehut S, Pouliquen S, Bentamy A, Roemmich D, Donlon C, Roquet H, Jacobs G, Griffin D, Bonjean F, Hoepffner N, Breivik LA (2009) Data assembly and processing for operational oceanography: 10 years of achievements. Oceanogr Mag 22(3):56–69 (Special issue on the revolution of global ocean forecasting—GODAE: ten years of achievement)
Martin S (2004) An introduction to ocean remote sensing. Cambridge University Press, Cambridge. ISBN-13: 9780521802802, ISBN-10: 0521802806
Mélin F, Zibordi G (2007) An optically-based technique for producing merged spectra of water leaving radiances from ocean colour remote sensing. Appl Opt 46:3856–3869
Mitchum GT (2000) An improved calibration of satellite altimetric heights using tide gauge sea levels with adjustment for land motion. Mar Geod 23:145–166
O'Carroll AG, Eyre JR, Saunders RW (2008) Three-way error analysis between AATSR, AMSR-E, and in situ sea surface temperature observations. J Atmos Ocean Technol 25:1197–1207
Oke PR, Balmaseda MA, Benkiran M, Cummings JA, Fujii Y, Guinehut S, Larnicol G, Le Traon PY, Martin MJ, Dombrowsky E (2009) Observing system evaluation. Oceanogr Mag 22(3):144–153 (Special issue on the revolution of global ocean forecasting—GODAE: ten years of achievement)
Pascual A, Faugere Y, Larnicol G, Le Traon PY (2006) Improved description of the ocean mesoscale variability by combining four satellite altimeters. Geophys Res Lett 33(2):L02611. doi:10.1029/2005GL024633
Pascual A, Boone C, Larnicol G, Le Traon PY (2009) On the quality of real time altimeter gridded fields: comparison with in situ data. J Atmos Ocean Technol 26:556–569
Pottier C, Garçon V, Larnicol G, Sudre J, Schaeffer P, Le Traon PY (2006) Merging SeaWiFS and MODIS/Aqua ocean colour data in north and equatorial Atlantic using weighted averaging and objective analysis. IEEE Trans Geosci Remote Sens 44:3436–3451
Robinson I (2004) Measuring the oceans from space: the principles and methods of satellite oceanography. Springer, Berlin, p 669
Smith N, Lefebvre M (1997) The global ocean data assimilation experiment (GODAE). Paper presented at Monitoring the Oceans in the 2000s: an integrated approach. Biarritz, France, 15–17 Oct 1997
Chapter 3
In-Situ Ocean Observing System

Muthalagu Ravichandran
Abstract  Ocean observing systems consist of in-situ and satellite-based techniques to detect, track, and predict changes in physical, chemical, geological and biological processes. In-situ observing systems include both Eulerian (fixed-location) and Lagrangian (moving with time) components. The elements of the in-situ observing system are described in terms of their measurement principles, their capability to observe the ocean, the technology involved and some applications pertaining to physical variables. A brief status of the Indian Ocean Observing System (IndOOS) is also given. The strengths and weaknesses of each platform and the need for integrating different observational platforms and sensors are highlighted.
3.1 Introduction

Knowledge of the ocean is essential for many stakeholders dealing with climatology, fisheries, ports and harbours, coastal zone management, navy and coast guard organizations, public health institutions, environmental agencies, the tourism industry, weather forecasters, offshore mining and oil industries and climate research. Ocean observing systems have a central role in delivering ocean services to society. However, data produced by these systems need to be translated into ocean information services by analysis systems and also assimilated into ocean general circulation models to deliver the past, present and future state of the ocean as well as the different products required by user agencies. A distributed or centralized data management system is critical to timely delivery of ocean services. Ocean observation systems consist of (a) in-situ measurements, using sensors mounted on ships, buoys, moorings and coastal stations to capture changes in time and depth at specific points or along tracks, and (b) remote sensing systems such as satellites, aircraft, radar, etc. to capture the spatial and temporal variations synoptically, as manifested at the surface.
Remote sensing in general and satellite measurements in particular (Le Traon PY, this volume) provide horizontal distributions of surface variables, such as temperature, sea surface height and ocean colour, as well as several meteorological parameters for the calculation of air-sea momentum, heat and fresh water fluxes (Masumoto et al. 2009). These satellite data enable studies of phenomena across a very wide range of time scales, from intraseasonal to decadal, and complement the in-situ observing systems. Ocean observations also help answer some fundamental research questions, such as those identified in National Science Foundation (NSF) reports (NSF 2001; Koblinsky and Smith 2001): (a) determining the role of the ocean in climate and climate change, (b) quantifying the exchange of heat, water, momentum and gases between the ocean and atmosphere, (c) determining the cycling of carbon in the oceans and the role of the oceans in moderating the increase in atmospheric carbon dioxide, (d) improving models of ocean mixing and large-scale ocean circulation, (e) understanding the patterns and controls on biological diversity in the oceans, (f) determining the origin, development and impact of episodic coastal events such as harmful algal blooms, (g) assessing the health of the coastal ocean, (h) determining the nature and extent of microbial life in the deep crustal biosphere, (i) studying subduction zone thrust faults that may result in large, tsunami-generating earthquakes and (j) improving models of global earth structure and core-mantle dynamics.

Climate research became a major focus of scientific discussion by the latter half of the twentieth century, especially after the identification of the impact of greenhouse gases and global warming on Earth's climate system. Many countries, both developed and developing, are spending considerable amounts of their resources on climate research so that governments and society can take appropriate steps in planning and development. A sustained observation program to detect, track, and predict changes in physical, chemical, geological and biological systems and their effects is needed to measure the impacts of human activity on the ocean. The ocean, comprising over 70% of the surface of the planet, is currently monitored far less effectively and completely than terrestrial systems, yet humans depend strongly on the sea as a source of food and for transportation and trade, among many other uses. Further, the ocean strongly affects large-scale weather patterns, such as the El Niño–Southern Oscillation (ENSO) and the Indian Ocean Dipole (IOD). In order to understand and ultimately predict how ocean-atmosphere interaction affects weather and climate, and how human activities affect both the physical system and living marine resources, an integrated ocean observing system is needed to monitor the 'state' of the ocean. Just as continuous measurements of weather and climatic conditions are maintained on land, sustained measurements of the ocean are required to monitor change and to assist in understanding and predicting its impacts. There are two different classes of in-situ observing systems: those based on fixed points (Eulerian) and those whose location varies with time (Lagrangian).
Fixed point observations are made either from moorings or from repeated occupation of stations. Observations whose location varies with time are made from platforms that move as a result of the motion of the ocean or of a moving vessel. Some moving platforms are thought to follow the motion of water parcels fairly well. Successful operation of a global in-situ observing system requires coordination of activities on a number of levels. Sensor standards and best practices learned from other experiences need to be agreed upon. Deployment opportunities need to be identified and instruments delivered to take advantage of them; where no opportunistic deployment is feasible, timely provision of special deployment efforts needs to be made. The data coverage of the system needs to be monitored along with sensor lifetimes, and provision made to anticipate where gaps will appear so that deployments can be arranged. Successful implementation depends fundamentally upon near-real time transmission of both observations and relevant metadata. Given that a number of nations participate in each of the observing networks and that both 'operational' and 'research' programs are involved, this monitoring and system management function is non-trivial and critical (Clark and Wilson 2009).

Though some ocean processes can be addressed and described using local observations, many processes need to be addressed using observations from other locations, since remote forcing may play an important role. Accounting for remote forcing effects would require observing all basins, but no country can afford to maintain observations in all basins. Hence, many national and regional programs are networked through the United Nations. The Global Ocean Observing System (GOOS) is the oceanographic component of the Global Earth Observing System of Systems (GEOSS). It is a system of programmes, each working on different and complementary aspects, for establishing an ocean observation capability for all of the world's nations. UN sponsorship and UNESCO assemblies assure that international cooperation is always the first priority of the Global Ocean Observing System. GOOS is designed to (1) monitor, understand and predict weather and climate, (2) describe and forecast the state of the ocean, including living resources, (3) improve management of marine and coastal ecosystems and resources, (4) mitigate damage from natural hazards and pollution, (5) protect life and property on coasts and at sea and (6) enable scientific research. GOOS is sponsored by the Intergovernmental Oceanographic Commission (IOC), the United Nations Environment Program (UNEP), the World Meteorological Organisation (WMO) and the International Council for Science (ICSU), and implemented by member states via their government agencies, navies and oceanographic research institutions working together in a wide range of thematic panels and regional alliances. More detail about GOOS can be found at http://www.ioc-goos.org/. The Joint Technical Commission for Oceanography and Marine Meteorology (JCOMM) of the WMO and IOC provides coordination at the international level for oceanographic and marine observations from all in-situ observing systems. The present status and locations of the different elements of the in-situ observing system are available at http://wo.jcommops.org/cgi-bin/WebObjects/JCOMMOPS.
An in-situ observing system consists of many elements, such as tide gauges, ship-based marine meteorology from Voluntary Observing Ships (VOS), XBT/XCTD sections from Ships of Opportunity (SOOP), repeat hydrography, drifting and moored buoys, acoustic tomography, Argo profiling floats, gliders, etc. Each element has advantages and disadvantages in terms of temporal and spatial resolution. Integrating all the elements, and sustaining and improving the different components of the observing system to meet evolving needs for societal benefits, is imperative for the ocean observing system. Though the sensors on these platforms record primarily physical variables, the time has come for a multi-disciplinary approach to understand the total system. In the following sections, the elements of the different observing systems pertaining to physical variables are explained in terms of their capability to observe the ocean, the technology and some of their applications. The implementation plan for the poorly observed Indian Ocean is outlined briefly in Sect. 3.3. The strengths and weaknesses of each platform, and concluding remarks emphasizing the need for an optimal mix of different in-situ platforms to deliver meaningful information, are presented in Sect. 3.4.
3.2 Elements of Observing System

3.2.1 Tide Gauges

Observers of the ocean have measured changes in sea level since ancient times in order to understand the mechanisms responsible for phenomena such as the tides and the catastrophic floods caused by storms and tsunamis. It is now realized that sea level changes are important on all timescales, from seconds (due to wind waves) through to millions of years (due to the movement of continents). The devices employed to measure sea level changes (relative to the level of the land where the instrument is located) are usually called tide gauges. They are based on the principle of the well-known float gauge in a stilling well, on the measurement of subsurface pressure, or on the time-of-flight of a pulse of sound or of radar. The classical and most reliable method of measuring sea level is the tide staff, but it is prone to manual errors. Subsequently, float-based tide gauges were used extensively for a long time; however, such systems require supporting structures, shelters and regular maintenance. The other commonly used type is the pressure sensor gauge (differential or absolute), in which the sensor is mounted directly in the sea. This requires knowledge of atmospheric pressure (in the case of an absolute pressure sensor), seawater density and gravitational acceleration to convert from pressure to sea level. Despite these limitations, such instruments have many practical advantages as sea level recorders. In the late 1990s, radar devices, which were mainly used in process technology, were introduced into hydrometry. Though satellite altimeters provide sea level anomalies in the open ocean with coarse temporal resolution, the information from gauges is essential for understanding local mean sea level trends and extremes. Gauge data are also required to provide precise calibration of radar altimetry. Beyond this, tide gauges have a long history and a healthy future (IOC manual 2006), with many applications in both operational and scientific research.
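For a pressure-based gauge, the conversion mentioned above is a simple hydrostatic relation; the density and gravity values below are nominal assumptions, whereas an operational gauge uses locally determined values.

```python
def pressure_to_sea_level(p_sensor_pa, p_atm_pa, rho=1025.0, g=9.81):
    """Water height (m) above a subsurface pressure sensor: h = (p - p_atm)/(rho g)."""
    return (p_sensor_pa - p_atm_pa) / (rho * g)

# A sensor reading 131.3 kPa under a 101.3 kPa atmosphere sits under ~3 m of water.
print(pressure_to_sea_level(131_300.0, 101_300.0))
```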
The observed sea level is the sum of components driven by different geophysical forcings: the mean sea level, a tidal signal and meteorological residuals. Each of these components is controlled by separate physical processes, and the variations of each part are essentially independent of the variations in the other parts. Tides are the periodic movements of the seas which have a coherent amplitude and phase relationship to some periodic geophysical force. The dominant forcing is the variation in the gravitational field on the surface of the earth due to the regular movements of the earth–moon and earth–sun systems; these cause gravitational tides. There are also weak tides generated by periodic variations of atmospheric pressure and on-shore/off-shore winds, which are called atmospheric tides. Meteorological residuals are the non-tidal components of sea level which remain after removing the tides by analysis. They are irregular, as are the variations in the weather. Sometimes the term "surge residual" is used, but more commonly surge describes a particular event during which a very large non-tidal component is generated. Mean sea level is the average level of the sea, usually based on hourly values taken over a period of at least a year; for geodetic purposes the mean sea level may be taken over several years. More elaborate techniques of analysis allow the energy in sea level variations to be split into a series of frequency or spectral components. The main concentration of energy is in the semidiurnal and diurnal tidal bands, but there is a continual background of meteorological energy which becomes more important at longer periods or lower frequencies.

The Global Sea Level Observing System (GLOSS) (http://www.gloss-sealevel.org/) was established in 1985 by the IOC to provide oversight and coordination for global and regional sea level networks in support of oceanographic and climate research. GLOSS remains under the auspices of the IOC and is one of the observing components of JCOMM. GLOSS is an example of a global coastal observing network and has the largest participation of member states (~70) among the existing observing elements in GOOS. Tide gauge data from the GLOSS networks are assembled and archived at two data centers (Merrifield et al. 2009). The British Oceanographic Data Centre (BODC, http://www.bodc.ac.uk/) is responsible for delayed mode datasets. The main archive for historic, monthly-averaged sea level records from tide gauges around the world is the Permanent Service for Mean Sea Level (PSMSL, http://www.pol.ac.uk/psmsl/) (Woodworth and Player 2003). Figure 3.1 shows the present reporting status of the sea level gauges in the GLOSS Core Network (Merrifield et al. 2009). Estimates of twentieth century sea level rise are primarily based on the historical tide gauge data maintained by the PSMSL. Church et al. (2004) estimated monthly distributions of large scale sea level variability and change over the period 1950–2000 using historical tide gauge data and altimeter data sets. Annual averages of the global mean sea level derived from analyses of tide gauges show a global rise of 1.8 ± 0.3 mm/year during 1950–2000. Tide gauges have also been used to monitor the stability of satellite altimeter sea surface height observations and long term sea level trends at coastal stations, and for navigation, hydrography, flood warning, tsunami warning and other coastal engineering applications.
Fig. 3.1 Status of reporting of the sea level gauges in the GLOSS Core Network in 2009. Near real-time stations (blue) provide data typically within 1 h of collection; fast delivery stations (green) within one month. Delayed mode low frequency data within 5 years (yellow) or greater (orange) include monthly averages provided to the Permanent Service for Mean Sea Level (PSMSL). (Source: Merrifield et al. 2009)
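The decomposition of a gauge record into a mean level, tidal constituents and a meteorological residual described above is typically done by least-squares harmonic analysis. The sketch below fits only two constituents to a synthetic hourly record; the M2 and S2 angular speeds are standard values, while the record itself and the restriction to two constituents are illustrative simplifications.

```python
import numpy as np

# Angular speeds in degrees per hour of two dominant semidiurnal constituents
CONSTITUENTS = {"M2": 28.9841042, "S2": 30.0}

def harmonic_analysis(t_hours, sea_level):
    """Least-squares fit of mean sea level plus a small set of tidal constituents.

    Returns the mean level and (amplitude, phase in degrees) per constituent;
    the fit residual is the non-tidal (mainly meteorological) signal.
    """
    cols = [np.ones_like(t_hours)]
    for speed in CONSTITUENTS.values():
        w = np.radians(speed) * t_hours
        cols += [np.cos(w), np.sin(w)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, sea_level, rcond=None)
    result = {"mean_sea_level": coef[0]}
    for i, name in enumerate(CONSTITUENTS):
        c, s = coef[1 + 2 * i], coef[2 + 2 * i]
        result[name] = (np.hypot(c, s), np.degrees(np.arctan2(s, c)))
    residual = sea_level - A @ coef
    return result, residual

# Synthetic 30-day hourly record: 0.5 m M2 tide on a 2.0 m mean level plus noise
t = np.arange(0.0, 24 * 30, 1.0)
eta = 2.0 + 0.5 * np.cos(np.radians(28.9841042) * t - 1.0) + 0.05 * np.random.randn(t.size)
print(harmonic_analysis(t, eta)[0])
```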
3.2.2 Voluntary Observing Ships

The Voluntary Observing Ships (VOS) scheme is an international programme comprising member countries of the WMO/IOC that recruit ships to take, record and transmit marine meteorological observations whilst at sea. The VOS Scheme is a core observing programme of the Ship Observations Team (SOT) in the Observations Programme Area of JCOMM. There are three types of ships in the VOS Scheme: selected ships, supplementary ships and auxiliary ships. A selected ship is equipped with sufficient certified meteorological instruments for making observations, transmits regular weather reports and enters the observations in meteorological logbooks; most VOS are selected ships. A supplementary ship is equipped with a limited number of certified meteorological instruments, transmits regular weather reports and enters the observations in meteorological logbooks. An auxiliary ship is without certified meteorological instruments and transmits reports in a reduced code or in plain language, either routinely or on request, in certain areas or under certain conditions. Auxiliary ships usually report from data-sparse areas outside the regular shipping lanes. Currently, VOS typically report at three- or six-hour intervals and make observations of surface wind speed and direction, air temperature, humidity, sea surface temperature (SST), atmospheric sea level pressure (SLP), cloud (including type, amount and height), wave and swell parameters and weather (including visibility) information.
The data are sent to a meteorological service as soon as they are obtained, either by radio telephony to a coastal radio station, by telex over radio, or by INMARSAT-C. Around 5,000 ships presently report marine meteorological parameters. Other observations, such as sea ice and precipitation, can also be reported. The temperature (air and SST), humidity and SLP are measured in situ by meteorological instruments, whilst waves, clouds and weather types are estimated visually. Wind reports are a mixture of measurements and visual estimates. The observations are transmitted in real time and also recorded in paper or, with increasing frequency, electronic logbooks. The electronic logbook software is also used to format manual observations, calculate derived quantities more uniformly (e.g. dewpoint, true wind) and perform simple quality control (Kent et al. 2009). Automated weather stations (AWSs) are being installed on VOS in increasing numbers, resulting in more frequent observations. However, a systematic programme of intercomparison with the traditional observations, to ensure data continuity in keeping with GCOS monitoring principles, is presently lacking. Moreover, a full high-quality AWS is expensive, and some national services install low cost systems making only a subset of the normal range of observations, typically SLP and one or two other variables. Some elements of the VOS report require manual input, typically the visual estimates. Convincing the observers that supplementing the reports with this vital information is worthwhile has proved challenging, and the introduction of AWSs has led to a marked decline in the proportion of reports containing these parameters. Adding the capability for manual input adds to both the cost and complexity of the systems and is not always judged to be cost-effective.

The surface ocean observing system has evolved rapidly over the past half century, from being primarily VOS-based through the 1960s, to comprising increasing numbers of moored and drifting buoy observations starting in the 1970s and particularly dominating the last decade. Kent et al. (2009) show an example of how the number of in-situ observations available in the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) has changed over time for selected variables, with the impact of the drifting buoys clearly visible for SST. These in-situ observations have been complemented by satellite measurements that began in the late 1970s. To meet the needs of applications such as weather forecasting, VOS observations are transmitted in real time to the National Meteorological and Hydrological Services (NMHSs), who then share the observations with other services using the Global Telecommunications System (GTS). Some NMHSs keep an archive of the data extracted from the GTS; however, these archives can differ between services due to differences in data conversion and storage formats and in the way the data are retrieved from the GTS. VOS data contain fairly large random uncertainties, but in many regions the mean uncertainty due to poor sampling is much larger (Kent and Berry 2008; Gulev et al. 2007). In well-sampled regions the random uncertainties in gridded datasets will be small, as many observations can be averaged. Sampling by multiple platforms gives the potential for extensive quality assurance, including near neighbour "buddy checks" and analysis of outliers.
Typically VOS grid box averages contain observations from multiple platforms, allowing random uncertainty and also ship-to-ship biases to be reduced by the averaging process.
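A minimal sketch of that averaging effect, assuming independent random errors of similar size on each report (systematic ship-to-ship biases only partially cancel and are ignored here):

```python
import numpy as np

def grid_box_mean(reports, per_report_random_error):
    """Grid-box mean of VOS reports and the random part of its uncertainty."""
    n = len(reports)
    return float(np.mean(reports)), per_report_random_error / np.sqrt(n)

# Hypothetical box: 16 SST reports with ~1 K random error each
rng = np.random.default_rng(0)
sst_reports = 18.0 + rng.normal(0.0, 1.0, size=16)
print(grid_box_mean(sst_reports, per_report_random_error=1.0))  # random part ~0.25 K
```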
Datasets and analyses based on ICOADS are highly cited in the literature and form an important resource for climate researchers, especially those interested in large-scale estimates of ocean-atmosphere exchange of heat, freshwater and momentum and in multidecadal climate variability. Datasets using VOS observations, in many cases based on the ICOADS collection, include SST, sea level pressure, air temperature and humidity, surface fluxes and surface waves. In addition, atmospheric model reanalyses, which are widely used for climate analysis, are heavily dependent on the assimilation of ship observations (Trenberth et al. 2009). National and international assessments of climate change, most prominently by the Intergovernmental Panel on Climate Change (IPCC), use VOS SST data in the assessment of global mean surface temperature changes. Confidence in the SST trend is increased by its consistency with the marine surface air temperature trend, which is an independent measurement. VOS are the major source of air temperature information over the ocean and also contribute to the monitoring of climate change, for example in the bias-adjustment of infrared satellite estimates of SST (e.g. Reynolds et al. 2005). VOS also provide a consistent record of cloud changes since 1949 and have been used to derive a century-long analysis of wave information. The continuing move to produce data products from VOS data in a timely manner should allow an enhanced climate monitoring role for the VOS, if sampling can be maintained or improved. However, VOS datasets are currently underutilized for calibration and validation. New higher-resolution datasets characterised by uncertainty estimates should have wide applications for calibration and validation.
3.2.3 Ships of Opportunity

The primary objective of the Ship-of-Opportunity Programme (SOOP) is to fulfill the XBT upper ocean data requirements established by the international scientific and operational communities, which can at present be met by measurements from ships of opportunity. The annual assessment of transect sampling is undertaken by the Joint WMO-IOC Technical Commission for Oceanography and Marine Meteorology (JCOMMOPS) on behalf of the Ship Of Opportunity Programme Implementation Panel (SOOPIP). Data management is handled through the Global Temperature Salinity Profile Programme (GTSPP) (Goni et al. 2009). The SOOP is directed primarily towards the continued operational maintenance and co-ordination of the XBT ship of opportunity network, but other types of measurements are also being made (e.g. TSG, XCTD, CTD, ADCP, pCO2, phytoplankton concentration). This network in itself supports many other operational needs (such as fisheries, shipping, defense, etc.) through the provision of upper ocean data for data assimilation in models and for various other ocean analysis schemes. One of the continuing challenges is to optimally combine upper ocean thermal data collected by XBTs with data collected from other sources such as mooring arrays, Argo, and satellites (e.g. AVHRR, altimeters, etc.).
to have the SOOP focused on supporting climate prediction in order to ensure the continued operation of the present network. The XBT (Expendable BathyThermograph) is an expendable temperature and depth profiling system. It typically comprises an acquisition system onboard the ship, a launcher, and an expendable temperature probe. The falling probe is linked to the acquisition system through a thin insulated conductive wire, which is used to transmit the temperature data back to the acquisition system in real time. Depth is deduced from elapsed time using a well-calibrated fall rate equation (about 6.5 m/s). Processed profile data can be transmitted in real time via satellite. The real-time data are archived at the Coriolis data centre, Brest, France, and the delayed-mode data at the GTSPP, managed by NOAA/NODC. Profiles as deep as 1,000 m, comprising (T, D) data points every metre, can be made, although with the usual probes depths range from 500 to 800 m. Accuracy is normally better than 5 m for depth, and better than 0.05°C for temperature. The global XBT network containing the OceanObs'99 recommendations and the transects proposed in OceanObs'09 is shown in Fig. 3.2. The scientific and operational communities deploy approximately 23,000 XBTs every year. In a typical year, 50% are deployed in the Pacific Ocean, 35% in the Atlantic Ocean and 15% in the Indian Ocean. Profiles from about 90% of the XBT deployments are transmitted in real time, which represents around 25% of the current real-time vertical temperature profile observations (not counting the continuous temperature profiles made by some moorings). XBTs are deployed in three modes: (a) High Density (HD): 4 transects per year, 1 XBT deployment approximately every 25 km (35 XBT deployments per day with a ship speed of 20 kts); (b) Frequently Repeated (FR): 12–18 transects per year, 6 XBT deployments per day (every 100–150 km); and (c) Low Density (LD): 12 transects per year, 4 XBT deployments per day. The HD transects extend from ocean boundary (continental shelf) to ocean boundary, with temperature profiling at spatial separations that vary from 10 to 50 km in order to resolve boundary currents and to estimate basin-scale geostrophic velocity and mass transport integrals. PX06 (Auckland to Fiji), which began in 1986, is the earliest HD transect in the present network, with more than 90 realizations. Some transects are being assessed for their contribution in this mode. For example, the CLIVAR IOP noted that further work is required to assess the value of IX10, which transects the openings of the Bay of Bengal and the Arabian Sea. Scientific objectives of HD sampling and examples of research targeting these objectives are outlined in Goni et al. (2009). The FR transects cross major ocean current systems and thermal structures. In some cases, for currents near a continental boundary, an extra profile that crosses the 200 m depth contour is made to mark the inshore edge of the current. The FR transects are selected to observe specific features of thermal structure (e.g. thermocline ridges), where ocean-atmosphere interaction is strong. Estimates of geostrophic velocity and mass transport integrals across the currents are made by low-pass mapping of temperature and dynamical properties on the section. The prototypes of FR transects are IX01 and PX02, which now have time series extending more than 20 years.
Fig. 3.2 (top) XBT network containing OceanObs'99 recommendations and (bottom) proposed transects in OceanObs'09. XBT observations transmitted in real time (red) and delayed time (blue) in 2008. (Source: Goni et al. 2009)
The earliest, the transect from Fremantle to Sunda Strait (Indonesia), began in 1983 and has been sampled 18 times per year since 1986. IX01 crosses the currents between Australia and Indonesia, including the Indonesian Throughflow, and has been used in many studies of the Throughflow and the Indian Ocean Dipole. The FR sampling produces well-resolved monthly time series of thermal structure along the transects. Using IX01, Meyers et al. (1995) showed that the mean thermal structure has generally westward flow in the deeper part of the thermocline and eastward shear in the shallow (<150 m) layer. They also showed that the strongest variability in temperature is at the northern end of the transect near Indonesia. The temperature sections were used to understand the relationship of interannual variation in the transport of the Indonesian Throughflow to ENSO (Meyers 1996). Further, the time variation of
temperature at the north end of IX01 clearly shows the strong subsurface upwelling associated with the start of the IOD events of 1994 and 1997, before the start of surface cooling. These and the other FRX time series have been used to understand how subsurface thermal structure varies across the Indian Ocean during IOD events (e.g. Rao et al. 2002; Feng and Meyers 2003). The use of FR lines in the Indonesian region to study the Indonesian Throughflow is discussed in the Indian Ocean white paper (Masumoto et al. 2009). Low density transects have both operational and scientific objectives, such as investigating intraseasonal to interannual variability in the tropical oceans, measuring the temporal variability of boundary currents, and investigating the historical relationship between sea surface height and upper ocean thermal structure. Many illustrative examples of applications of XBT observations, primarily from the LD mode, are presented in the XBT white paper (Goni et al. 2009).
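As an illustration of how XBT depth is obtained from elapsed time, the Python sketch below evaluates a quadratic fall-rate equation of the form z = a*t - b*t**2, using the commonly cited Hanawa et al. (1995) coefficients for Sippican T-7 probes as an assumed example; actual coefficients differ by probe type and are periodically recalibrated (cf. Wijffels et al. 2008). It also checks the HD-mode deployment spacing quoted above.

def xbt_depth(elapsed_s, a=6.691, b=0.00225):
    """Depth (m) of a falling XBT probe after elapsed_s seconds, from the
    quadratic fall-rate equation z = a*t - b*t**2. The coefficients are the
    commonly cited Hanawa et al. (1995) values for T-7 probes (an assumption;
    they vary by probe type and calibration)."""
    return a * elapsed_s - b * elapsed_s ** 2

# Roughly 770 m after two minutes, consistent with the ~6.5 m/s nominal fall
# speed quoted in the text.
for t in (30.0, 60.0, 120.0):
    print(f"t = {t:5.1f} s  ->  depth ~ {xbt_depth(t):6.1f} m")

# HD-mode spacing check: a ship at 20 kt covers 20 * 1.852 * 24 ~ 889 km/day,
# so 35 deployments per day corresponds to one probe every ~25 km.
print(20 * 1.852 * 24 / 35)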
3.2.4 Drifting Buoys

For several years, oceanographers and meteorologists have deployed satellite-tracked drifting buoys in support of their research and operational programmes. These two bodies of users have, however, sought different capabilities from their drifters: the oceanographers have mainly looked for designs which accurately follow water parcels at a given depth, whereas the meteorologists have equipped their drifters with air-pressure sensors to collect real-time observations for weather forecasting. Despite efforts by both user groups to develop combined programmes, these two main requirements have been largely incompatible, particularly with respect to the size and above-surface exposure of the drifter. The success of the low-cost WOCE Surface Velocity Programme (SVP) oceanographic drifter, with its accurately quantified water-following characteristics and proven longevity, prompted renewed interest in the development of a low-cost met-ocean drifter capable of satisfying the needs of both user communities. The result is the SVP Barometer (SVP-B) drifter, whose design and use is described in the DBCP Report (Sybrandy et al. 2009). This design, refined over several years and after extensive testing, further develops the original SVP drifter by the inclusion of a novel barometer port. This inexpensive but stable pressure sensor, combined with a data filtering algorithm, removes pressure spikes resulting from the repeated immersion of the drifter by waves. Drifting buoys normally measure sea surface temperature (SST) and air pressure, and by tracking their positions the surface currents (the resultant of Ekman and geostrophic currents) can be determined. Some drifters also have sensors to measure wind, temperature profile and salinity. The buoys are battery powered and typically last for one to two years. The buoys are disposable and can be deployed at sea by regular ship crews. Measurements are normally made hourly and the data are transmitted by satellite. Most drifters use the ARGOS satellite system for data transmission and positioning, although new systems such as Iridium are currently being evaluated as a pilot programme. At present, users can access web
pages at both ISDM (http://www.meds-sdmm.dfo-mpo.gc.ca/isdm-gdsi/drib-bder/index-eng.htm) and AOML (http://www.aoml.noaa.gov/phod/dac/gdp.html), where products and data are available. Integrated Science Data Management (ISDM) in Canada became the Responsible National Oceanographic Data Centre (RNODC) for drifting buoy data on behalf of JCOMMOPS. The present status of the global drifter array is shown in Fig. 3.3. Drifting buoys, along with Voluntary Observing Ships, provide the primary source of air pressure data over the oceans that are needed to run global and regional weather forecasting models. The SST data provided by drifting buoys are important for climate data sets. The key applications of surface drifter data are reducing the bias error in satellite SST measurements, mapping large-scale surface currents, and identifying their role in heat transport and in the generation of SST patterns and variability. Drifters are invaluable as independent validation tools for model- and satellite-derived currents (Ekman + geostrophic), are synthesized with altimetry and satellite winds to estimate absolute sea surface height (Niiler et al. 2003; Rio and Hernandez 2003), and have been used to understand the role of surface transport in the genesis of El Niño (Picaut et al. 2002; Lagerloef et al. 2003; McPhaden 2004). Maximenko et al. (2008) compared drifter ensemble-averaged velocities with the sum of time-averaged geostrophic and Ekman currents, and concluded that one drifter per 5° × 5° grid box is not adequate to capture or resolve most of the surface current features. Dohan et al. (2009) describe the data quality from drifters and the principal scientific insights gained during the last decade. There are numerous direct uses of sea surface velocity, such as for navigation and drift trajectories, advection calculations of ocean properties, spills, and Synthetic Aperture Radar (SAR) operations. Drifting buoy data are also used to study the physical characteristics and climatology of sea ice within the Antarctic sea ice zone, to trace the seasonal pathways of freshwater plumes (Sengupta et al. 2006), and to improve the surface current climatology (Shenoi et al. 1999).
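The statement that surface currents can be determined by tracking drifter positions can be illustrated with a minimal finite-difference calculation, sketched in Python below. The function name and the synthetic hourly fixes are invented for the example; operational processing at the drifter data assembly centres interpolates and quality-controls the raw positions first.

import numpy as np

R_EARTH_M = 6.371e6  # mean Earth radius (m)

def drifter_velocity(lat_deg, lon_deg, time_s):
    """Estimate eastward (u) and northward (v) surface velocity components
    (m/s) from successive drifter fixes by finite differences on a sphere.
    A minimal sketch only."""
    lat = np.radians(np.asarray(lat_deg, dtype=float))
    lon = np.radians(np.asarray(lon_deg, dtype=float))
    t = np.asarray(time_s, dtype=float)
    dt = np.diff(t)
    lat_mid = 0.5 * (lat[1:] + lat[:-1])
    u = R_EARTH_M * np.cos(lat_mid) * np.diff(lon) / dt   # eastward
    v = R_EARTH_M * np.diff(lat) / dt                     # northward
    return u, v

# Hourly fixes of a drifter moving ~0.5 m/s eastward at 10 degrees S:
t = np.arange(0, 4) * 3600.0
lon = 65.0 + 0.5 * t / (R_EARTH_M * np.cos(np.radians(-10.0))) * (180.0 / np.pi)
lat = np.full_like(t, -10.0)
u, v = drifter_velocity(lat, lon, t)
print(np.round(u, 3), np.round(v, 3))  # ~[0.5 0.5 0.5], ~[0 0 0]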
3.2.5 Acoustic Tomography

The ocean is largely transparent to sound, but opaque to electromagnetic radiation. Underwater sound is therefore a powerful tool for remote sensing of the ocean interior. Ocean acoustic tomography exploits this, using sound to measure temperatures and currents over large regions of the ocean (Munk et al. 1995). On ocean-basin scales, the technique is also known as acoustic thermometry. The technique relies on precisely measuring the time it takes sound signals to travel between two instruments, an acoustic source and a receiver, separated by distances in the range of 100–5,000 km. If the locations of the instruments are known precisely, the measurement of time-of-flight can be used to infer the speed of sound, averaged over the acoustic path. Changes in the speed of sound are primarily caused by changes in the temperature of the ocean; hence the measurement of the travel times is equivalent to a measurement of temperature. A 1°C change in temperature corresponds to a change of about 4 m/s in sound speed.
Fig. 3.3 Present status of global drifters and moorings as of August 2010 (closed circles: drifting buoys; squares: moored buoys). (Source: JCOMMOPS)
An oceanographic experiment employing tomography typically uses several source-receiver pairs in a moored array that measures an area of ocean. Sound is widely used for remote sensing of the ocean on small scales (e.g., acoustic Doppler current profilers), but acoustical measurements have been underexploited in regional and global ocean observations relative to in-situ instruments and electromagnetic radiation (Dushaw et al. 2009). Because the technique integrates temperature variations over a large region, the smaller-scale turbulent and internal-wave features that usually dominate point measurements are averaged out, and the large-scale dynamics can be better determined. Point measurements by thermometers (i.e., moorings or Argo floats) have to contend with 1–2°C of noise from these features, so that large numbers of instruments are required to obtain an accurate measure of average temperature. For measuring the average temperature of ocean basins, therefore, the acoustic measurement is quite cost-effective. Tomographic measurements also average variability over depth, since the ray paths cycle throughout the water column. Basin-wide and regional tomography were accepted as part of the ocean observing system by OceanObs'99 (Koblinsky and Smith 2001; Dushaw et al. 2001). Since then, a decade of measurements of basin-scale temperature using acoustic thermometry has been completed in the North Pacific Ocean. In this project acoustic sources located off central California (1996–1999) and north of Kauai (1996–1999, 2002–2006) transmitted to receivers distributed throughout the northeast and north central Pacific. The results show that the interannual, seasonal, and shorter-period variability was large compared to the long-term decadal trends. Acoustic travel-time data have been used previously in simple data assimilation experiments, and they can now be compared to assimilation products from state-of-the-art models from the ECCO (Estimating the Circulation and Climate of the Ocean) Consortium. Not surprisingly, comparisons between measured travel times and those predicted by ocean models constrained by satellite altimeter and other data show significant similarities and differences. Measured acoustic travel times have uncertainties much smaller than the differences between two model implementations by the ECCO Consortium. The acoustic data ultimately need to be combined with upper-ocean data from Argo floats and sea surface height data from satellite altimeters to detect changes in abyssal ocean temperature and to quantitatively determine the complementarity of the various data types (Dushaw 2003). In addition, passive acoustics can be used for a variety of purposes, such as: tracking, counting and studying the behavior of vocalizing marine mammals and fish; assessing and monitoring the ecological impacts of ocean warming and acidification on marine ecosystems and biodiversity; detecting nuclear tests; detecting and quantifying tsunamis; measuring rainfall (Riser et al. 2008); measuring the properties of undersea earthquakes (e.g., de Groot-Hedlin 2005) and volcanoes; monitoring the sound produced by high-latitude sea ice; and monitoring anthropogenic activities in marine protected areas and in areas of commercial use. The acoustic measurements supporting these projects can be made in real time and provide information about local ambient noise sources such as shipping, wind and rain, as well as noise from offshore wind farms.
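A worked example of the travel-time-to-temperature conversion described above is sketched below; the nominal sound speed of 1,500 m/s and the sensitivity of about 4 m/s per °C are representative values as quoted in the text, not site-specific constants.

def path_avg_temp_change(range_m, dtravel_s, c0=1500.0, dcdT=4.0):
    """Convert a measured change in acoustic travel time into a path-averaged
    temperature change. Travel time t = L / c, so dt ~ -L * dc / c**2 and
    dc ~ dcdT * dT, giving dT ~ -dt * c0**2 / (L * dcdT)."""
    return -dtravel_s * c0 ** 2 / (range_m * dcdT)

# Over a 3,000 km path, a travel-time decrease of 0.1 s corresponds to a
# warming of roughly 0.02 degC averaged along the path.
print(path_avg_temp_change(3.0e6, -0.1))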
3.2.6 Repeat Hydrography and Carbon Inventory

Despite numerous technological advances over the last several decades, ship-based hydrography from research vessels remains the only method for obtaining high-quality, high spatial and vertical resolution measurements of a suite of physical, chemical, and biological parameters over the full water column (Hood et al. 2009). It is worth noting that VOS and SOOP collect data while underway, whereas research vessels stop at stations and collect surface and subsurface data at full vertical resolution. Ship-based hydrography is essential for documenting ocean changes throughout the water column, especially for the deep ocean below 2 km (52% of global ocean volume). Hydrographic measurements are needed to (a) reduce uncertainties in global freshwater, heat, and sea-level budgets, (b) determine the distributions and controls of natural and anthropogenic carbon (both organic and inorganic), (c) determine ocean ventilation and circulation pathways and rates using chemical tracers, (d) determine the variability and controls in water mass properties and ventilation, (e) determine the significance of a wide range of biogeochemically and ecologically important properties in the ocean interior, and (f) augment the historical database of full water column observations necessary for the study of long-timescale changes. Shipboard hydrographic data provide the quality standard against which data from floats, other autonomous platforms and XBTs are compared, to assess their accuracy and to detect and correct systematic errors. The high cost of shipboard hydrography is balanced against its broad and unique capability to measure many parameters that cannot be measured by other means, and to measure with highest accuracy those that can. Cost factors limit the global hydrographic survey to fewer than about 1,000 profiles per year from the ocean surface to the bottom, while Argo floats deliver approximately 100,000 temperature/salinity profiles per year in the upper 2 km. The recommended hydrographic sections for the sustained decadal survey are shown in Fig. 3.4. Due to the large and increasing number of Argo profiling floats in the ocean and to their drifting nature, Argo floats are not recovered and their sensors cannot be recalibrated at the end of their lifetime. The main problem concerns the conductivity sensor, which may drift or show an offset due to biofouling and other problems. To ensure the quality of the data, salinity drift in the conductivity sensors is adjusted by comparison of Argo salinity to nearby high-quality salinity/temperature data (Wong and Owens 2009). In addition to salinity drift, systematic errors in float pressure measurements are also an ongoing concern (e.g. Willis et al. 2008). For both of these issues, the process of identifying and correcting systematic errors is dependent on, and its effectiveness is limited by, the volume and spatial distribution of recent shipboard CTD profiles. The requirements have not yet been established for the high-quality reference CTD data needed to validate and correct Argo. Similarly, shipboard CTD data are used to assess systematic changes over time in temperature versus depth errors from XBTs, for example to estimate and adjust the instrument's fall rate (Wijffels et al. 2008). Other parameters will also need to be collected, since future Argo floats are likely to carry sensors for dissolved oxygen, chlorophyll-a, particulate organic carbon, and possibly others.
Fig. 3.4 Recommended hydrographic sections for the sustained decadal survey (solid lines) and high-frequency repeat lines (dashed lines)
The CLIVAR and Carbon Hydrographic Data Office (CCHDO) is the repository and distribution centre for global CTD, hydrographic, carbon, and tracer data of the highest quality. These data are a product of WOCE, CLIVAR, the International Ocean Carbon Coordination Project (IOCCP), and other past, present and future oceanographic research programs. Hydrographic data acquired by investigators are pooled, verified, assembled and disseminated to users in different formats. The CCHDO's primary window to the research community is its web site (http://cchdo.ucsd.edu).
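The salinity-drift adjustment mentioned above, comparing float salinities with nearby high-quality reference data, can be caricatured by the short sketch below, which estimates a single constant offset on potential-temperature levels. The function and example values are hypothetical; the operational delayed-mode procedure (Wong and Owens 2009) uses objectively mapped reference fields and piecewise-linear fits in time rather than a constant offset.

import numpy as np

def salinity_offset(float_theta, float_sal, ref_theta, ref_sal):
    """Crude estimate of a constant salinity offset for an Argo float by
    comparing its salinities with reference (e.g. shipboard CTD) salinities
    interpolated onto the same potential-temperature levels. A sketch only."""
    order = np.argsort(ref_theta)
    ref_on_float = np.interp(float_theta,
                             np.asarray(ref_theta, float)[order],
                             np.asarray(ref_sal, float)[order])
    return float(np.median(np.asarray(float_sal, float) - ref_on_float))

# A float reading 0.02 too salty relative to the reference profile:
theta = np.array([2.0, 3.0, 4.0, 5.0])          # potential temperature (degC)
ref_s = np.array([34.70, 34.60, 34.55, 34.50])  # reference salinity
print(salinity_offset(theta, ref_s + 0.02, theta, ref_s))  # ~0.02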
3.2.7 Moorings

Moorings are capable of measuring some of the key variables needed to describe, understand and predict large-scale ocean dynamics and ocean–atmosphere interactions. Marine meteorological variables include those needed to characterize fluxes of momentum, heat and fresh water across the air–sea interface, namely, surface winds, SST, air temperature, relative humidity, downward short- and long-wave radiation, barometric pressure and precipitation. Physical oceanographic variables include upper-ocean temperature, salinity and horizontal currents. From these basic variables, derived quantities, such as latent and sensible heat, net surface radiation, penetrative shortwave radiation, mixed-layer depth, ocean density, and dynamic height (the baroclinic component of sea level) can be computed. The array design focuses on these marine meteorological and physical oceanographic variables,
though not all moorings will measure all variables. The moorings can also support sensors to measure CO2 concentrations in air and sea water, nutrients, bio-optical properties and ocean acoustics (International CLIVAR Project Office 2006). The Global Tropical Moored Buoy Array (GTMBA) is a multi-national effort to provide meteorological and ocean observational data in real time for climate research and forecasting (McPhaden et al. 2009a). The buoys are used to collect oceanographic and meteorological data for monitoring, forecasting, and climate research, particularly for ENSO studies. The array consists of the Tropical Atmosphere Ocean/Triangle Trans-Ocean Buoy Network (TAO/TRITON) in the Pacific, the Prediction and Research Moored Array in the Tropical Atlantic (PIRATA), and the Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA) in the Indian Ocean. These observing systems were designed and implemented within the framework of GOOS and GCOS. The primary objectives are to study variability on intraseasonal to decadal time scales, including ENSO and the Pacific Decadal Oscillation (PDO) in the Pacific, the meridional gradient mode and equatorial warm events in the Atlantic, the IOD and the Madden-Julian Oscillation (MJO) in the Indian Ocean, the mean seasonal cycle, including the Asian, African, Australian, and American monsoons, and trends in all three basins that may be related to global warming. These observations complement the other in-situ and satellite components of the global observing systems. The GTMBA is built primarily around the Autonomous Temperature Line Acquisition System (ATLAS) moorings of NOAA's Pacific Marine Environmental Laboratory (PMEL) and the TRITON moorings of the Japan Agency for Marine-Earth Science and Technology (JAMSTEC). A schematic diagram of the ATLAS moorings, with the locations of the different sensors fitted on the buoys and on the mooring lines, is available on the PMEL website. These moorings have special attributes that make them a valuable technology for tropical climate studies. In particular, (1) they can be instrumented to measure both upper-ocean and surface meteorological variables involved in ocean-atmosphere interactions; (2) they provide time series measurements at fine temporal resolution (minutes to hours) to resolve high-frequency oceanic and atmospheric fluctuations that would otherwise be aliased into the lower-frequency climate signals of primary interest; and (3) they can be deployed and maintained on a fixed grid of stations, so that measurements do not confound temporal and spatial variability. The data from surface moorings are transmitted to shore via the ARGOS satellite system in real time, which ensures (a) use of these data for operational weather, ocean, and climate forecasting and (b) retrieval of data even if a mooring is lost. The data are posted daily and made freely available on the NOAA/Pacific Marine Environmental Laboratory GTMBA web site (http://www.pmel.noaa.gov/tao/global/global.html) as well as on several web sites maintained by partner institutions around the world. Service Argos inserts the data onto the GTS several times a day. Details about the different types of moorings used in the GTMBA, including subsurface ADCP and deep ocean moorings, can be found in McPhaden et al. (2009a). Mooring sensor specifications (accuracy, resolution, range), sensor calibration procedures, and data quality control for both real-time and delayed-mode data streams are available from websites maintained by PMEL and JAMSTEC.
Fig. 3.5 The Global Tropical Moored Buoy Array in October 2009, comprising TAO/TRITON in the Pacific, PIRATA in the Atlantic and RAMA in the Indian Ocean. Symbols in the original figure distinguish standard moorings, flux reference sites, flux- and CO2-enhanced sites, CO2-enhanced sites and CO2- and bio-chem-enhanced sites; solid symbols denote operating and open symbols planned moorings. (Source: McPhaden et al. 2009a)
The present status of the GTMBA in the global ocean is shown in Fig. 3.5. TAO/TRITON data have been used in over 600 refereed journal publications since the array's inception in 1985. TAO/TRITON has been the dominant source of upper ocean temperature data near the equator over the past 25 years. The data show that depth-averaged temperature in the upper 300 m, an index for upper ocean heat content, leads Niño3.4 SST (area-averaged SST anomalies between 5°N–5°S and 170°–120°W) typically by 1–3 seasons. A build-up of heat content at the end of this record, followed by rising Niño3.4 SSTs, indicates development of the 2009 El Niño event. This relationship between upper ocean heat content and SST not only validates recharge oscillator theory, but also highlights the role of heat content as the primary source of predictability for ENSO. The simple relationship has motivated the inclusion of upper ocean heat content as a predictor in some statistical ENSO forecast models (e.g., Clarke and van Gorder 2003; McPhaden et al. 2006), analogous to the assimilation of upper ocean temperature in dynamical ENSO forecast models (e.g., Latif et al. 1998). PIRATA data have been very influential in identifying the causes of the observed SST variations in the tropical North Atlantic over the past 10 years (McPhaden 2008). Year-to-year swings in tropical North Atlantic SST appear to be principally related to wind-evaporation-SST feedbacks (Chang et al. 2001), with contributions from shortwave radiation and horizontal advection. RAMA, even in the initial stages of development, is providing valuable data for describing and understanding variability in the Indian Ocean. For example, a pronounced semiannual cycle in upper-ocean temperature, salinity, and zonal velocity is evident in the first three years of data from near-equatorial moorings at 90°E (Hase et al. 2008). The semiannual velocity variations are referred to as Wyrtki Jets, and their zonal mass transports are largely governed by wind-forced linear dynamics (Nagura and McPhaden 2008). They are also strongly modulated on 30–50 day intraseasonal time scales related to the MJO (Masumoto et al. 2005).
Variations in meridional velocity on the equator, in contrast, are dominated by higher-frequency 10–20 day period oscillations, which are evident not only in the upper 400 m but also at depths greater than 2,000 m (Murty et al. 2006; Ogata et al. 2008). Sengupta et al. (2004) identified these oscillations as wind-forced mixed Rossby-gravity waves. RAMA data indicate that subsurface temperature variations lead those at the surface by a season near the equator in the eastern basin, suggesting that upper ocean thermal structure may be a source of predictability for the IOD, as it is in the Pacific for ENSO (Horii et al. 2008). Moored buoy data are routinely used in ocean state estimation, operational ocean analyses, operational atmospheric analyses and reanalyses. These data have also been used extensively for model validation, and for satellite validation of surface winds, SST, rainfall, and shortwave radiation. Further, in order to build and maintain a multidisciplinary global network for a broad range of research and operational applications, the new programme "OceanSITES" is evolving (Send et al. 2009). The OceanSITES programme is the global network of open-ocean sustained time-series measurements, called ocean reference stations, being implemented by an international partnership of researchers. OceanSITES provides fixed-point time series of various physical, biogeochemical, and atmospheric variables at different locations around the globe, from the atmosphere and sea surface to the seafloor. OceanSITES moorings are an integral part of the Global Ocean Observing System. They complement satellite and other in-situ data by adding the dimensions of time and depth. All OceanSITES data are publicly available. More information about the project is available at http://www.oceansites.org.
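The upper-ocean heat content index discussed above, the depth-averaged temperature of the upper 300 m, can be computed from discrete mooring sensor depths by simple trapezoidal integration, as in the sketch below; the sensor depths and temperatures are illustrative values, not an actual ATLAS configuration.

import numpy as np

def depth_avg_temp(depths_m, temps_c, zmax=300.0):
    """Depth-averaged temperature (degC) over the upper zmax metres from
    discrete sensor depths, using trapezoidal integration between sensors."""
    z = np.asarray(depths_m, dtype=float)
    t = np.asarray(temps_c, dtype=float)
    t_zmax = np.interp(zmax, z, t)        # temperature interpolated to zmax
    keep = z < zmax
    z = np.append(z[keep], zmax)
    t = np.append(t[keep], t_zmax)
    layer = 0.5 * (t[1:] + t[:-1]) * np.diff(z)   # trapezoidal layer integrals
    return float(np.sum(layer) / (z[-1] - z[0]))

# Illustrative sensor depths and temperatures (not the real configuration):
depths = [1, 25, 50, 75, 100, 125, 150, 200, 250, 300, 500]
temps = [29.0, 28.8, 28.3, 27.0, 25.0, 22.0, 19.0, 15.5, 13.5, 12.0, 9.0]
print(round(depth_avg_temp(depths, temps), 2))  # ~20.3 degC heat content proxy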
3.2.8 Argo Profiling Floats

The Argo "oceanographic radiosonde" is a revolutionary concept that enhances the real-time capability for the measurement of temperature and salinity through the upper 2,000 m of the ice-free global ocean. The exclusion of the high latitudes was due to the inability of early floats to sample under sea ice. However, technological advances in float design in recent years now give us this capability. Advancements have come through the re-design of hardware (i.e. armoured ice floats with ice-hardened antennae), software (ice-avoidance algorithms and open-water tests) and communications (Iridium), allowing the transmission of stored winter profiles. Through geostrophic principles, together with reference-level velocities, Argo contributes to the global description of the variability of the upper-ocean thermohaline structure and circulation on seasonal and inter-annual time scales. Under a unique, internationally coordinated effort, a global array of about 3,000 floats has been established, at a spatial resolution of one float per 3° × 3° grid box. The data from these floats have helped to study the state of the upper ocean and the patterns of ocean climate variability, including heat and freshwater storage and transport (Freeland et al. 2009). The data are collected by Argo floats that spend most of their working life drifting with the currents at a depth of 1,000 or 2,000 m, where they are stabilized at a constant level by being less compressible than sea water.
At typically 10-day intervals, the floats pump fluid into an external bladder and rise to the surface (taking about 6 h), measuring a profile of temperature and salinity. On surfacing, the data are transmitted to satellites (ARGOS or Iridium), which also obtain a series of float positions. When this task is completed the bladder deflates, the float returns to its original density and sinks back to depth to drift until the (usually 10-day) cycle is repeated. Data from Argo floats are available to users through two streams: real time (with only gross errors corrected or flagged) and delayed mode (where corrections to salinity values have been estimated by experts familiar with the particular geographical environment). At present the delayed-mode data delivery system has yet to be fully implemented. The real-time data are placed on the GTS, which delivers (mostly meteorological) data to operational centres throughout the world. They are also available through two linked Argo Global Data Centres (GDACs) in Brest, France (Coriolis) and Monterey, California (US GODAE server). The global distribution of floats reporting on the Argo system is shown in Fig. 3.6. Argo floats complement the remote-sensing observing systems, filling the large gaps that exist in the global sampling network and providing essential information for subsurface ocean state estimation. The combination of Argo and satellite altimetry has enabled a new generation of applications. Global maps of sea level, on time scales of weeks to several years, can be interpreted with full knowledge of the upper ocean stratification. Global ocean and climate models can be initialized, tested and constrained with a level of information hitherto not available. The drift estimates from such an array in addition provide useful estimates of deep pressure fields (reference level). Altimeters, together with the sea level gauge network, provide accurate measurements of time-varying sea surface height (SSH) globally every 10 days. On seasonal and longer time scales, SSH is dominated by changes in subsurface density. Mean sea level change is mainly due to changes in the volume of the ocean and in the shape of the ocean basins at comparatively long time scales. The change in volume is caused by changes in sea water density (steric) and mass (eustatic). Changes in the temperature (thermosteric) and salinity (halosteric) of the water column change the sea water density, whereas melting of glaciers on land and of the Greenland and Antarctic ice sheets changes the mass of water in the ocean. The shape of the ocean basins changes due to vertical land movement, which is associated with local tectonic activity and post-glacial rebound. The steric and eustatic contributions to total sea level rise can be quantified using Argo profiling floats and GRACE respectively, and the results can be compared indirectly with altimeter sea level data. On global scales, Argo and Jason, together with satellite gravity measurements, partition global sea level rise into its steric and mass-related components (Willis et al. 2008; Cazenave et al. 2009; Leuliette and Miller 2009; Wunsch et al. 2007).
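The steric part of this sea level budget can be illustrated with a simple calculation from temperature and salinity anomaly profiles, sketched below using a linearised equation of state with assumed constant expansion and contraction coefficients; operational estimates use the full equation of state (e.g. TEOS-10) with pressure-dependent coefficients.

import numpy as np

# Linear equation-of-state coefficients (assumed representative values):
ALPHA = 2.0e-4   # thermal expansion coefficient (1/degC)
BETA = 7.6e-4    # haline contraction coefficient (1/psu)

def steric_height_anomaly(z_m, dT, dS):
    """Steric sea level anomaly (m) from temperature and salinity anomalies
    on depth levels z_m, using d(rho)/rho ~ -ALPHA*dT + BETA*dS and
    eta_steric = -integral of d(rho)/rho over depth. A sketch only."""
    z = np.asarray(z_m, dtype=float)
    frac = -ALPHA * np.asarray(dT, float) + BETA * np.asarray(dS, float)
    return float(np.sum(-0.5 * (frac[1:] + frac[:-1]) * np.diff(z)))

# A 0.1 degC warming over the upper 2,000 m with no salinity change gives
# roughly 2.0e-4 * 0.1 * 2000 = 0.04 m (4 cm) of steric rise.
z = np.linspace(0.0, 2000.0, 21)
print(round(steric_height_anomaly(z, np.full(21, 0.1), np.zeros(21)), 3))

# Budget closure idea: total (altimetry) ~ steric (Argo) + mass (GRACE).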
Applications of Argo data are numerous and varied, including initialization of ENSO forecast models, initialization of short-range ocean forecasts, routine production of high-quality global ocean analyses, and studies of predictability on interannual and decadal time scales. A substantial improvement in seasonal forecast skill due to Argo profile data has been demonstrated (Balmaseda and Anderson 2009), even during the period prior to full deployment of the Argo array.
Fig. 3.6 The global distribution of Argo profiling float locations reporting to the Argo data system as of December 2010. (Source: JCOMMOPS)
The combination of Argo (which provides broad spatial coverage) and moorings (which provide the high temporal resolution needed for equatorial wave propagation and intra-seasonal variability) also allows tropical variability to be observed at greater depth (Matthews et al. 2007) and beyond the equatorial band in all oceans (Cai et al. 2005). Data from Argo and RAMA were used to illustrate air-sea interaction contributing to the growth of the devastating 2008 tropical cyclone Nargis (McPhaden et al. 2009c). Heat and fresh water are fundamental elements of climate, and climate variability can be quantified by tracking heat and fresh water as they are transported and stored within, and exchanged between, the atmosphere, oceans, land, and cryosphere. Temperature and salinity profile measurements over the global ocean provide estimates of both the storage and the large-scale transport of heat and freshwater (Freeland et al. 2009 and references therein). Although Argo is able to provide information on the ocean's role in the planetary heat and water budgets, the important contributions of boundary currents (Send et al. 2009) to ocean heat transport and of the abyssal oceans to heat storage are not yet adequately observed, since boundary currents, fronts, and eddies require finer resolution than the present sampling provides. The most direct oceanic influence on the atmosphere comes from the surface, i.e. sea surface temperature, as well as from sea level variability. Satellites provide global views of sea surface temperature and, in future, sea surface salinity. These data require in-situ measurements for calibration purposes and for their interpretation. Argo can help satisfy both of these requirements. For example, Uday Bhaskar et al. (2009), using satellite and in-situ data, have shown that Argo near-surface temperature (5 or 10 m) can be used as SST in the Indian Ocean. Argo's observation of surface layer structure globally contributes to studies of atmosphere-ocean interactions (Freeland et al. 2009 and references therein). Ocean salinity is an important component (and indicator) of variability in the global water cycle. It provides information on the exchange of freshwater with the atmosphere (e.g., evaporation, precipitation) and with the terrestrial and cryospheric components of the global climate system, and on storage within the ocean. Ocean salinity is a fundamental ocean state variable and a tracer of ocean circulation, an important dynamical ocean process that governs the uptake and redistribution of ocean heat and carbon, which are critical elements of the global climate system. Thus, understanding and predicting the global water cycle in the context of global climate change can only be fully realized with an understanding of the marine branch of the hydrological cycle. Ocean salinity changes also have a direct impact on the exchange of CO2 between ocean and atmosphere and may affect marine species and ecosystems. Current knowledge of ocean salinity variability is hampered by a lack of sufficiently long salinity records. Available observations indicate that remarkable changes of ocean salinity are underway in some regions.
Unfortunately, it is unclear if these changes are attributable to natural variations, what processes may be involved, how they may or may not be consistent with changes in other components (e.g., precipitation) of the global water cycle, how long such changes have been underway, or how widespread they might be. The Argo float observation network is a critical component of a global salinity observing system.
3.2.9 HF Radar

Real-time surface current information is a valuable supplement to understanding coastal air-sea interaction and dynamical processes at coastal scales. Coastal surface current information may be correlated with winds and tidal currents, among other physical phenomena. High-frequency (HF) radars have been used for measuring surface current fields and ocean-wave spectra. The physics behind HF radar is based on backscattering from a moving, rough sea surface. The radar transmits electromagnetic waves of 6–30 MHz (50–10 m wavelength), which travel along the conductive sea surface beyond the horizon by ground-wave propagation and are scattered back from ocean waves of half the electromagnetic wavelength (Bragg scattering). The Doppler spectrum of the scattered signal is a measure of the moving waves and of the speed of the surface currents carrying the ocean waves. Ocean wave height and the wave directional spectrum can also be inferred from the second-order sea echoes of the Doppler spectrum. The Doppler shift of the backscattered signal is used for measuring the radial current speed relative to the radar site. If two radar sites measure the radial velocity of a patch of water from two different angles, it is possible to calculate the two horizontal components of the surface velocity. The surface current measured is a horizontal mean over several km in both range and azimuth, over approximately the upper 0.5–1.0 m of the ocean (the penetration depth of the scattering ocean waves), and over some 10 min of measuring time. These radar sites provide coastal-ocean surface current and wave information offshore out to 300 km. More detailed descriptions of the theory of HF radar can be found in numerous articles (e.g., Gurgel et al. 1999; Barrick et al. 1985). As part of the Integrated Ocean Observing System (IOOS), the US has installed a number of HF radars on its west and east coasts. A prototype real-time data architecture, initially developed through funding from the National Science Foundation (NSF), is now being integrated by the Coastal Observing Research and Development Center (CORDC) at the Scripps Institution of Oceanography with existing HF radar data networks through a joint development program administered and managed by the National Data Buoy Center (NDBC) and the National Ocean Service (NOS), with oversight provided by the National Oceanic and Atmospheric Administration's (NOAA) IOOS program office (Terrill et al. 2006). An excellent online reference containing an introduction to the principles of HF radar can be found at the Rutgers University Coastal Ocean Observation Lab (RUCOOL, http://marine.rutgers.edu/cool). The coastal radar locations on the east and west coasts of the US and daily averaged surface currents (6 km resolution) derived from HF radar along the US coast are available at http://cordc.ucsd.edu/projects/mapping/maps/. The validation of wave observations against moorings and of current observations against surface drifters is explained in detail by Kohut et al. (2008). Surface current observations using HF radar and their assimilation into the New York Harbour observing and prediction system have been reported by Gopalakrishnan (2008). Many coastal ocean radars have been installed at coastal stations around the world.
The data provided by coastal ocean radars are already useful for many operational applications and research uses (http://www.codar.com/bib_05-present.htm).
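The combination of radial velocities from two (or more) sites into a single surface current vector, as described above, amounts to solving a small linear system, as the sketch below illustrates; the bearings and speeds are invented for the example, and operational processing additionally weights the solution by geometric dilution of precision and signal quality.

import numpy as np

def combine_radials(bearings_deg, radial_speeds):
    """Combine radial current speeds from two or more HF radar sites into an
    (u, v) surface current vector by least squares. Each site i measures
    v_r_i = u*sin(theta_i) + v*cos(theta_i), where theta_i is the bearing
    (degrees clockwise from north) from the site to the patch of water and
    positive v_r is directed away from the site. A geometry sketch only."""
    theta = np.radians(np.asarray(bearings_deg, dtype=float))
    A = np.column_stack([np.sin(theta), np.cos(theta)])
    uv, *_ = np.linalg.lstsq(A, np.asarray(radial_speeds, float), rcond=None)
    return uv  # (u eastward, v northward), same units as the radial speeds

# A patch of water seen at bearings 045 and 135 deg from two hypothetical sites:
true_u, true_v = 0.30, 0.10  # m/s
radials = [true_u * np.sin(np.radians(b)) + true_v * np.cos(np.radians(b))
           for b in (45.0, 135.0)]
print(np.round(combine_radials([45.0, 135.0], radials), 3))  # ~[0.3, 0.1]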
3.2.10 Gliders

Gliders are small autonomous underwater vehicles which were developed to carry out in-situ observations of the upper 1 km of the ocean. They enhance the capabilities of profiling floats by providing some level of manoeuvrability and hence position control. They perform saw-tooth trajectories from the surface to depths of 1,000 m along re-programmable routes (using a two-way satellite link). There is around 2–6 km between surfacings when diving to 1 km depth. They achieve vertical speeds of 10–20 cm/s and forward speeds of 20–40 cm/s and can be operated for a few months before they have to be recovered (Davis et al. 2002). They can record temperature, salinity and pressure data and, depending on the model, some biogeochemical data, such as dissolved oxygen and fluorescence/optical backscattering at various angles/wavelengths (Chl-a, CDOM, phycoerythrin, turbidity, etc.). They can also be equipped with acoustic modems and hydrophones for underwater positioning and underwater data telemetry. Gliders can "fly" underwater along slightly inclined paths without a propeller. A change in volume (generated by filling an external oil bladder) creates positive or negative buoyancy. Because of the fixed wings, the buoyancy force results in forward velocity as well as vertical motion. Gliders therefore move in a sawtooth pattern, gliding downward when denser than the surrounding water and upward when buoyant. Pitch and roll can be controlled by modifying the internal mass distribution, and gliders automatically align the positions of the centre of buoyancy and the centre of gravity to achieve the desired angle of ascent or descent. Either a rudder or roll control is used for navigation through lists of waypoints. The high efficiency of the propulsion system enables gliders to be operated for several months, during which they may cover thousands of kilometres. Davis et al. (2008) have operated gliders over many years in the eastern Pacific to perform repeat sections. Similar long sections along the coasts of the USA in the Pacific and in the Atlantic (Castelao et al. 2008; Glenn et al. 2008; Perry et al. 2008) demonstrated the capacity of gliders to carry out, over years, measurements of the local vertical structure of the ocean over 0–200 or 0–1,000 m, from the near-shore environment (10–100 m depth) to the open sea (hundreds of kilometres offshore). Other important aspects of gliders are that (1) the longest glider section ever done with one set of batteries is 6,000 km long (Eriksen and Rhines 2008) and (2) crossing very strong currents is possible (such as the Gulf Stream, Nevala 2005). The Australian National Facility for Ocean Gliders (ANFOG) under IMOS uses gliders to observe the boundary currents and shelf processes around Australia. Glider technology is advancing quickly, and will be ideal for monitoring water masses and currents in a variety of oceanic regimes. In regions of divergence zones and of boundary currents near the continental slope or steep topographic features, gliders contribute immensely to measuring subsurface parameters.
This will also facilitate the understanding of meso- and submesoscale processes. Presently, the assimilation of glider temperature and salinity data is already operational in regional and global models (Testor et al. 2009). The real-time data are archived at the Coriolis Data Center, Brest, France.
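The figure of roughly 2–6 km between surfacings quoted above follows directly from the glide geometry: with the vertical and forward speeds given in the text and a 1,000 m dive depth, the distance per down-up cycle is as computed in the sketch below (straight glide legs and constant speeds are assumed).

def distance_between_surfacings(dive_depth_m, w_vert_ms, u_fwd_ms):
    """Horizontal distance (km) covered in one glider down-up cycle, assuming
    straight glide legs with constant vertical speed w_vert_ms and forward
    speed u_fwd_ms (the 0.10-0.20 and 0.20-0.40 m/s ranges quoted in the text)."""
    leg_time_s = dive_depth_m / w_vert_ms        # one leg (down or up)
    return 2.0 * leg_time_s * u_fwd_ms / 1000.0  # two legs per cycle, in km

# The slow and fast ends of the quoted speed ranges bracket ~2-6 km between
# surfacings for 1,000 m dives:
print(distance_between_surfacings(1000.0, 0.20, 0.20))  # ~2 km
print(distance_between_surfacings(1000.0, 0.10, 0.30))  # ~6 km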
3.3 Basin Scale Observing System—IndOOS

Of the three major oceans—Pacific, Atlantic, and Indian—the Indian Ocean is the only one that does not extend into the northern subtropics. This is a consequence of the Asian landmass restricting the Indian Ocean to south of about 25°N; hence it cannot transport heat gained in the tropics to higher northern latitudes, as the Pacific and Atlantic oceans do, mainly via their western boundary currents. Furthermore, the Indian Ocean is the only ocean with a low-latitude opening in its eastern boundary, and it gains additional heat from the tropical Pacific via the Indonesian Throughflow. This unique geography has important implications for the oceanic circulation physics, and consequently for climate and the biogeochemistry of the ocean, giving the Indian Ocean many unique features. Heat is carried southward along the western coast of Australia toward the southern subtropics. The Indian Ocean consequently has a unique system of three-dimensional currents and interactions with the atmosphere that redistribute heat to keep the ocean approximately in long-term thermal equilibrium (International CLIVAR Project Office 2006). Further, the strong influence of the monsoon systems generates distinct seasonal variations in the upper ocean. Previous attempts to measure and simulate the ocean variability reveal a rich spectrum of variability spanning intraseasonal to interannual, decadal, and much longer time-scale phenomena. Combinations of and interactions among these phenomena cause significant climate variability over and around the Indian Ocean. Despite the important role of the Indian Ocean in the monsoons, in climate variability and in global climate change through atmospheric and oceanic teleconnections, a long-term, sustained observing system in the Indian Ocean had not been established, leaving it the least observed of the three major basins. Recognizing this observation gap, an enthusiastic spirit emerged after the OceanObs'99 meeting, resulting in the development of a plan for the Indian Ocean Observing System (IndOOS) under the coordination of the CLIVAR/GOOS Indian Ocean Panel (Meyers and Boscolo 2006). A schematic diagram of IndOOS and the regional observing systems is shown in Fig. 3.7. The outstanding research issues that need to be addressed with observations to advance the understanding of the role of the Indian Ocean in the climate system and its predictability are (1) seasonal monsoon variability and the Indian Ocean, (2) intraseasonal variability, (3) the Indian Ocean zonal dipole mode and El Niño–Southern Oscillation, (4) decadal variation and warming trends in the upper Indian Ocean, (5) the southern Indian Ocean and climate variability, (6) circulation and the Indian Ocean heat budget (Indonesian Throughflow, shallow and deep overturning cells), (7) biogeochemical cycling in the Indian Ocean and (8) operational oceanography. The status of each element of IndOOS is briefly described below.
Fig. 3.7 Schematic chart of IndOOS, showing the RAMA mooring array, the Argo float array, the XBT/XCTD lines, the surface drifting buoy array, the real-time and near real-time tide gauge network (including the tsunami buoy network), process studies (e.g. MISMO, CIRENE, INSTANT, LOCO) and the regional ocean observing systems (ROOS; e.g. ASEA, BOB, InaGOOS, CORDIO, ASCLME, IMOS). Fixed-location in-situ observations of IndOOS are indicated in detail; the Argo floats and surface drifters are scattered widely within the Indian Ocean, and the satellite measurements cover the surface of the whole area.
3.3.1 Moorings

The basin-scale mooring array is essential for understanding the role of the ocean in the Monsoon Intraseasonal Oscillation (MISO) and the Madden-Julian Oscillation (MJO), and for identifying their limits of predictability. These are long-lasting weather patterns that evolve in a systematic way over periods of four to eight weeks. The intense, long-lasting weather conditions associated with the MISO and MJO interact strongly with the temperature and salinity structure of the ocean mixed layer, but the physics is not yet understood, nor is it fully built into coupled models. The role of surface currents in the evolution of intraseasonal variations is not known. The air–sea heat and freshwater fluxes are poorly known. The array will provide vital information on these processes.
It is also needed to understand mixed-layer dynamics and the role of currents in interannual variations, such as the IOD. Operational ocean-state estimation, such as the production of daily maps of currents and thermal structure for the marine industry and defence, is not possible without the array. While this report is primarily concerned with oceanographic measurements, the meteorological measurements (particularly at moorings) will be extremely valuable for data assimilation issues concerned with weather forecasting and reanalysis efforts. The basin-scale mooring array, called the Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA) (McPhaden et al. 2009b), consists of a total of 46 moorings, of which 38 are ATLAS/TRITON-type surface moorings. Seven of these surface moorings are designated surface flux reference sites, with enhanced flux measurements. The surface mooring system can measure temperature and salinity profiles from the surface down to 500 m depth as well as the surface meteorological variables, and the observed data are transmitted in real time via Argos satellites. In addition to these surface buoys, there are five subsurface ADCP moorings along the equator to observe current profiles in the upper equatorial ocean, and three deep current-meter moorings with ADCPs in the central and eastern equatorial regions. The RAMA array design was evaluated and supported by observing system simulation experiments (Oke and Schiller 2007; Vecchi and Harrison 2007). The array has been implemented rapidly in recent years, largely through bi-national activities involving Japan, India, the USA, Indonesia, China, France, the Netherlands and South Africa. Early observations from this array provide an invaluable data set for analyses of Indian Ocean variability (Masumoto et al. 2009 and references therein). For example, long-term current observations at 90°E on the equator reveal large-amplitude intraseasonal variability in both the zonal and meridional components, in addition to the well-known semiannual and annual variations. RAMA mooring data were also used to capture the subsurface evolution of the three consecutive Indian Ocean Dipole events from 2006 to 2008, with a clear negative temperature anomaly at thermocline depth appearing a few months before the surface signatures of the IOD events. Mooring data were also used to observe the oceanic response to cyclone Nargis, which made landfall in Myanmar on 2 May 2008. Intense ocean mixing and significant turbulent heat loss from the ocean surface (~600 W/m2) occurred as Nargis passed near the RAMA buoy at 15°N, 90°E in the Bay of Bengal (McPhaden et al. 2009c). Surface moorings from the RAMA array also allowed process studies of the strong upper ocean response to the MJO in the Seychelles-Chagos Thermocline Ridge region (Vialard et al. 2008).
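The turbulent (latent plus sensible) heat loss quoted for the Nargis case can be estimated from the meteorological variables measured by the mooring using bulk formulas; the sketch below uses constant exchange coefficients and round numbers for the air-sea state, so it reproduces only the order of magnitude, whereas operational mooring flux products typically use a stability-dependent bulk algorithm such as COARE.

import numpy as np

RHO_AIR = 1.2        # air density (kg/m3)
LV = 2.5e6           # latent heat of vaporisation (J/kg)
CP_AIR = 1004.0      # specific heat of air (J/kg/K)
CE = CH = 1.2e-3     # bulk exchange coefficients (assumed constant values)

def sat_specific_humidity(sst_c, p_hpa=1010.0):
    """Saturation specific humidity (kg/kg) at the sea surface, using a
    Tetens-type formula with the usual 0.98 factor for salt water."""
    es = 6.112 * np.exp(17.67 * sst_c / (sst_c + 243.5)) * 0.98
    return 0.622 * es / (p_hpa - 0.378 * es)

def turbulent_heat_flux(wind_ms, sst_c, tair_c, q_air):
    """Latent + sensible heat flux (W/m2, positive = ocean heat loss) from a
    constant-coefficient bulk formula. A sketch, not the COARE algorithm."""
    qs = sat_specific_humidity(sst_c)
    latent = RHO_AIR * LV * CE * wind_ms * (qs - q_air)
    sensible = RHO_AIR * CP_AIR * CH * wind_ms * (sst_c - tair_c)
    return latent + sensible

# Cyclone-strength winds over a warm Bay of Bengal surface give turbulent
# losses of several hundred W/m2, the order of magnitude quoted for Nargis.
print(round(turbulent_heat_flux(wind_ms=20.0, sst_c=30.0, tair_c=27.0, q_air=0.018)))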
3.3.2 Argo Profiling Floats

Argo floats are another revolutionary change in in-situ ocean observing in the Indian Ocean. The build-up began in 2003 as part of the global description of the variability of the upper ocean thermohaline structure and circulation on seasonal and
inter-annual time scales. Data from these floats, together with satellite-based and other in-situ observations, enhance the understanding of the ocean circulation pattern and its influence on global climate variability, and contribute to improving the prediction skill of seasonal climate variability. The Indian Ocean (north of 40°S) requires 450 floats to meet the Argo design of one float per 3° × 3° grid box. Around 441 floats were active as of October 31, 2009. Still, there are some gaps, as well as regions where more floats than required are present. The Argo programme's unprecedented spatial and temporal coverage of density and geostrophic currents is opening new perspectives on circulation research. The new observations, combined with a hierarchy of models, are likely to address many unanswered questions. Argo observations in the Indian Ocean are creating many new insights from many different authors studying many aspects of the Indian Ocean. Thus Argo enables a new understanding of the upper ocean variability of the Arabian Sea, such as summer cooling during contrasting monsoons, the temporal variability of the core depth of the Arabian Sea High Salinity Water mass (ASHSW), buoyancy flux variations and their role in air-sea interaction, identification of the low-salinity plume off the Gulf of Khambhat, India, during the post-monsoon period, mixed layer variability of the western Arabian Sea, seasonal variability of the observed barrier layer, and the importance of upper ocean temperature and salinity during cyclones. Argo data have also revealed a pronounced westward propagation of subsurface warming in the southern tropical Indian Ocean associated with Rossby waves on the sloping thermocline, and intense cooling of the sea surface at intraseasonal time scales in the southern tropical Indian Ocean during austral summer. These data are also used to study the impact of assimilation in simulating temperature and salinity in the Indian Ocean (Masumoto et al. 2009 and references therein).
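The requirement of about 450 floats quoted above can be checked with a rough back-of-the-envelope calculation: dividing an assumed ocean area by the area of a single 3° × 3° box at a representative latitude gives a number of the same order, as in the sketch below (both the ocean area of roughly 50 million km2 and the 20° reference latitude are assumed round numbers).

import numpy as np

DEG_KM = 111.2  # kilometres per degree of latitude (approximate)

def floats_needed(ocean_area_km2, box_deg=3.0, ref_lat_deg=20.0):
    """Rough number of floats needed for one float per box_deg x box_deg box,
    using the box area at a single representative latitude. Both inputs in
    the example below are assumed round numbers, not measured values."""
    box_area = (box_deg * DEG_KM) * (box_deg * DEG_KM * np.cos(np.radians(ref_lat_deg)))
    return ocean_area_km2 / box_area

# ~50 million km2 for the Indian Ocean north of 40 deg S gives a requirement
# of order 450-500 floats, consistent with the design figure in the text.
print(round(floats_needed(5.0e7)))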
3.3.3 SOOP/XBT Lines

Several SOOP XBT lines obtain frequently repeated and high-density section data. The frequently repeated lines in the Indian Ocean are narrow shipping routes allowing nearly exact repeat sections. At least 18 sections per year are recommended in order to avoid aliasing the strong intraseasonal variability in this region. The CLIVAR/GOOS Indian Ocean Panel reviewed XBT sampling in the Indian Ocean and prioritized the lines according to the oceanographic features that they monitor (International CLIVAR Project Office 2006). The highest priority was given to lines IX1 and IX8. The IOP recommended weekly sampling on IX1 because of the importance of the Throughflow in the climate system. IX8 monitors flow into the western boundary region, as well as the Seychelles-Chagos Thermocline Ridge, a region of intense ocean-atmosphere interaction at inter-annual time scales. IX8 has proven to be logistically difficult to implement, so an alternate line may be needed. More than 50 papers have been published based wholly or in part on the frequently repeated XBT lines in the Indian Ocean.
The research results include improved understanding of the seasonal, interannual and decadal variation of the volume transport of major open-ocean currents; characterization of the seasonal and interannual variation of thermal structure and its relationship to climate and weather (e.g. the IOD, tropical cyclones); surface-layer heat budgets identifying the relationship between sea surface temperature, thermocline depth and ocean circulation at interannual to decadal timescales; Rossby and Kelvin wave propagation; and validation of the variation of thermal structure and currents in models (Masumoto et al. 2009 and references therein).
3.3.4 Drifting Buoys

The surface drifting buoy array in the Indian Ocean is designed to maintain one buoy in every 5-degree box. As of November 2009, 62 drifters were active in the Indian Ocean north of 40°S, and only ten drifters were in the North Indian Ocean. A problem in the Indian Ocean is that the strong Asian summer monsoon winds drive drifters out of the North Indian Ocean. Considering that the drifters follow the flow, the possible options for keeping the required number of drifters are (1) to readdress the criteria, (2) to implement a different measuring platform in these regions, or (3) to adopt a more frequent seeding programme to maintain the 5-degree sampling. The design sampling density is intended to support calibration of satellite SST and to build a surface current climatology. To our knowledge, the sampling density required to map surface currents at, say, monthly time scales has not been determined, but it should be, in order to validate surface currents in models and reanalyses.
3.3.5 Data Management

The data portal for the Indian Ocean data collected in support of IndOOS is available at http://www.incois.gov.in/Incois/iogoos/home_indoos.jsp, and relies on a distributed network of data archives. The main idea is to provide a one-stop shop for Indian Ocean-related data and data products. The core of the system is a web portal maintained at INCOIS, providing direct binary access to the data via OPeNDAP and ftp protocols. Web-based browsing and data discovery are handled through custom-designed web tools currently available on servers such as the Live Access Server (LAS). The distributed data archives are maintained by the individual groups at their institutes and made available to the community via the web portal. The portal contains data from basin-scale observations using mooring arrays, Argo profiling floats, expendable bathythermographs (XBT), surface drifters and tide gauges, as well as data from regional/coastal observing systems (ROOS) that observe boundary currents off Africa (WBC), in the Arabian Sea (ASEA) and the Bay of Bengal (BOB), the Indonesian Throughflow (ITF), the boundary currents off Australia (EBC) and the deep equatorial currents. Satellite-derived gridded data sets such as sea surface temperature (TMI), sea surface winds (QuikSCAT) and sea surface height anomaly (merged altimeter products) are also available.
The agencies contributing to the IndOOS are committed to following the CLIVAR data policy (http://www.clivar.org/data/data_policy.htm). IndOOS provides the backbone for a number of planned process studies associated with international programs such as Vasco-Cirene, MISMO (Mirai Indian Ocean Cruise for the Study of the MJO-Convection Onset), TRIO (Thermocline Ridge of the Indian Ocean), CINDY2011 (Cooperative Indian Ocean experiment on intraseasonal variability in the Year 2011), DYNAMO (Dynamics of the Madden-Julian Oscillation), and the Year of Tropical Convection. IndOOS also supports the various regional observing systems around the Indian Ocean, which in turn contribute to IndOOS. Incorporation of observations of bio-geochemical parameters will be a necessary step forward to enhance interdisciplinary research in the Indian Ocean sector. Measuring the surface salinity distribution from a new satellite is another significant challenge in the Indian Ocean, where the large salinity contrast between the Arabian Sea and the Bay of Bengal plays an important role in the climate system of the surrounding regions.
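Because the portal exposes its holdings through OPeNDAP, gridded products can typically be read directly into analysis software without downloading complete files. The sketch below illustrates the access pattern only; the URL, dataset and variable names are hypothetical placeholders, not actual paths on the INCOIS servers.

```python
import xarray as xr

# Hypothetical OPeNDAP endpoint and variable name: placeholders only, not real INCOIS paths.
url = "http://las.example.incois.gov.in/thredds/dodsC/indoos/tropical_sst"
ds = xr.open_dataset(url)                       # lazy open over the network via OPeNDAP
# Assumes latitude is stored south-to-north; reverse the slice otherwise.
box = ds["sst"].sel(lat=slice(-10, 10), lon=slice(50, 100))
index = box.mean(dim=("lat", "lon"))            # basin-average SST time series
index.to_netcdf("indoos_sst_index.nc")
```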
3.4 Summary

Analysis and interpretation of ocean observational data require knowledge of the different time and space scales that characterize each process: from turbulent eddies with durations of a few seconds and spatial scales of centimeters (high frequency) to wind-forced and thermodynamically driven ocean currents with time scales of days to centuries and spatial scales of tens to thousands of kilometers (low frequency). The exchange of momentum, heat, salt and other tracers within the ocean and across the air-sea interface occurs on these space and time scales. The observing system should resolve time scales from seconds to decades by measuring continuously, with as few gaps as possible, and should sample the relevant spatial scales as closely as possible. The observing platforms have to withstand disturbed or extreme weather events and continue to provide good quality data. These observational data also need to be available to operational agencies in real time. Research advances and paradigm shifts in oceanography have often been stimulated by observations of various processes; in particular, theories and models have typically been developed to explain, quantify, incorporate, or parameterize these processes using balance equations. Examples of this chronology include Ekman transport, western intensification of boundary currents, and seasonal phytoplankton blooms. Limitations of oceanographic data persist in terms of raw numbers and diversity of variables. This is not surprising considering the spatial scale of the oceanic setting and the time scales of interest, both of which can span over ten orders of magnitude, and the complexity of the biology, chemistry, and physics of the oceans. Progress in addressing ocean sampling deficiencies using interdisciplinary, multiplatform sampling is reviewed by Dickey (2003). Ocean models have become increasingly useful as new processes have been incorporated or parameterized in formulations, numerical techniques have been improved, and
more powerful computing capabilities have allowed increasing spatial and temporal resolution and range as well as greater numbers of variables and balance equations. Interestingly, as more data have been collected, analyzed, and interpreted, and as computer model simulations have become more realistic, observationalists and modelers have become more cognizant of and dependent upon each other for making scientific advances. The culmination of this cultural change is epitomized by two methodologies: inverse methods and data assimilation (Dickey 2003). Though platforms represent the backbone of the observational component of any data assimilation system, integrated multiplatform approaches have been adopted by several major oceanographic process and time series studies in order to take advantage of the specific sampling capabilities of individual platforms, which generally can carry a variety of interdisciplinary sensors and systems and often have telemetry capabilities. The strengths and weaknesses of each platform are shown in Table 3.1. In order to maximize the value of the observing system as a whole, it is critical for a set of core variables to be selected, including some that are common to repeat hydrography and the autonomous instruments (moorings, floats, and gliders). If deep ocean floats are developed and deployed in Argo, then validation and correction, requiring repeat hydrography, will be needed for those instruments as well as for the upper ocean ones. In the event that a deep float array is not deployed, the observation of deep ocean changes in heat, freshwater, and steric sea level would rest entirely with the repeat hydrography program. Because there are gaps of scale 5,000 km in the planned global sampling, errors in the estimation of global integrals would be substantial. Studies are needed to assess the likely errors in global ocean heat content with and without a deep float array (Roemmich et al. 2009; Hood et al. 2009). The individual networks of the present Sustained Ocean Observing System for Climate, including tropical moorings, XBTs, surface drifters, ship-based meteorology, tide gauges, Argo floats, repeat hydrography, and satellite observations, have developed largely independently of one another. Progress will now come from integration across the networks, since the next big observational challenges (boundary currents, ice zones, the deep ocean, biological impacts of climate, and the global cycles of heat, freshwater, and carbon) demand multiplatform approaches, and because exploiting the value of ocean observations is intrinsically an activity of integration and synthesis (Roemmich et al. 2009). Also discussed there are the synergies of the observing system on which improved integration will be built, the key infrastructure that will underpin an integrated global observing system, and potential developments and improvements in the different in-situ platform networks. The time is now appropriate to consider integrating these observational programs, with a view to facilitating the effective direct application of the knowledge and predictive capacities that are and will continue to be gained from these studies (Masumoto et al. 2009). The largest gains over the present global observing system will come from expanding the sampling domains of autonomous platforms, from the addition of multi-disciplinary measurements, and from integrating developments in data quality, coverage, and delivery.
Table 3.1 Strengths and weaknesses of different in-situ platforms

Tide gauges
  Strengths: long-term measurement; simple technology; easy to maintain
  Weaknesses: only one parameter; along the coast only

VOS
  Strengths: surface marine meteorological parameters; high resolution along repeat tracks; sampling at remote oceanic regions
  Weaknesses: tracks not always where data are required; do not stop; no sub-surface data

SOOP
  Strengths: temperature profiles (to 760 m) and surface salinity; high resolution along repeat tracks enabling spatial time series; sampling at remote oceanic regions; deployment of autonomous sampling platforms
  Weaknesses: tracks not always where data are required; do not stop

Repeat hydrography/research vessel
  Strengths: deployment of sophisticated/heavy instruments; time series measurements of many parameters (physical, chemical, biological, geological, ...); reach remote areas with high resolution along the repeat tracks
  Weaknesses: inability to produce synoptic data sets; very sparse sampling; expensive

Acoustic tomography
  Strengths: measuring and understanding the behavior of mesoscale and large-scale features associated with the general circulation; space-time variability
  Weaknesses: temperature, heat content and other variables are interpreted with the technique, not measured directly; biofouling

Surface drifters
  Strengths: horizontal spatial domain from meters to the basin scale; data from remote regions; global coverage; rapid sampling in time; low cost; robust technology
  Weaknesses: data storage volume for telemetry; measurements only at the surface; avoid some regions; limited variables

Moorings
  Strengths: interdisciplinary time series data to measure changes in the ocean on time scales from minutes to years; coastal and open ocean; sampling at multiple depths; operation in harsh environments; real-time data availability
  Weaknesses: cannot provide horizontal spatial information; mixed temporal-spatial variability is measured, so partitioning of local versus advective effects requires complementary spatial data sets; vandalism; biofouling

Floats
  Strengths: horizontal spatial domain from meters to the basin scale; data from remote regions; sub-surface information; rapid sampling in time; robust technology; low cost and hence large numbers feasible
  Weaknesses: profiling frequency; data storage volume for telemetry; coarse x, y, t resolution; limited variables; avoid some regions

Gliders
  Strengths: good sampling along tracks; free choice of track; can be steered; different sensor suites feasible
  Weaknesses: very slow speed; limited depth range and variables; expensive

HF radar
  Strengths: good x, y, t resolution near the coast; land based
  Weaknesses: limited variables and places; limited coverage; only surface currents and waves
Acknowledgements  Most of the work described above is based on the Community White Papers prepared for OceanObs'09. The excellent compilations by their many authors and organizations are acknowledged with the highest appreciation; this chapter would have been difficult to write without these White Papers. The encouragement and the facilities provided by the Director, INCOIS, are acknowledged. Wee Cheah (University of Tasmania), Sebastiaan Swart (University of Cape Town) and an anonymous reviewer are also thanked for critically reading the manuscript and helping to improve it.
References Balmaseda M, Anderson D (2009) Impact of initialization strategies and observations on seasonal forecast skill. Geophys Res Lett 36, L01701. doi:10.1029/2008GL035561 Barrick DE, Lipa BJ, Crissman RD (1985) Mapping surface currents with CODAR. Sea Technol 26(10):43–47 Cai W, Hendon H, Meyers G (2005) Indian Ocean dipole-like variability in the CSIRO Mark 3 coupled climate model. J Climate 18:1449–1468 Castelao R, Glenn S, Schofield O, Chant R, Wilkin J, Kohut J (2008) Seasonal evolution of hydrographic fields in the central Middle Atlantic Bight from glider observations. Geophys Res Lett 35, L03617. doi:10.1029/2007GL032335 Cazenave A, Dominh K, Guinehut S (2009) Sea level budget over 2003–2008: a reevaluation from GRACE space gravimetry, satellite altimetry and Argo. Glob Planet Change 65(1–2):83–88 Chang P, Ji L, Saravanan R (2001) A hybrid coupled model study of tropical Atlantic variability. J Climate 14:361–390 Church JA, White NJ, Coleman R, Lambeck K, Mitrovica JX (2004) Estimates of the regional distribution of sea level rise over the 1950 to 2000 period. J Climate 17:2609–2625 Clarke AJ, Van Gorder S (2003) Improving El Niño prediction using a space-time integration of Indo-Pacific winds and equatorial Pacific upper ocean heat content. Geophys Res Lett 30(7):1399. doi:10.1029/2002GL016673 Clark C, Wilson S (2009) An overview of global observing systems relevant to GODAE. Oceanography 22(3):22–33 Davis R, Eriksen C, Jones C (2002) Autonomous buoyancy-driven underwater gliders. In: Griffiths G (ed) The technology and applications of autonomous underwater vehicles. Taylor and Francis, London Davis R, Ohman MD, Rudnick DL, Sherman J, Hodges B (2008) Glider surveillance of physics and biology in the southern California current system. Limnol Oceanogr 53(5, Part 2):2151–2168 de GrootHedlin CD (2005) Estimation of the rupture length and velocity of the Great Sumatra earthquake of December 26, 2004 using hydroacoustic signals. Geophys Res Lett 32, L11303. doi:10.1029/2005GL022695
Dickey TD (2003) Emerging ocean observations for interdisciplinary data assimilation systems. J Marine Syst 40–41:5–48 Dohan K et€ al (2009) Measuring the global ocean surface circulation with satellite and in-situ observations. Community White Paper, Oceanobs’09 Dushaw BD et€al (2001) Observing the ocean in the 2000’s: a strategy for the role of acoustic tomography in ocean climate observation. In: Koblinsky CJ, Smith NR (eds) Observing the oceans in the 21st century. GODAE Project Office and Bureau of Meteorology, Melbourne, pp€391–418 Dushaw BD (2003) Acoustic thermometry in the North Pacific, CLIVAR Exchanges No. 26, March 2003. International CLIVAR Project Office, Southampton, UK Dushaw B et€ al (2009) A global ocean acoustic observing network. Community White Paper, OceanObs’09 Eriksen CC, Rhines PB (2008) Convective to gyre-scale dynamics: seaglider campaigns in the Labrador Sea 2003–2005. In: Dickson R, Meincke J, Rhines P (eds) Arctic-subarctic ocean fluxes: defining the role of the northern seas in climate. Springer, Dordrecht (Chapter€25) Feng M, Meyers G (2003) Interannual variability in the tropical Indian Ocean: a two-year timescale of Indian Ocean Dipole. Deep Sea Res Part II: Top Stud Oceanogr 50:2263–2284 Freeland H et€al (2009) Argo—a decade of progress. Community White Paper, OceanObs’09 Glenn S, Jones C, Twardowski M, Bowers L, Kerfoot J, Kohut J, Webb D, Schofield O (2008) Glider observations of sediment resuspension in a Middle Atlantic Bight fall transition storm. Limnol Oceanogr 53(5, Part 2):2180–2196 Goni G et€al (2009) The ship of opportunity program. Community White Paper, OceanObs’09 Gopalakrishnan G (2008) Surface current observations using high frequency radar and its assimilation into the New York harbor observing and prediction system. PhD Thesis, Stevens Institute of Technology, Castle Point on the Hudson, Hoboken, NJ 07030 Gulev SK, Jung T, Ruprecht E (2007) Estimation of the impact of sampling errors in the VOS observations on air-sea fluxes. Parts: I and II. J Climate 20:279–301, 302–315 Gurgel KW, Essen HH, Kingsley SP (1999) High-frequency radars: physical limitations and recent developments. Coast Eng 37(3–4):201–218 Hase H, Masumoto Y, Kuroda Y, Mizuno K (2008) Semiannual variability in temperature and salinity observed by Triangle Trans-Ocean Buoy Network (TRITON) buoys in the eastern tropical Indian Ocean. J Geophys Res 113, C01016. doi:10.1029/2006JC004026 Hood M et€al (2009) Ship-based repeat hydrography: a strategy for a sustained global program. Community White Paper, OceanObs’09 Horii T, Hase H, Ueki I, Masumoto Y (2008) Oceanic precondition and evolution of the 2006 Indian Ocean dipole. Geophys Res Lett 35, L03607. doi:10.1029/2007GL032464 IOC, Manual on Sea Level Measurement and Interpretation (2006) Volume IV: an update to 2006, JCOMM technical report No. 31, WMO/TD. No. 1339 International CLIVAR Project Office (2006) Understanding the role of the Indian Ocean in the climate system—implementation plan for sustained observations. ICPO Publication Series 100; GOOS report No. 152; WCRP informal report No. 5/2006, International CLIVAR Project Office, South Hampton, UK, p€60, 30 figures Kent EC, Berry DI (2008) Assessment of the Marine Observing System (ASMOS): final report, NOCS research and consultancy report No. 32, p€55 (available electronically from the authors) Kent E et€ al (2009) The Voluntary Observing Ship (VOS) scheme. Community White Paper, OceanObs’09 Koblinsky C, Smith N (eds) (2001) Ocean observations for the 21st century. 
GODAE Office/BoM, Melbourne Kohut J, Roarty H, Licthenwalner S, Glenn S, Barrick D, Lipa B, Allen A (2008) Surface current and wave validation of a nested regional HF radar Network in the Mid-Atlantic Bight, Current Measurement Technology (CMTC). Proceedings of the IEEE/OES 9th working conference on 17–19 March 2008, pp 203–207. doi:10.1109/CCM.2008.4480868 Lagerloef GSE, Lukas R, Bonjean F, Gunn JT, Mitchum GT, Bourassa M, Busalacchi AJ (2003) El Niño Tropical Pacific Ocean surface current and temperature evolution in 2002 and outlook for early 2003. Geophys Res Lett 30(10):1514. doi:10.1029/2003GL017096
Latif M, Anderson D, Barnett T, Cane M, Kleeman R, Leetmaa A, O’Brien J, Rosati A, Schneider E (1998) A review of the predictability and prediction of ENSO. J Geophys Res 103:14375– 14394 Leuliette EW, Miller L (2009) Closing the sea level rise budget with altimetry, argo, and GRACE. Geophys Res Lett 36, L04608 Masumoto Y, Hase H, Kuroda Y, Matsuura H, Takeuchi K (2005) Intraseasonal variability in the upper layer currents observed in the eastern equatorial Indian Ocean. Geophys Res Lett 32, L02607. doi:10.1029/2004GL021896 Masumoto Y et€ al (2009) Observing systems in the Indian Ocean. Community White Paper, OceanObs’09 Matthews A, Singhruck P, Heywood K (2007) Deep ocean impact of a Madden-Julian oscillation observed by argo floats. Science 318(5857):1765–1769 McPhaden MJ (2004) Evolution of the 2002/03 El Nino. Bull Am Meteorol Soc 85(5):677–695 McPhaden MJ (2008) Evolution of the 2006–07 El Niño: the role of intraseasonal to interannual time scale dynamics. Adv Geosci 14:219–230 McPhaden MJ, Zhang X, Hendon HH, Wheeler MC (2006) Large scale dynamics and MJO forcing of ENSO variability. Geophys Res Lett 33(16), L16702. doi:10.1029/2006GL026786 McPhaden MJ et€ al (2009a) The global tropical moored buoy array. Community White Paper, Oceanobs’09 McPhaden MJ et€al (2009b) RAMA: the research moored array for African-Asian-Australian monsoon analysis and prediction. Bull Am Meteorol Soc 90:459–480 McPhaden MJ, Foltz GR, Lee T, Murty VSN, Ravichandran M, Vecchi GA, Vialard J, Wiggert JD, Yu L (2009c) Ocean-atmosphere interactions during cyclone Nargis. EOS 90:53–54 Maximenko NA, Melnichenko OV, Niiler PP, Sasaki H (2008) Stationary mesoscale jet-like features in the ocean. Geophys Res Lett 35, L08603. doi:10.1029/2008GL033267 Merrifield M et€ al (2009) The global sea level observing system (GLOSS). Community White paper, Oceanobs’09 Meyers G (1996) Variation of Indonesian throughflow and the El Niño—southern oscillation. J Geophys Res 101:12255–12263 Meyers G, Bailey R, Worby T (1995) Volume transport of Indonesian throughflow. Deep Sea Res Part I Oceanogr Res Pap 42:1163–1174 Meyers G, Boscolo R (2006) The Indian Ocean Observing System (IndOOS). CLIVAR Exch 11(4):2–3, International CLIVAR Project Office, Southampton, UK Munk W, Worcester P, Wunsch C (1995) Ocean acoustic tomography. Cambridge University Press, Cambridge. ISBN 0-521-47095-1 Murty VSN, Sarma MSS, Suryanarayana A, Sengupta D, Unnikrishnan AS, Fernando V, Almeida A, Khalap S, Sardar A, Somasundar K, Ravichandran M (2006) Indian moorings: deep-sea current meter moorings in the eastern equatorial Indian Ocean. CLIVAR Exch 11(4):5–8, International CLIVAR Project Office, Southampton, UK Nagura M, McPhaden MJ (2008) The dynamics of zonal current variations in the central equatorial Indian Ocean. Geophys Res Lett 35, K23603. doi:10.1029/2008GL035961 Nevala A (2005) A glide across the Gulf Stream. WHOI Oceanus, March 2005 Niiler PP, Maximenko NA, McWilliams JC (2003) Dynamically balanced absolute sea level of the global ocean derived from near-surface velocity observations. Geophys Res Lett 30(22): 2164–2167 NSF (2001) Ocean sciences at the new millennium. National Science Foundation, Arington Ogata T, Sasaki H, Murty VSN, Sarma MSS, Masumoto Y (2008) Intraseasonal meridional current variability in the eastern equatorial Indian Ocean. J Geophys Res 113, C07037. doi:10.1029/2007JC004331 Oke PR, Schiller A (2007) A model-based assessment and design of a tropical Indian Ocean mooring array. 
J Climate 20:3269 Perry MJ, Sackman BS, Eriksen CC, Lee CM (2008) Seaglider observations of blooms and subsurface chlorophyll maxima off the Washington coast. Limnol Oceanogr 53(5, Part 2): 2169–2179
Picaut J, Hackert E, Busalacchi AJ, Murtugudde R, Lagerloef GSE (2002) Mechanisms of the 1997–1998 El Nino-La Nina, as inferred from space-based observations. J Geophys Res 107(C5). doi:10.1029/2001JC000850 Testor P et€ al (2009) Gliders as a component of future observing systems. Community White Paper, OceanObs’09 Rao SA, Behera SK, Masumoto Y, Yamagata T (2002) Interannual variability in the subsurface tropical Indian Ocean with a special emphasis on the Indian Ocean dipole. Deep-Sea Res Part 2: Top Stud Oceanogr 49:1549–1572 Reynolds RW, Smith TM, Liu C, Chelton DB, Casey KS, Schlax MG (2005) Daily high-resolution-blended analyses for sea surface temperature. J Climate 20:5473–5496 Rio MH, Hernandez F (2003) High-frequency response of wind-driven currents measured by drifting buoys and altimetry over the world ocean. J Geophys Res 108(C8):3283–3301 Riser SC, Nystuen J, Rogers A (2008) Monsoon effects in the Bay of Bengal inferred from profiling float-based measurements of wind speed and rainfall. Limnol Oceanogr 53(5): 2080–2093 Roemmich D et€al (2009) Integrating the ocean observing system: mobile platforms. Community White Paper, OceanObs’09 Send U et€al (2009) A global boundary current circulation observing network. Community White Paper, OceanObs’09 Shenoi SSC, Saji PK, Almeida AM (1999) Near surface circulation and kinetic energy in the tropical Indian Ocean derived from lagrangian drifters. J Mar Res 57:885–907 Sengupta D, Bharath Raj GN, Shenoi SSC (2006) Surface freshwater from Bay of Bengal runoff and Indonesian throughflow in the tropical Indian Ocean. Geophys Res Lett 33, L22609. doi:10.1029/2006GL027573 1999 Sengupta D, Senan R, Murty VSN, Fernando V (2004) A biweekly mode in the equatorial Indian Ocean. J Geophys Res 109, C10003. doi:10.1029/2004JC002329 Sybrandy et€ al (2009) Global drifter programme: barometer drifter design and refrence. DBCP report No. 4, Revision 2.2. Data Buoy Cooperation Panel Terrill E, Otero M, Hazard L, Conlee D, Harlan J, Kohut J, Reuter P, Cook T, Harris T, Lindquist K (2006) Data management and real-time distribution for HF Radar national network. MTS/ IEEE Oceans 2006, Boston, Paper 060331-220 Trenberth K et€al (2009) Atmospheric reanalyses: a major resource for ocean product development and modeling. Community White Paper, OceanObs’09 Udaya Bhaskar TVS, Rahman SH, Pavan ID, Ravichandran M, Nayak S (2009) Comparison of AMSR-E and TMI sea surface temperature with Argo near-surface temperature over the Indian Ocean. Int J Remote Sens 30(10):2669–2684 Vecchi GA, Harrison MJ (2007) An observing system simulation experiment for the Indian Ocean. J Climate 20:3300–3319 Vialard J, Foltz G, McPhaden M, Duvel J-P, de Boyer Montégut C (2008) Strong Indian Ocean sea surface temperature signals associated with the Madden-Julian oscillation in late 2007 and early 2008. Geophys Res Lett 35, L19608. doi:10.1029/2008GL035238 Wijffels SE, Meyers G, Godfrey JS (2008) A twenty year average of the Indonesian throughflow: regional currents and the inter-basin exchange. J Phys Oceanogr 38(8):1–14 Willis J, Chambers D, Nerem R (2008) Assessing the globally-averaged sea level budget on seasonal to interannual time scales. J Geophys Res 113, C06015. doi:10.1029/2007JC004517 Wong APS, Owens WB (2009) An improved calibration method for the drift of the conductivity sensor on autonomous CTD profiling floats by Θ-S climatology. Deep Sea Res Part I: Oceanogr Res Pap 56:450–457. 
doi:10.1016/j.dsr.2008.09.008 Woodworth P, Player R (2003) The permanent service for mean sea level: an update to the 21st century. J Coastal Res 19(2):287–285 Wunsch C, Ponte RM, Heimbach P (2007) Decadal trends in sea level patterns: 1993–2004. J Clim 20:5889–5911
Chapter 4
Ocean Data Quality Control

James A. Cummings
Abstract  Automated ocean data quality control procedures are presented. The procedures are logically grouped into four stages of processing which, when taken together, form a complete sensor-to-prediction quality control system. The main features of the different ocean observing systems assimilated by GODAE are presented, along with the sources and types of errors that can occur in the data. Specific quality control procedures are described that test for these errors, as well as more general procedures that estimate the consistency of the data across observing systems. Performance of the external data checks in the U.S. Navy real-time ocean data quality control system is described. Finally, the importance of real-time ocean data quality control as an observing system monitoring tool is emphasized, and some specific examples are given of new quality control techniques developed in numerical weather prediction that have direct applicability in ocean data assimilation and forecasting systems.
4.1 Introduction

Observation data quality control is a fundamental requirement of GODAE ocean data assimilation systems. Using or accepting erroneous data can cause an invalid conclusion to be made or an incorrect analysis. Alternatively, rejecting extreme, but valid, data can miss the detection of important events. The goal of quality control, therefore, is to reduce or eliminate making the wrong decisions. Quality control must correctly identify observations that are obviously in error, as well as carry out the more difficult process of identifying measurements that fall within valid and reasonable ranges but nevertheless are erroneous. It is likely that decisions made at the quality control step affect the success or failure of the entire analysis/forecast system. Ocean data quality control is best performed in stages. The first stage consists of a series of preliminary data sensibility checks. Observations failing any one of
[email protected] A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_4, ©Â€Springer Science+Business Media B.V. 2011
91
92
J. A. Cummings
these tests are considered to have gross errors and are removed from further consideration. The second stage is based on a complex quality control procedure where observations are subjected to a series of tests. Observations are not rejected immediately after failing any one of the quality control tests; rather, the final quality control decision is based on simultaneous consideration of the results from all of the tests. This process uses a decision-making algorithm where the ultimate fate of the observation in the analysis/forecast system is decided (accept, reject, schedule for manual intervention). The outcome of the decision-making algorithm represents the likelihood that an observation contains a random error. The third stage of ocean data quality control is performed by the analysis system itself. At this point, the gross and random error characteristics of the observations have been determined, and observations considered acceptable for the analysis have been selected. The third stage of the quality control is designed to protect the analysis from marginally acceptable data that have, for unknown reasons, passed the earlier stages of the quality control. A final, fourth stage of the quality control is done after the analysis and the forecast have been completed, as part of a system that performs routine assessment of the impact of assimilating observations on the reduction of model forecast error. Figure 4.1 illustrates these different stages of ocean data quality control and the flow of information through the fully automated, real-time system operated by the U.S. Navy. In this paper, various approaches, procedures, and algorithms used to quality control ocean observations are described. The emphasis is on real-time, fully automated ocean data quality control. It is beyond the scope of this paper to discuss the wide variety of delayed-mode or manual intervention quality control efforts that have been implemented (e.g., Boyer and Levitus 1994). The paper is organized as follows. Section 2 describes the real-time ocean observing systems, and Sect. 3 gives the stand-alone, gross-error quality control procedures that are applied to ocean observations. Section 4 provides descriptions of external quality control data checks and brief overviews of specific sources of error in ocean observing systems. Section 5 describes how the various independent external quality control data checks can be combined in a quality control decision-making algorithm and gives some performance results from the U.S. Navy real-time ocean data quality control system. Section 6 outlines the internal consistency checks that are used in the assimilation system itself, while Sect. 7 describes some possible quality control outcomes from the data impact system. Finally, Sect. 8 provides a summary and gives some conclusions on the interactions between ocean data quality control and observation monitoring.
4.2 Ocean Observing Systems

A wide variety of observation data types are used in GODAE assimilation systems. The data include both in situ and remotely sensed measurements from space. As will be discussed, each observing system has its own unique data issues and
[Fig. 4.1 flow chart: ocean observations (satellite SST from NOAA and METOP GAC/LAC, GOES, MSG, AATSR and AMSR-E plus ship/buoy reports; temperature and salinity profiles from XBTs, CTDs, Argo floats and fixed/drifting buoys; altimeter SSH from Jason-1, Jason-2 and ENVISAT; SSM/I and SSMIS sea ice; Slocum, Sea-Glider and Spray glider CTDs) pass through the stage 1 and stage 2 ocean data QC checks; innovations feed the 3DVAR analysis (stage 3 internal checks), whose increments initialize the HYCOM forecast; forecast fields and prediction errors are fed back as the first guess for the next QC cycle, and stage 4 adjoint sensitivities support data impact assessment and adaptive sampling.]
Fig. 4.1 Chart showing flow of ocean observations through the different stages of ocean data quality control in the U.S. Navy global HYCOM system. Stage 1 sensibility error checks are performed on the raw data; stage 2 external data checks are performed in the fully automated ocean data QC module; stage 3 internal data checks are performed by the iterative solver in the variational assimilation; and stage 4 adjoint sensitivity calculations are done after the forecast before the next QC data cut (see text for details). Note feedback of the HYCOM forecast model fields and prediction errors into the ocean data QC for use as background fields in the next execution of the stage 2 external data error checks.
quality control requirements. Sources of operational ocean data are described in this section. Most GODAE assimilation centers receive in situ ocean observations over the Global Telecommunication System (GTS). At the current time, data transmitted via the GTS are coded in specific data type formats which use, at most, two decimal places for measurements of temperature and salinity. Further, the existing formats do not allow for additional information about the data in the form of quality flags. However, observational data on the GTS are moving to a new binary format based on BUFR (Binary Universal Form for the Representation of data—a data format maintained by the World Meteorological Organization). In this format, data values can be transmitted with more precision than the existing text-based data formats, and local tables can be added to the message that contain value added or quality assurance information from the data provider. The move to BUFR on the GTS is a long process, scheduled to be completed for all ocean data types in 2016. In addition
to the GTS, Argo float data are also available at two global data assembly centers (GDAC): one in the United States at the Naval Research Laboratory, Monterey, California; and the second in France at the Coriolis Data Center, Brest. There is no standard way satellite oceanographic observations are distributed to GODAE assimilation systems. In some cases, there is a dedicated push from the data provider to the center. In other cases, data are placed on dedicated servers where the observations are then pulled by the center. For example, the GODAE High Resolution SST pilot project has been instrumental in setting up data servers where satellite SST data providers transmit their SST retrievals in near-real-time in a common format (Donlon et€al. 2007). This effort has made the availability of SST data from a wide variety of satellite systems commonplace.
4.2.1 Ship and Buoy Sea Surface Temperature and Salinity

Ship sea surface temperature (SST) observation data types are identified as engine room intake, hull contact sensor, or bucket temperatures, based on type codes contained in the ship reports received over the GTS. Buoy SST data types are also received from the GTS and consist of fixed and drifting buoy reports. Observing systems that report both in situ SST and sea surface salinity (SSS) include thermo-salinograph observations from ships, which are sent over the GTS in TRAKOB reports.
4.2.2 Satellite Sea Surface Temperature

There are many sources of satellite SST observations. Infrared satellite sensors include the NOAA and METOP Advanced Very High Resolution Radiometer (AVHRR) polar orbiter 4-km resolution global area coverage (GAC) and 1-km resolution local area coverage (LAC) data. Note that METOP LAC retrievals are global, while NOAA LAC retrievals are restricted to certain coastal areas, mainly in the northern hemisphere. The Geostationary Operational Environmental Satellite (GOES) infrared data have a resolution of 4 km and are available from the GOES-10, GOES-11, and GOES-12 satellites. The Meteosat Second Generation (MSG) is also a geostationary infrared satellite that provides 4-km resolution data centered over Europe. The Advanced Microwave Scanning Radiometer for EOS (AMSR-E) on board the NASA Aqua satellite provides global coverage of 25-km resolution microwave SST. The Advanced Along-Track Scanning Radiometer (AATSR) instrument on board the European ENVISAT satellite provides the first routine measurements of a true skin SST at 1-km resolution. Typically, radiance data in adjacent pixels are averaged into 2 × 2 (NOAA and METOP AVHRR) or 3 × 3 (GOES, MSG) bins before making the SST retrieval in order to reduce
satellite sensor noise and produce a more accurate SST. This process necessarily reduces the resolution of the sensor data listed above to 2-km LAC, 8-km GAC, and 12-km geostationary retrievals. SST retrievals from the NOAA, METOP, and GOES satellites are available from the Naval Oceanographic Office (May and Osterman 1998; May et al. 1998). SST retrievals from AATSR (Corlett et al. 2006) and MSG (Merchant et al. 2008, 2009) are available from Meteo-France and obtained from the GHRSST data server at the Jet Propulsion Laboratory, USA (Donlon et al. 2007).
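The noise-reduction rationale for the 2 × 2 and 3 × 3 pixel binning can be illustrated with a short simulation: averaging N pixels with independent, zero-mean sensor noise reduces the noise standard deviation by a factor of √N. The 0.3 K single-pixel noise level below is an illustrative assumption, not a figure from the text.

```python
# Averaging N adjacent pixels reduces independent sensor noise by 1/sqrt(N):
# a 2 x 2 bin halves the noise, a 3 x 3 bin reduces it by a factor of three.
import numpy as np

rng = np.random.default_rng(1)
single_pixel_noise = 0.3                                   # illustrative noise level (K)
pixels = rng.normal(0.0, single_pixel_noise, size=(100_000, 3, 3))

noise_2x2 = pixels[:, :2, :2].mean(axis=(1, 2)).std()
noise_3x3 = pixels.mean(axis=(1, 2)).std()
print(round(noise_2x2, 3), round(noise_3x3, 3))            # ~0.15 and ~0.10
```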
4.2.3 Sea Ice Concentration

The Special Sensor Microwave Imager (SSM/I) and the Special Sensor Microwave Imager/Sounder (SSMIS) on board the Defense Meteorological Satellite Program (DMSP) series of satellites provide routine observations of sea ice concentration at approximately 25 km resolution. At the present time there are 3 SSM/I (F11, F13, F15) and 3 SSMIS (F16, F17, F18) satellites providing sea ice data.
4.2.4 Temperature and Salinity Profiles

Profile observations are reported from both fixed and moving platforms. Most profile observations report only temperature, such as expendable bathythermographs (XBT) and some fixed buoys, but profiling floats (Argo), conductivity-temperature-depth (CTD) sensors, and an increasing number of fixed buoys report both temperature and salinity. Profile observations are also available from gliders that measure temperature and salinity at varying depths and positions along a dive. A glider dive consists of a descending and an ascending profile in which the latitude, longitude positions and times of the observations change with depth. The presence of temperature and salinity in some reports and the unique sampling characteristics of ocean gliders present new challenges to the quality control of ocean profile data. There are common approaches to the quality control of the various profile observing systems, but there are also unique instrumentation-specific error checks that are performed.
4.2.5 Altimeter Sea Surface Height

At the present time, sea surface height anomaly (SSHA) observations are available from satellite radar altimeters on board the Jason-1, Jason-2, and ENVISAT satellites. Historically, satellite altimeters have been flown on Topex/Poseidon, Geosat
and Geosat Follow-on, and the European Remote Sensing series of satellites (ERS-1 and ERS-2).
4.2.6 Altimeter and Buoy Significant Wave Height

Each satellite radar altimeter also provides significant wave height (SWH) and wind speed observations. SWH observations are also available from many fixed buoy locations, mainly in the northern hemisphere. SWH observations are assimilated into wave models.
4.3 Preliminary Data Sensibility Checks

Several preliminary data sensibility error checks are performed prior to the quality control of the observed values. Observations failing any one of these preliminary data checks are considered to have gross errors and are discarded or flagged for rejection. In some cases the preliminary data checks are performed by the data provider and the observations are simply not distributed. The preliminary data checks and the logic for accepting/rejecting observations at this stage of the quality control process are described below.
4.3.1 Land/Sea/Fresh Water Test

All observation locations are checked against a global, high-resolution land/sea database. Observation locations surrounded by water in all directions are accepted, and observation locations surrounded by land in all directions are discarded. For observations very near the coast a fuzzy land/sea boundary check is used. Observation locations are accepted if, in any one direction, the otherwise over-land location is next to a water point. Relaxation of the land/sea discrimination allows for resolution errors of the land/sea database and precision errors in the reporting of observation latitude and longitude positions over the GTS. Fuzzy land/sea tests are useful when ships provide observations while parked at a dock, or when instruments are deployed on piers very close to the coast. Note that the land/sea test must also distinguish between freshwater and seawater locations. Lake surface temperatures are routinely provided in remotely sensed satellite data streams, and in situ fixed buoys are located in many large lake systems, such as the Great Lakes in the U.S. Remotely sensed and in situ lake surface temperatures have unique error characteristics and need to be distinguished as such in the quality control. Remotely sensed lake surface temperatures are routinely used in analyses of lower boundary conditions in Numerical Weather Prediction (NWP) systems.
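A minimal sketch of the fuzzy land/sea boundary test described above, assuming a regular-grid boolean water mask (True = water); the grid resolution, function name and indexing scheme are illustrative, not the operational database.

```python
import numpy as np

def passes_landsea_check(lat, lon, water_mask, res=0.1):
    """Accept the location if its grid cell is water, or (fuzzy check) if any of
    the eight neighbouring cells is water. water_mask is a 2-D boolean array
    dimensioned (180/res, 360/res), True where there is water."""
    nlat, nlon = water_mask.shape
    i = min(max(int((lat + 90.0) / res), 0), nlat - 1)   # latitude index, clipped at the poles
    j = int((lon + 180.0) / res) % nlon                   # longitude index, wraps at the dateline
    if water_mask[i, j]:
        return True
    for di in (-1, 0, 1):                                 # relaxed (fuzzy) boundary check
        for dj in (-1, 0, 1):
            ii = min(max(i + di, 0), nlat - 1)
            if water_mask[ii, (j + dj) % nlon]:
                return True
    return False

mask = np.ones((1800, 3600), dtype=bool)                  # toy all-water mask at 0.1 degree
print(passes_landsea_check(-12.47, 130.85, mask))         # True
```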
4.3.2 Location/Speed Test

A location (speed) test is used to determine if the reported position of an observation is consistent with prior positions from the same platform. The test is necessarily restricted to data types that report unique call signs, such as Argo floats, surface ships, aircraft, and fixed and drifting buoys. Observations failing the location/speed test are typically scheduled for manual intervention. No automated method exists at the present time to correct erroneous reported positions. The speed test uses a sliding time window of the last 25 reported positions from a given platform. The algorithm is applied to sequential locations both forward and backward in time: a newly received observation location may appear to be erroneous but in fact be correct and indicate an error in a past reported position. Forward and backward application of the speed test has been shown to be the best way to detect such position errors. The method takes into account differences in the expected movement of airborne versus surface ship platforms, and includes a test to determine whether a buoy is fixed or drifting based on the expected pattern of all-numeric buoy call signs. Care is taken to ensure that the speed test does not inadvertently reject new drifting buoy observations when drifting buoy call signs are reused. The practice of reusing buoy call signs often results in a very large apparent jump between the last reported position of the failed buoy and the position of the new buoy with the same call sign. This problem is minimized by not checking locations from buoys with the same call sign that have reporting times which differ by more than 120 hours. Observations are rejected if reaching the reported position would require more than twice the expected speed, as computed from the recent time history of platform locations. Buoy drift direction is not taken into account in the speed test.
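The core of the speed test can be sketched as follows; the haversine distance, the factor of two and the 120-hour cut-off follow the description above, while the function names and units are illustrative.

```python
# Illustrative version of the platform speed check (not the operational code):
# flag a new report if reaching it from the previous accepted position would
# require more than twice the platform's expected speed.
import math

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_check(prev, new, expected_speed_kmh, max_gap_h=120.0):
    """prev/new are (lat, lon, time_in_hours). Returns True if the report passes."""
    dt = new[2] - prev[2]
    if dt <= 0 or dt > max_gap_h:      # stale history or reused call sign: skip the test
        return True
    dist = great_circle_km(prev[0], prev[1], new[0], new[1])
    return dist / dt <= 2.0 * expected_speed_kmh

prev = (15.0, 68.0, 0.0)               # lat, lon, time (h)
new = (15.5, 68.0, 6.0)                # ~55 km in 6 h, about 9 km/h
print(speed_check(prev, new, expected_speed_kmh=20.0))    # True
```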
4.3.3 Valid Value Range Tests

Valid value range tests are applied to the observed variables as well as to observation locations and sampling times. Reported values of temperature are required to be greater than −2.5°C and less than 42°C. Reported values of salinity are required to be greater than or equal to 0 PSU and less than 42 PSU. Geographic dependencies can be built into the temperature and salinity valid value range tests to take into account unique oceanographic conditions in marginal seas, such as the Red Sea, Mediterranean Sea, Sulu Sea, and Black Sea. Observation latitudes must be between −90 and 90°, and observation longitudes must be between −180 and 180°. Current speed observations are required to be positive and less than a maximum value of 2 m s−1. Observation sampling times must contain valid year, month, day, hour, minute, and second date-time information, and the combined date-time group cannot formally lie in the future (i.e., the observation time cannot be later than the time the observation was received at the center).
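A compact sketch of the valid value range tests, using the limits quoted above; the dictionary layout and flag names are illustrative, not the operational format.

```python
from datetime import datetime, timezone

def gross_range_flags(obs, received_at=None):
    """obs: dict with optional keys 'temp' (degC), 'salt' (PSU), 'lat', 'lon',
    'speed' (m/s), 'time' (timezone-aware datetime). Returns the failed checks."""
    failed = []
    if "temp" in obs and not (-2.5 < obs["temp"] < 42.0):
        failed.append("temperature")
    if "salt" in obs and not (0.0 <= obs["salt"] < 42.0):
        failed.append("salinity")
    if "lat" in obs and not (-90.0 <= obs["lat"] <= 90.0):
        failed.append("latitude")
    if "lon" in obs and not (-180.0 <= obs["lon"] <= 180.0):
        failed.append("longitude")
    if "speed" in obs and not (0.0 <= obs["speed"] < 2.0):
        failed.append("current_speed")
    if "time" in obs:
        now = received_at or datetime.now(timezone.utc)
        if obs["time"] > now:           # observation time later than receipt time
            failed.append("sampling_time")
    return failed

print(gross_range_flags({"temp": 45.0, "lat": 12.0, "lon": 68.0}))   # ['temperature']
```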
4.3.4 Duplicates

Detection of duplicate reports in ocean observational data is a difficult and recurring problem. The same data message can be received multiple times over different networks at the operational centers. One powerful method of detecting duplicates is to use the cyclic redundancy check (CRC). The CRC checksum is calculated as a function of the message contents, processing each byte of a message file. Any change, no matter how small, will produce a different CRC number. Identical CRC numbers, therefore, indicate an exact duplicate message, and one of the messages can simply be deleted. The real problem with detecting duplicates, however, is the issue of near-duplicates: observations with the same location, time and data type that are otherwise different. It may be that one version has lower precision, fewer reporting levels, or a different reporting source. It is not possible to reject near-duplicates at the preliminary data check level of quality control processing; examination of the observed data values is usually needed to make an informed decision. Near-duplicates, therefore, are typically processed through the external data checks described in Sect. 4 as unique data reports. It is when updating observations in the quality-controlled database that the problem of near-duplicates needs to be resolved. It is not advisable to maintain near-duplicates in the database because of the possibility of data inconsistency. Accordingly, a decision has to be made on which of the near-duplicate observations to keep and which to toss. The decision should be based on objective measures of data quality: retain the observation that has more reporting levels, has sampled deeper, or has received better quality control scores. An additional determination of near-duplicates is done when observations are read into the analysis. In this case, multiple observations can be closely spaced (within a model grid cell) and must be thinned horizontally to ensure that the covariance matrix is not ill-conditioned. Decisions to retain or toss observations at this point in the assimilation can take into account additional information about the observation, such as data type and quality control outcomes from the external data checks described in the next section.
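The exact-duplicate part of the check can be sketched with a standard CRC-32, as described above; in practice a byte-for-byte comparison would confirm a match before deletion, since distinct messages can in principle share a checksum.

```python
# Exact-duplicate detection with a cyclic redundancy check: any byte-level
# difference produces a different CRC, so identical CRCs flag candidate duplicates.
import zlib

def drop_exact_duplicates(messages):
    """messages: iterable of raw byte strings; returns the unique ones in order."""
    seen = set()
    unique = []
    for msg in messages:
        crc = zlib.crc32(msg)
        if crc not in seen:
            seen.add(crc)
            unique.append(msg)
    return unique

reports = [b"SHIP 20091228 12.5N 68.0E SST=28.4",
           b"SHIP 20091228 12.5N 68.0E SST=28.4"]   # same message received twice
print(len(drop_exact_duplicates(reports)))           # 1
```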
4.4 External Data Checks

Effective quality control is a strong function of the amount of information available. A primary purpose of the quality control system is to gather validated information about a newly received observation in order to determine the consistency of the reported values with what is known about the observed variable. Knowledge of the uncertainty of the observation and of the collocated information is also needed to formulate and test hypotheses in the quality control decision-making process. This information is acquired and combined in a series of external data checks that are performed prior to the analysis. Many of these pre-analysis quality control procedures
are specific to an observing system and test for instrument failure or known biases in certain data types. Other pre-analysis quality control procedures are common to more than one data type. In this category, background field checks and cross validation analyses are particularly important. Pre-analysis procedures in common are described first (Sects.€4.1 and 4.2), followed by descriptions of procedures unique to specific data types (Sects.€4.3 through 4.8).
4.4.1 Background Field Check

Background fields used to quality control ocean data include climatology, short-term forecasts, and global or regional analyses. In all cases, appropriate background error variances must be used. Background and background error fields valid at the observation sampling time are interpolated to the observation location. An innovation is formed (observation minus background) and normalized by the error estimate of the background field. Assuming errors are normally distributed, the probability that the observation contains a random error is computed by

$$P(x \le X) = \left(\sigma\sqrt{2\pi}\right)^{-1} \int_{-\infty}^{X} e^{-\tfrac{1}{2}(x-\mu)^2/\sigma^2}\, dx \qquad (4.1)$$
where x is the observed value, μ is the background value, σ is the background error standard deviation, and P is the area to the left of X beneath the standardized normal probability curve. Histograms should be examined and formal statistical tests performed to show that the normalized background innovations are indeed normally distributed in order to use the probability of error values to accept or reject observations in the quality control decision-making algorithm (see Sect. 5). As an example, Fig. 4.2 shows frequency distributions of global and regional analysis and climate innovations for a 6-hour data cut of SST retrievals from two different satellites (AMSR-E and METOP-A). The shapes of the innovation histograms for the analysis backgrounds clearly resemble normal distributions, albeit with different variances. The climate background histograms, however, are skewed with a long positive tail. This feature is most notable in the METOP-A data, and likely indicates SST retrievals from diurnal warming events that are not represented in the climate fields.
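Equation (4.1) is simply the normal cumulative distribution function evaluated at the observation, given the background and its error estimate, so the probability can be computed with the error function; a minimal sketch (not the Navy implementation):

```python
import math

def innovation_probability(obs, background, bkg_error_std):
    """P(x <= obs) for a normal distribution with mean=background, std=bkg_error_std."""
    z = (obs - background) / bkg_error_std
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: an SST retrieval 2.5 standard deviations warmer than the background
p = innovation_probability(29.5, 28.0, 0.6)
print(round(p, 4))   # ~0.9938; values very close to 0 or 1 indicate a suspect observation
```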
4.4.2 Cross Validation

Cross validation compares observations against other nearby data. A variety of methods are used to make these comparisons. The most common approach is to perform an optimum interpolation (OI) analysis at the observation location and sampling time using nearby validated data, excluding the datum being checked. The innovations
Fig. 4.2 Geographic coverage charts and histograms of AMSR-E and METOP LAC retrieved SST minus global (red) and regional (green) analysis and climate (blue) backgrounds. The AMSR-E data cut processed 1,369,870 observations on 28 Dec 2009 at 18Z. The METOP LAC data cut processed 2,281,094 observations on 10 Sep 2009 at 01Z. Daytime retrievals are indicated as blue and nighttime retrievals as green points in the geographic coverage charts. The histograms are formed using 0.25°C temperature difference bins.
for the cross validation are computed from an ocean climatology. It is important to ensure that cross validation checks are data driven and independent of any analysis or forecast model backgrounds. The uncertainty of the analyzed value is computed from the OI analysis error reduction of climate variability. The cross validation analyzed value and its uncertainty are then used as the background and background error values in the background field check described in Sect.€4.1. In the absence of any nearby valid data, the cross validation procedure simply returns climate and climate variability as the analysis and error estimates, and the cross validation check is identical to the background check using climatology. Thus, cross validation is analogous to checking observations against a dynamic, time-dependent climatology. The background error covariances used in the cross validation procedure can be very simple, such as only including data within some specified distance from the observation being checked, or more complicated, based on the multivariate covariances used in the assimilation procedure itself. Cross validation can be applied to all observation data pairs in the quality control or it can be preceded by other data checks which first detect suspect observations. The cross validation is then performed only on the suspect observations to save on computational time. In data sparse areas the cross validation check will have limited effect. However,
the continuing development of the Argo profiling float array generally provides an adequate number of nearby data to allow the cross validation of profile observations to work well in practice. Cross validation is also useful in the quality control of altimeter SSH and SWH observations, since individually those data tend to be rejected along sequential segments of altimeter tracks due to phase errors in the model background fields.
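A heavily simplified sketch of the cross-validation idea: estimate the checked datum's value by optimum interpolation of nearby validated anomalies relative to climatology, excluding the datum itself, and return the estimate and its error for use in the background field check. The Gaussian covariance, length scale and error values are illustrative assumptions, not those of any operational system.

```python
import numpy as np

def oi_cross_validate(x0, neighbors, clim0, clim_var, length_scale=300.0, obs_err_var=0.2):
    """x0: (lat, lon) of the checked datum. neighbors: list of ((lat, lon), anomaly)
    for nearby validated data. Returns (estimated value, estimated error variance)."""
    if not neighbors:
        return clim0, clim_var                              # no data: fall back to climatology
    pts = np.array([p for p, _ in neighbors], dtype=float)
    d = np.hypot(*(pts - np.array(x0)).T) * 111.0           # rough degrees -> km
    # climate-variance covariances between x0 and neighbors, and among neighbors
    c0 = clim_var * np.exp(-(d / length_scale) ** 2)
    dij = np.hypot(pts[:, None, 0] - pts[None, :, 0],
                   pts[:, None, 1] - pts[None, :, 1]) * 111.0
    C = clim_var * np.exp(-(dij / length_scale) ** 2) + obs_err_var * np.eye(len(neighbors))
    w = np.linalg.solve(C, c0)                              # OI weights
    anomalies = np.array([a for _, a in neighbors])
    estimate = clim0 + w @ anomalies
    error_var = max(clim_var - w @ c0, 0.0)                 # analysis error variance
    return estimate, error_var

# Usage: the returned pair plays the role of background and background error
# in the check of Sect. 4.1 for the datum at x0.
est, var = oi_cross_validate((0.0, 80.0), [((0.5, 80.2), 0.4), ((-0.4, 79.8), 0.3)], 28.0, 1.0)
print(round(est, 2), round(var, 3))
```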
4.4.3 Ship and Buoy Sea Surface Temperature

Volunteer observing ship (VOS) temperatures have very different error characteristics depending on the measurement method. Hull contact sensor measurements of temperature appear to be the most accurate, followed by engine room intake and buckets. However, all ship-based measurement systems are prone to error since the on-board instruments are rarely calibrated. In general, ship-based SST measurements are noisy: observations from engine-room-intake instruments tend to be warm biased, while bucket measurements are biased toward cooler temperatures. In addition, there appears to be some geographic dependence in ship SST errors, with errors higher in the Pacific than in the Atlantic. Drifting buoy measurements of SST are very important since the buoys are globally distributed and have a relatively long life. In general, drifting buoy SST measurements are of high accuracy and high quality. Occasionally, spurious drifter locations are received, but these are usually detected in the location/speed check. In general, buoy SST measurements are quality controlled by the background field checks using climatology or analysis fields. Since drifting buoys and ships are identified by unique call signs, the time history of individual instruments can be monitored for indications of drift and calibration errors. Drifters are deployed with holey-sock drogues to a depth of ~15 m. Monitoring surface drifters for drogue loss is important, because loss of a drogue changes the sampling characteristics of the drifter.
4.4.4 Satellite Sea Surface Temperature

Infrared and microwave satellite SST retrievals measure very different properties of the sea surface, requiring unique quality control procedures. In the sections to follow, residual cloud and aerosol contamination quality control tests are applied to infrared SST retrievals. Diurnal warming detection is performed for daytime retrievals from both infrared and microwave satellites.

4.4.4.1 Residual Cloud Contamination

Infrared SST measurements are derived from radiometric observations at wavelengths of ~3.7 µm and 11–12 µm. Though the 3.7 µm channel is more sensitive to
SST, it is primarily used only for night-time measurements because of the relatively strong reflection of solar irradiation in this wavelength region during daytime, which contaminates the retrieved radiation. The infrared wavelength bands are sensitive to the presence of clouds and atmospheric water vapor. For this reason, thermal infrared measurements of SST first require atmospheric correction of the retrieved signal and can only be made for cloud-free pixels. However, cloud clearing algorithms are far from perfect and atmospheric water vapor variations are significant. In addition, satellite zenith angle plays a role in determining SST errors, since the atmospheric path length over which the radiation is observed is longer at higher zenith angles. Residual cloud contamination errors are manifested as cold biases. Detection of these errors is performed by the background field check (Sect. 4.1).

4.4.4.2 Aerosol Contamination

Satellite sea surface temperature retrievals from infrared radiometers are known to be prone to bias when significant amounts of aerosol are present in the atmosphere. Retrievals are degraded by the presence of tropospheric aerosols, as are cloud detection tests that depend upon accurate visible and infrared channel measurements. In particular, desert dust particles are large enough to attenuate and contribute to the infrared signal emitted from the ocean surface before it reaches the satellite sensor in space. Saharan dust events are common in the eastern tropical Atlantic and Mediterranean Sea. Saharan dust is lifted by convection over hot desert areas, and can reach very high altitudes; from there it can be transported over the ocean by winds, covering distances of thousands of kilometers. The dust combined with the hot dry air of the Sahara Desert has significant effects on tropical weather, especially as it interferes with the development of hurricanes. In Eastern Asia, mineral dust events originate in springtime in the Gobi Desert (Southern Mongolia and Northern China). The aerosols are carried eastward by prevailing winds, and pass over China, Korea, and Japan, sometimes as far as the western United States. Thus, the impact of atmospheric aerosols on infrared SST retrievals is a cold bias that is a global problem. The current limiting factor for dealing with aerosol contamination in satellite SST retrievals is accurate knowledge of the characteristics and amount of the aerosol at the coincident time and location of the satellite SST retrievals. This information is available in the daytime for the anti-solar side of the scan in the visible channels of the instrument (~25% of the data), but there is no information at night or on the solar side of the scan due to the effects of sun glint (~75% of the data). However, aerosol transport models can be used to provide the necessary information. In particular, the Navy Aerosol Analysis Prediction System (NAAPS) provides 3-hourly aerosol optical depth (AOD) forecasts for four aerosol sources (dust, smoke, sulphate, and sea spray) at 19 different wavelengths; 14 of the NAAPS aerosol optical depth wavelengths match the channels used in satellite SST retrieval algorithms. The wavelength dependent, global NAAPS optical depth products are used in a canonical variate analysis to detect aerosol contamination. Canonical variate analy-
sis finds the linear combination of observed variables that maximize the ratio of between-group to within-group variation. There are five groups of infrared satellite SST retrievals: four groups are defined with varying levels of aerosol contamination and one group is free from aerosol contamination. The canonical variates are then used to discriminate between the groups. Separate canonical variate functions have been computed for day versus night retrievals and for different geographic areas. Let B be the between group covariance matrix and W the within group covariance matrix. Linear variate functions (λ) are found to maximize,
$$\nu = \lambda' B \lambda \,/\, \lambda' W \lambda \qquad (4.2)$$

which represents the ratio of between-group to within-group variance. Differentiating Eq. (4.2) and setting the result to zero gives

$$(B - \nu W)\lambda = 0. \qquad (4.3)$$
The eigenvalues of $W^{-1}B$ and the corresponding eigenvectors (λ) are the canonical variate functions used for discrimination. The observed channel brightness temperatures and NAAPS wavelength-dependent AOD components are projected onto the canonical variates and the Euclidean distances to the projected group means are determined. The SST retrieval is classified as contaminated if the distance to a contaminated group mean is closer than the distance to the non-contaminated group mean, according to

$$\kappa = \min_{j} \sum_{i=1}^{r} \left[ \lambda_i'(x - \mu_j) \right]^2 \qquad (4.4)$$

where x is the vector of AOD wavelength components and AVHRR channel brightness temperatures for a given SST retrieval, r is the number of canonical variate functions, μj is the group mean vector of observed values for the contaminated and non-contaminated groups, and κ is the group classification code. The group assignment probability is computed assuming that group distances are chi-square distributed with r − 1 degrees of freedom. Satellite SST retrievals assigned to a contaminated aerosol group can either be flagged for rejection or corrected using radiative transfer modeling that takes into account the height distribution of the aerosol plume in NAAPS and the vertical distribution of temperature from an atmospheric forecast model (Merchant et al. 2006).
where x is the vector of AOD wavelength components and AVHRR channel brightness temperatures for a given SST retrieval, r is the number of canonical variate functions, μj is the group mean vector of observed values for the contaminated and non-contaminated groups, and κ is the group classification code. The group assignment probability is computed assuming that group distances are chi-square distributed with r−1 degrees of freedom. Satellite SST retrievals assigned to a contaminated aerosol group can either be flagged for rejection or corrected using radiative transfer modeling that takes into account the height distribution of the aerosol plume in NAAPS and the vertical distribution of temperature from an atmospheric forecast model (Merchant et al. 2006).

4.4.4.3 Diurnal Warming

Surface diurnal warming events are common in the world oceans. The warming events produce near-surface thermal gradients that create daytime near-surface or warm-layer temperatures 2–4°C warmer than nighttime (Donlon et al. 2002). Although not strictly a measurement error, combining SST measurements with different observation times in a daily analysis requires consideration of diurnal warming events. Knowledge of diurnal warming events, in turn, requires information on
the local time history of the wind speed and surface solar radiation at the time of the SST observation. However, often only instantaneous measures of surface wind speed and solar radiation fields from NWP systems are collocated with satellite SST retrievals. Nevertheless, detection of diurnal warming and potential skin-layer effects in satellite SST retrievals is still possible given the presence of low winds, high solar insolation, and a positive, statistically significant change in SST from a background field valid within ~6 hours of the observation time.

4.4.4.4 Microwave SST

Due to the lower signal strength of the radiation curve in the microwave region, accuracy and resolution are poorer for SST derived from passive microwave measurements than for SST derived from thermal infrared measurements. However, the advantage gained with passive microwave is that radiation at these longer wavelengths is largely unaffected by clouds and generally easier to correct for atmospheric effects. Phenomena which do affect the passive microwave signal return, however, are wind-generated roughness at the ocean surface and precipitation. These effects can usually be corrected for using multiple frequencies. SST measurements are primarily made at a channel near 7 GHz with a water vapor correction enabled by observations at 21 GHz. Other frequencies used for correction of surface roughness (including foam), precipitation, and what little effect clouds do have on microwave radiation include information in the 11, 18, and 37 GHz channels. Nevertheless, rain contamination continues to be a problem at the edge of rain cells, where there is often undetected rain that causes biased SST retrievals. Land contamination is also an issue with microwave measurements. Within 50–100 km of land, microwave measurements are affected by emissions from land, resulting in a warm bias in coastal microwave SST. For this reason, microwave SST observations are typically not produced within 100 km of land.
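To make the aerosol discrimination of Sect. 4.4.4.2 concrete, the sketch below implements only the nearest-group-mean classification of Eq. (4.4). It is not the operational code: the canonical variate functions and group means are random placeholders standing in for coefficients that would be estimated offline from training retrievals, and all names are illustrative.

import numpy as np

def classify_aerosol(x, lam, group_means):
    """Assign an SST retrieval to the nearest canonical-variate group mean.

    x           : vector of NAAPS AOD components and channel brightness temperatures
    lam         : (r, n) array of canonical variate functions (one per row)
    group_means : (g, n) array of group mean vectors (contaminated + clean groups)

    Returns the index of the closest group (Eq. 4.4) and the squared distances.
    """
    # Project the observation-minus-mean vectors onto the canonical variates
    # and accumulate squared Euclidean distance in canonical-variate space.
    d2 = np.array([np.sum((lam @ (x - mu)) ** 2) for mu in group_means])
    return int(np.argmin(d2)), d2

# Illustrative use with random placeholders (not real coefficients):
rng = np.random.default_rng(0)
lam = rng.normal(size=(3, 18))      # r = 3 canonical variates, 18 predictors
means = rng.normal(size=(5, 18))    # 4 contaminated groups + 1 clean group
kappa, d2 = classify_aerosol(rng.normal(size=18), lam, means)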
4.4.5 Sea Ice Concentration

A problem with sea ice concentration retrievals from the SSM/I and SSMIS sensors on board the DMSP series of satellites is false indication of sea ice over the open ocean and at the ice edge. These spurious sea ice concentrations result from the presence of atmospheric water vapor, non-precipitating cloud liquid water, rain, and sea surface roughening by surface winds. While these effects are relatively minor at polar latitudes in winter, they result in serious weather contamination problems at all latitudes in summer. The various sea ice retrieval algorithms used operationally attempt to eliminate these false positive sea ice concentrations, but with limited success. Accordingly, prior to assimilation, sea ice concentrations need to be quality controlled using a weather filter. A good proxy for a weather filter is SST. If the sea ice concentration is greater than zero and the collocated SST exceeds 4°C, then it is likely the sea ice retrieval is contaminated by a weather event and should
be rejected. The 4°C SST threshold is considered to be very conservative. Tests with a 1°C SST threshold resulted in spurious rejections of sea ice retrievals in the East Greenland Current during periods of rapid ice growth, when the analyzed SST fields are not accurate due to a lack of SST observations. A cross validation check will not work as a weather filter, since weather contamination is typically large scale, affecting many nearby sea ice retrievals simultaneously. Sea ice retrieval algorithms also return false positive ice conditions near land due to land contamination of the microwave signal. This bias is most evident during the summer ice melt season in the northern hemisphere when the Arctic land boundaries become ice free. A high-resolution distance-from-land database is used to check if the retrieval is within 100 km of land. The test uses retrieval distance from land, background field anomalies, collocated SST, and a cross validation check of nearby locations to determine if positive sea ice observations near land are valid. Land contaminated sea ice retrievals are typically rejected at this point.
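A minimal sketch of the SST-based weather filter described above is given below; the 4°C threshold follows the text, while the array-based interface is simply an assumption for illustration.

import numpy as np

def weather_filter(ice_conc, sst_analysis, sst_threshold=4.0):
    """Flag sea ice retrievals as weather-contaminated when the collocated
    analyzed SST exceeds the threshold (4 degC is the conservative value used here)."""
    ice_conc = np.asarray(ice_conc, dtype=float)
    sst_analysis = np.asarray(sst_analysis, dtype=float)
    return (ice_conc > 0.0) & (sst_analysis > sst_threshold)

reject = weather_filter([0.15, 0.0, 0.4], [6.2, 8.0, -1.5])
# -> [True, False, False]: only the first retrieval is rejected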
4.4.6 Temperature and Salinity Profiles

Profile observations are first checked for duplicate depths and strictly increasing depths. Reported levels that fail these tests are flagged and not used in the following profile quality control procedures.

4.4.6.1 Instrumentation Error Checks

Special instrument-specific error tests are applied to profile observations to identify errors that have unique profile signatures. These errors include temperature inversions at the bottom of the profile, spikes in the temperature profile, and positive temperature gradients (warm bulge) in the mixed layer. The instrumentation error checks are applied iteratively until all errors are found, since a profile may have one or more of these types of errors. Reported temperature-depth levels that contain instrumentation errors are flagged and not used in the next iteration of the instrumentation error checks. One difficulty with the current suite of profile instrumentation error checks is that the tests are designed to detect errors specific to expendable bathythermographs (XBT) (Bailey et al. 1994). Other profile data types, such as Argo floats, gliders and CTD probes, are likely to have failure modes that are different from those of an XBT. Automated quality control tests need to be developed to detect instrumentation errors in these data types as more experience is gained with their assimilation.

4.4.6.2 Static Stability

A static stability test is performed to detect density inversions in profile observations. The reported in situ temperature and depth data pairs are first converted to
potential temperature and pressure, and then potential density is computed at each pressure level using observed or derived salinity values. Salinity observations are generated for profiles that report only temperature. Salinity is computed from observed temperature values using bi-monthly climatological temperature-to-salinity regression models that have been computed on a global 0.25° resolution grid. The potential density profile is examined for inversions (higher density shallower than lower density), and observed temperature and salinity profile levels with inversions that exceed a minimum specified inversion threshold of 0.025 kg m⁻³ are flagged. For profiles with derived salinities, static instabilities are corrected by iteratively adjusting the derived salinity until the resulting profile is neutrally buoyant. Salinity is removed from the top of the permanent thermocline upward and added from that depth downward in the adjustment. The salinity correction algorithm is not applied to density inversions for profiles that observe both temperature and salinity levels, since it is difficult to determine a priori whether the cause of the density inversion is the reported temperature or the reported salinity value. In this case profile levels with density inversions are simply flagged.
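The inversion test itself reduces to a simple comparison of adjacent potential density levels. The sketch below assumes potential density has already been computed (in practice from potential temperature and observed or derived salinity with a seawater equation of state); flagging both levels of an inverted pair is an illustrative choice.

import numpy as np

def flag_density_inversions(sigma, threshold=0.025):
    """Flag profile levels involved in density inversions.

    sigma     : potential density (kg m-3) ordered from shallow to deep
    threshold : minimum inversion magnitude to flag (kg m-3)

    A pair of levels is inverted when density decreases with depth by more
    than the threshold, i.e. sigma[k] - sigma[k+1] > threshold.
    """
    sigma = np.asarray(sigma, dtype=float)
    flags = np.zeros(sigma.size, dtype=bool)
    inv = (sigma[:-1] - sigma[1:]) > threshold   # True where density decreases downward
    flags[:-1] |= inv        # flag the upper level of each inverted pair
    flags[1:] |= inv         # and the lower level
    return flags

flags = flag_density_inversions([25.10, 25.20, 25.15, 25.40])
# pair (25.20, 25.15): decrease of 0.05 > 0.025 -> the 2nd and 3rd levels are flagged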
4.4.6.3 Vertical Gradient Checks

A global climatology of vertical mean temperature differences and standard deviations about these means has been computed from the historical profile archive. The climatology is used to test observed vertical temperature gradients for outliers. First, the climate temperature differences and variability are interpolated to the observation location and sampling time. Second, the vertical temperature differences are converted to vertical temperature gradients and interpolated to the observed profile levels. Observed vertical temperature gradients are computed, and the difference between the observed and the expected mean vertical gradient from the climatology is standardized by the expected gradient variability,
z = (To·m⁻¹ − Tc·m⁻¹)/σ    (4.5)
where To·m⁻¹ is the observed vertical gradient, Tc·m⁻¹ is the climate mean vertical gradient, σ is the variability about that mean, and z is the standardized vertical gradient variate. If the observed profile gradient exceeds 0.2°C m⁻¹ and |z| > 4, then the profile level is flagged. Experience has shown that the vertical gradient test based on climate statistics tends to spuriously flag as erroneous profile levels associated with a strong thermocline. This problem is particularly acute in the tropics. Truly erroneous profile vertical gradients are often associated with bad temperature or salinity observations, which are detected in the spike test and background field checks described previously. Hence, at the present time, flags set by the vertical gradient check are used for informational purposes only (Sect. 5).
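A sketch of the standardized vertical gradient test of Eq. (4.5) is given below. It assumes the climatological mean gradients and their variability have already been interpolated to the observed layers; as in the text, the returned flags would be informational only.

import numpy as np

def vertical_gradient_flags(t_obs, z, grad_clim, sigma_clim,
                            grad_limit=0.2, z_limit=4.0):
    """Standardize observed vertical temperature gradients against climatology
    (Eq. 4.5) and return informational flags, one per layer.

    t_obs      : observed temperatures (degC), shallow to deep
    z          : observation depths (m), strictly increasing
    grad_clim  : climatological mean gradient (degC m-1) for each layer
    sigma_clim : variability of the climatological gradient for each layer
    """
    t_obs, z = np.asarray(t_obs, float), np.asarray(z, float)
    grad_obs = np.diff(t_obs) / np.diff(z)                 # observed gradient per layer
    zscore = (grad_obs - np.asarray(grad_clim)) / np.asarray(sigma_clim)
    return (np.abs(grad_obs) > grad_limit) & (np.abs(zscore) > z_limit)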
4.4.6.4 Profile Shape Comparisons

Observed profiles are compared to profiles extracted from the various background fields using a profile shape quality control procedure. This procedure has the advantage of taking an overview of the entire profile. Profile levels that have previously been determined to be unreliable by the other profile quality control checks are excluded from the profile shape quality control procedure. The shape quality control procedure computes an integrated observed-minus-predicted statistic that takes into account level thicknesses. The test statistic is calculated as

η = Σk [((Ok − Pk)/σk) · (zk+1 − zk−1)] / Σk (zk+1 − zk−1)    (4.6)

where Ok is the observed value at level k, Pk is the prediction (background) value at level k, σk is the prediction error standard deviation at level k, and zk is the depth of level k. The probability of η being greater than zero is computed assuming a normal probability distribution function. The shape comparison statistic is analogous to a goodness-of-fit test of two cumulative distribution functions. It identifies observed profiles with large errors relative to the background profiles. Profiles that have large temperature or salinity differences only over narrow depth ranges, such as dissimilar mixed layer depths, will still be considered similar. Observed profile shape must be consistent with the forecast and climate background profiles in order for the profile to be accepted into the analysis.

4.4.6.5 Gliders

Ocean gliders are autonomous platforms that fly in a saw-tooth sampling pattern in the upper ocean by changing their buoyancy. Depending upon configuration, gliders sample profiles of pressure, temperature, and conductivity. The gliders surface at regular intervals to transmit their observations to shore- or satellite-based receivers. Gliders provide both downward and upward profiles of temperature and salinity, with glider position and time varying with depth during the dive. Quality control of glider data is similar to that of single profile data, apart from relaxation of the strictly increasing depths check. However, several glider-specific tests are performed that are, in most cases, functions of the vertical velocity of the glider. These tests are applied to gliders with a non-pumped CTD, where flow through the conductivity cell depends upon the speed of the glider, making the thermal inertia correction speed-dependent.
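The thickness-weighted shape statistic of Eq. (4.6) in Sect. 4.4.6.4 can be sketched as follows. The treatment of the end levels and the normalization by total thickness are assumptions made for illustration; only levels that passed the earlier checks would be supplied.

import numpy as np

def shape_statistic(obs, bkg, sigma, z):
    """Thickness-weighted profile shape statistic of Eq. (4.6).

    obs, bkg, sigma, z : observed values, background values, background error
                         standard deviations and depths at the accepted levels.
    Interior levels are weighted by the layer thickness z[k+1] - z[k-1];
    the weighted misfits are normalized by the total thickness.
    """
    obs, bkg, sigma, z = (np.asarray(a, float) for a in (obs, bkg, sigma, z))
    w = z[2:] - z[:-2]                              # thickness weights, interior levels
    misfit = (obs[1:-1] - bkg[1:-1]) / sigma[1:-1]  # normalized misfit, interior levels
    return np.sum(misfit * w) / np.sum(w)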
4.4.7 Altimeter Sea Surface Height

The along-track altimeter data undergo an extensive series of pre-processing steps to prepare the data for use in the assimilation. The measured sea surface height
(SSH) is corrected for geophysical effects (wet and dry troposphere, ionosphere, inverted barometer, and winds), and the tidal signal is removed. The corrected SSH from each satellite altimeter mission is then intercalibrated with a global crossover adjustment using Topex/Poseidon data as the reference. Next, the data are resampled every 7 km (1-s intervals) along the tracks. A mean SSH is removed from the individual SSH measurements, producing sea surface height anomalies (SSHA). The mean SSH contains both the unknown geoid signal and the mean dynamic topography over the averaging period. For most satellite missions a mean SSH calculated over a 7-year period is used, although the averaging period continues to be extended in time as the altimeter satellite missions continue. These altimeter pre-processing steps are typically performed by the data provider. Altimeter SSHA observations are of lower accuracy or are not interpretable near the coasts due to inaccurate tidal corrections and incorrect removal of atmospheric wind and pressure effects at the sea surface in shallow water. The coastal region for altimeter data assimilation is often defined as everywhere shallower than 400 m depth. Altimeter observations also have significant along-track correlated errors that must be taken into account in the assimilation. The along-track altimeter SSHA data are very noisy at the full 7 km resolution. Accordingly, altimeter SSHA data are smoothed along-track using a median or Lanczos filter to reduce the measurement noise. In addition, the altimeter data are often sub-sampled or bin-averaged to remove redundant observations. Finally, altimeter SSHA measurements are scaled by a hyperbolic tangent operator using local dynamic height variability limits that have been computed from the historical profile archive. This operation attempts to remove spurious altimeter SSHA outliers and maintain the data within the range of known baroclinic variability limits. A final issue with altimeter SSHA measurements is the fact that different versions of the data are reported in both near real-time and in delayed mode. Real-time SSHA observations are computed using less precise, predicted orbits rather than the more precise, observed orbits, which are not available for several days after real-time. Although less precise, real-time SSHA observations still have significant value in the analysis. However, when the more precise delayed-mode SSHA observations are available, the corresponding real-time SSHA data should be identified and replaced by the delayed-mode data. This procedure ensures that the higher quality, delayed-mode SSH observations are incorporated into the altimeter SSHA data archive for use in hindcast studies. Satellite altimeter SSHA observations are a critical data source in GODAE assimilation systems and timely access to the most complete, highest quality data is essential.
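The along-track smoothing and variability-limit scaling described above might look like the following sketch. The text does not give the exact form of the hyperbolic tangent operator, so the scaling below (bounding SSHA by the local dynamic height variability limit) is one plausible interpretation, not the operational formulation.

import numpy as np
from scipy.signal import medfilt

def prepare_ssha(ssha, limit, kernel=5):
    """Median-filter along-track SSHA and bound it with a hyperbolic tangent.

    ssha   : along-track SSHA samples at ~7 km spacing (m)
    limit  : local dynamic height variability limit (m) from the profile archive
    kernel : odd window length for the median filter
    """
    smoothed = medfilt(np.asarray(ssha, float), kernel_size=kernel)
    # The tanh operator keeps values within the known baroclinic variability range.
    return limit * np.tanh(smoothed / limit)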
4.4.8 Altimeter Significant Wave Height

Comparisons with buoy data show that altimeter SWH estimates are in agreement with the in situ data, with standard deviations of differences on the order of 0.30 m, but the satellite data tend to slightly overestimate low SWH and slightly underestimate high SWH. The altimeter SWH data thus need to be bias corrected before
being used in the assimilation. These bias corrections are generally linear and are derived from altimeter/buoy matchups that correspond to corrections of only a few percent of SWH. Altimeter SWH data can also be contaminated by sea ice or land. Elimination of these data requires a contemporaneous sea ice concentration field, either from a forecast model or an analysis of SSM/I and SSMIS sea ice retrievals. The land mask needs to resolve the along-track 7-km footprint of the altimeter SWH data.
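Since the SWH bias corrections are linear and derived from altimeter/buoy matchups, a least-squares fit is enough to illustrate the idea; the matchup values below are invented for the example.

import numpy as np

def fit_swh_bias(alt_swh, buoy_swh):
    """Fit a linear correction SWH_corrected = a * SWH_altimeter + b from
    altimeter/buoy matchups (ordinary least squares)."""
    a, b = np.polyfit(np.asarray(alt_swh, float), np.asarray(buoy_swh, float), 1)
    return a, b

# Example: the altimeter slightly overestimates low SWH and underestimates high SWH,
# so the fitted slope comes out a little greater than one.
a, b = fit_swh_bias([0.8, 1.5, 3.0, 5.5], [0.7, 1.45, 3.05, 5.7])
corrected = a * 2.0 + b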
4.5 Quality Control Decision-Making Algorithms

The quality control outcomes of the various external data checks described previously are combined in a decision-making algorithm. The outcome of the decision-making algorithm is the overall indication of observation quality, which is used to select data for the assimilation. The decision-making algorithm is applied to each observed reporting level and, in the case of profile (and glider) observations, to the entire profile in the shape comparison test. Thus, for profile observations there are two indicators of data quality: one indicator for the overall profile shape, and a second indicator for each profile level. It is important to take into consideration results from all of the external data checks before the final quality decision is made. For example, an observation could fail the climate background check while at the same time pass the forecast background check. The observation would be rejected if the climate test was applied first in a serial fashion. Quality control decision-making algorithms, therefore, are necessarily complex and must combine outcomes from the different external tests appropriately. It helps if the external test outcomes are of the same form, such as probabilities or standard normal deviates. A quality control decision-making algorithm in use at the U.S. Navy oceanographic centers is described here. The quality control outcomes from the various external data checks are in the form of probabilities of error. The majority of these probabilities are calculated according to Eq. (4.1), assuming a normal probability density function, but probabilities are also calculated using chi-square distribution functions (i.e., the aerosol contamination test). Given a set of error probabilities, the decision-making algorithm is summarized as follows:
Pb = min(Pg, Pr)        Pd = min(Pc, Px)
Pb < τf :  Po = Pb
Pb > τf :  Po = min(Pb, Pd)    (4.7)
where Pb is the composite background error probability, Pd is the composite data-derived error probability, Pg and Pr are the global and regional forecast background error probabilities, Pc and Px are the climate and cross validation error probabilities, τf is the forecast error threshold probability, and Po is the overall probability that the observation contains a random error. The forecast error probability threshold for the system is typically set to 0.99 (3 standard deviations). The algorithm first determines if the observation is consistent with the model background fields by taking the minimum error probability of the global and regional forecasts.
If the minimum background error probability is less than the prescribed forecast error tolerance limit, then the algorithm returns it as the overall probability of error for the observation. However, if the minimum model background error probability exceeds the forecast error threshold, then it is compared against the data-derived error, defined as the minimum of the cross validation and climatology error probabilities. The overall observation error probability is returned as the minimum of the composite background and composite data-derived errors. In this way, cross validation and climate backgrounds determine data quality only if the observation is not consistent with the forecast. Experience has shown that requiring observations to always be consistent with climate backgrounds results in spurious rejection of valid observations during extreme events. Once the overall probability of error for an observation has been determined, outputs from the various specific observing system quality control tests are simply added to the error probability using unique integer-valued flags. The quality control flags have three levels of severity: (1) information-only (<100); (2) cautionary (≥100); and (3) fatal (≥1,000). Observations with fatal errors are not used in the analysis. Information-only flagged observations are routinely used in the analysis, but the use of cautionary flagged observations is under user control via analysis namelist options. The ultimate decision to accept an observation into the analysis, however, is always based on the underlying error probability value obtained from the decision-making algorithm. If quality control flags have been appended, the underlying probability of error can always be recovered from the summation using some simple modular arithmetic.
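A compact sketch of the decision-making logic of Eq. (4.7), together with the flag severity levels, is given below. The handling of cautionary flags is user-configurable in the operational system; here it is simplified to rejecting only fatal flags.

def overall_error_probability(p_global, p_regional, p_climate, p_crossval, tau_f=0.99):
    """Combine external-check error probabilities following Eq. (4.7)."""
    p_b = min(p_global, p_regional)      # composite background error probability
    p_d = min(p_climate, p_crossval)     # composite data-derived error probability
    return p_b if p_b < tau_f else min(p_b, p_d)

def accept(p_overall, flag=0):
    """Accept an observation unless the error probability or a fatal flag says otherwise.
    Flags: <100 information-only, >=100 cautionary, >=1000 fatal."""
    return flag < 1000 and p_overall < 0.99

# An observation inconsistent with the forecast but consistent with climatology:
p = overall_error_probability(0.995, 0.999, 0.40, 0.55)   # -> 0.40, accepted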
4.5.1 Quality Control System Performance

Output from the U.S. Navy's fully automated real-time ocean data quality control system is summarized for satellite SST retrievals, sea ice concentration retrievals, altimeter sea surface height and significant wave height retrievals, and in situ observations at the surface and at depth from various sources. Quality control output for the satellite data is given for two monthly time periods during 2009 (June and December) to allow for examination of possible effects of seasonality, while output from quality control of the in situ data is shown for the entire 2009 year. The overall quality of the observations is summarized using an error probability frequency of occurrence in percent. The error probabilities are the outcomes of the quality control decision-making algorithm for single level observations and the overall probability of error for profile observations. Assuming a normal probability distribution function, the frequency of occurrence bins correspond to one standard deviation (p ≤ 0.67), two standard deviation (p ≤ 0.95), and three standard deviation (p ≤ 0.99) departures from a zero mean. Probability frequencies indicated as p ≤ 1.0 include probabilities greater than 0.99 plus observations flagged as being suspect by one or more of the specific external data checks described previously. Observations with error probabilities less than 0.99 are typically accepted into the analysis.
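The correspondence between the probability bins and standard-deviation departures can be illustrated with a short sketch; Eq. (4.1) itself is defined earlier in the chapter and is not reproduced here, so the two-sided normal probability below is only a stand-in.

import math

def error_probability(z):
    """Two-sided probability that a standard-normal departure is smaller than |z|.
    z = 1, 2, 3 give roughly 0.68, 0.95, 0.997, matching the one/two/three
    standard-deviation bins used in the summary tables."""
    return math.erf(abs(z) / math.sqrt(2.0))

def bin_probability(p):
    """Assign a probability to the p <= 0.67 / 0.95 / 0.99 / 1.0 summary bins."""
    for edge in (0.67, 0.95, 0.99, 1.0):
        if p <= edge:
            return edge
    return 1.0

bin_probability(error_probability(1.5))   # -> 0.95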
In general, QC outcomes of the satellite SST retrievals indicate that the data are of good quality (Table 4.1). The frequencies of error probabilities within one standard deviation of the background field consistently include 90% or more of the data for all satellite systems. Allowing for two background error standard deviations results in more than ~99% of the observations being included. There is some evidence of seasonality in the number of retrievals detected as coming from diurnal warming and aerosol contamination events for AATSR, GOES, METOP and MSG data.

Table 4.1 Real-time QC outcomes for satellite SST retrievals
Satellite     Month (2009)  Type   Count ×10^6  Diurnal  Aerosol^1  p≤0.67  p≤0.95  p≤0.99  p≤1.0
AMSR-E^2      Jun           –      87.82        –        –          96.2    3.7     0.1     0.1
              Dec           Day    47.68        23,427   –          94.5    5.3     0.2     0.1
              Dec           Night  55.59        –        –          95.5    4.3     0.1     0.0
AATSR^3       Jun           Day    220.35       364,910  30,656     93.0    6.3     0.5     0.3
              Jun           Night  330.58       –        195,971    91.2    8.4     0.4     0.1
              Dec           Day    230.32       161,863  8,391      95.0    4.7     0.2     0.1
              Dec           Night  317.16       –        42,313     91.9    7.6     0.4     0.0
GOES-11       Jun           Day    26.93        258      12         89.8    10.1    0.1     0.0
              Jun           Night  70.84        –        4          95.2    4.7     0.1     0.0
              Dec           Day    37.67        –        –          97.6    2.3     0.0     0.0
              Dec           Night  88.80        –        –          95.8    4.1     0.1     0.0
GOES-12       Jun           Day    19.06        1,043    7,083      96.7    3.2     0.1     0.0
              Jun           Night  53.33        –        435,078    93.3    5.7     0.2     0.8
              Dec           Day    27.44        1,014    49         95.4    4.6     0.0     0.0
              Dec           Night  66.30        –        12,519     93.1    6.7     0.2     0.0
METOP GAC     Jun           Day    5.46         938      2,541      97.6    2.3     0.1     0.1
              Jun           Night  5.63         –        5,462      94.7    5.0     0.2     0.1
              Dec           Day    6.09         862      35         97.5    2.4     0.1     0.0
              Dec           Night  5.89         –        144        95.4    4.4     0.2     0.0
METOP LAC^4   Jun           Day    106.52       28,165   86,935     96.2    3.5     0.1     0.1
              Jun           Night  119.47       –        44,456     95.5    4.4     0.2     0.0
              Dec           Day    216.67       20,350   3,312      97.4    2.5     0.1     0.0
              Dec           Night  234.74       –        9,060      94.5    5.3     0.2     0.0
MSG^5         Jun           Day    14.47        2,995    10,202     94.8    4.5     0.4     0.3
              Jun           Night  73.28        –        13,343     94.8    4.8     0.2     0.2
              Dec           Day    12.23        25,999   759        95.3    4.2     0.2     0.3
              Dec           Night  11.55        –        3,082      94.9    4.9     0.2     0.0
NOAA-18       Jun           Day    4.71         148      14         90.7    8.6     0.6     0.1
              Jun           Night  5.24         –        5,072      95.3    4.4     0.2     0.1
NOAA-19       Dec           Day    5.08         11,919   36         88.9    10.2    0.6     0.3
              Dec           Night  4.99         –        298        95.4    4.4     0.3     0.0
1 Aerosol contamination calculated for Saharan dust events in an area bounded by 10°S–30°N, 25°E–55°W
2 AMSR-E not partitioned into day/night retrievals in June. AMSR-E data missing 16–17 June 06Z, 18 June 00–12Z, 20 June, 23 June, 25–26 June, 28–30 June, 29 Dec 12–24Z
3 AATSR data missing 16 June 00–06Z, 20 June 00–06Z, 26 June 00–18Z, 28 June 00–12Z, 29 June 12–18Z, 8 Dec 00–06Z, 24 Dec 06–12Z
4 METOP LAC data missing 19 Dec 00–12Z; 27 Dec 18–24Z
5 MSG data missing 6 June 12–18Z, 13 June 00–06Z, 15 June 00–06Z, 16–18 June, 20–21 June 00Z, 22 June 06–12Z, 23–24 June 00Z, 25–30 June, 15 Dec 06–12Z
Table 4.2 Real-time QC outcomes for satellite sea ice retrievals

Satellite^1  Month (2009)  Count ×10^6  Weather Filter^2  p≤0.67  p≤0.95  p≤0.99  p≤1.0
F13^3        Jun           5.23         570               97.1    2.1     0.5     0.3
             Dec           –            –                 –       –       –       –
F15          Jun           10.65        2,777             96.2    2.6     0.7     0.5
             Dec           11.63        1,048             94.5    3.6     1.1     0.8
F16          Jun           16.78        17,070            96.5    2.4     0.6     0.5
             Dec           18.32        3,478             95.3    3.3     0.9     0.6
F17          Jun           16.64        13,687            97.1    2.1     0.4     0.3
             Dec           18.87        3,652             95.5    3.2     0.8     0.5
Shelf Ice    Jun           0.65         –                 77.6    10.4    5.8     6.1
             Dec           0.44         –                 74.7    16.7    5.7     2.9
1 F13 and F15 are SSM/I satellites; F16 and F17 are SSMI/S satellites
2 Weather filter based on collocated analyzed SST values (see text for details)
3 F13 data use discontinued in December
Sea ice concentration retrievals from the SSM/I and SSMIS satellites are also of good quality: ~99% of the data fall within two standard deviations of the background field (Table 4.2). The number of sea ice retrievals rejected by the weather filter based on collocated SST shows a clear seasonality, with many more weather filter rejections in June than in December. Altimeter sea surface height (SSH) observations are also of good quality, with ~99% of the data within two standard deviations (Table 4.3). Altimeter significant wave height (SWH) observations appear to be of lower quality, but SWH rejections are mostly over land or ice covered seas (defined here as 33% sea ice concentration). Quality control of altimeter SWH retrievals is model based in the Navy system. A 6-hour forecast from a data assimilative run of the wave model is used to check newly received altimeter and buoy SWH observations for consistency, ensuring that the valid time of the forecast corresponds closely to the observed times of the data. Table 4.4 gives QC outcomes for in situ SST observations from ships and buoys. Ship data are of lower quality than buoy data, with about 8% of the ship data being rejected across the different ship data types. Drifting buoy data are of higher quality than fixed buoys, with fixed buoy data showing increased variability as indicated by the large percentage of data in the probability range of 0.67–0.95. Profile data QC is summarized in Table 4.5. Recall that profile levels with density inversions or vertical gradient information-only flags do not affect use of those data in the assimilation. The large number of TESAC data is a result of fixed buoys reporting both temperature and salinity using the WMO TESAC code form. These data report only a single or very few vertical levels and are of low quality, with less than 75% of the data occurring within two standard deviations of the background field. XBT observations have large occurrences of vertical gradient and instrumentation errors, which are probably due to inflexion point decimation of the profiles done prior to posting the data on the GTS.
Table 4.3 Real-time QC outcomes for satellite altimeter retrievals

Satellite^1  Type  Month   Count  Ice      Shallow  Zero     Land    p≤0.67  p≤0.95  p≤0.99  p≤1.0
                   (2009)  ×10^6  Covered  Water    Value^2  Area
ENVISAT      SSH   Jun     1.32   –        –        –        –       95.7    4.1     0.2     0.0
             SSH   Dec     1.39   –        –        –        –       95.9    3.9     0.1     0.0
             SWH   Jun     0.87   106,945  1,483    –        12,631  70.1    3.3     0.1     26.5
             SWH   Dec     1.36   48,265   1,589    8,931    25,094  68.7    3.0     0.1     28.2
Jason 1      SSH   Jun     1.49   –        –        –        –       86.7    12.5    0.8     0.1
             SSH   Dec     1.63   –        –        –        –       87.9    11.3    0.7     0.1
             SWH   Jun     1.48   73,119   68       –        21,664  80.5    10.8    1.2     7.5
             SWH   Dec     2.02   12,971   4        27,754   38,751  84.5    6.6     0.5     8.4
Jason 2      SSH   Jun     1.55   –        –        –        –       88.3    11.1    0.6     0.0
             SSH   Dec     1.66   –        –        –        –       89.1    10.3    0.6     0.0
             SWH   Jun     1.48   95,920   1,960    –        21,286  65.3    3.2     0.1     31.4
             SWH   Dec     2.30   4,735    1,816    27,767   34,629  67.6    2.9     0.2     29.3
1 SWH observations not available 1–10 June
2 Zero values are SWH retrievals reported as exactly zero
Table 4.4 Real-time QC outcomes for in situ surface temperature observations in 2009

Type               Count ×10^3  p≤0.67  p≤0.95  p≤0.99  p≤1.0
Ship ERI           210.3        55.5    27.0    9.0     8.5
Ship Bucket        32.1         47.2    31.4    12.6    8.8
Ship Hull Contact  309.2        53.6    28.5    10.2    7.7
CMAN Station       23.6         72.1    20.6    5.0     2.2
Fixed Buoy         2,657.3      83.5    13.3    2.6     0.7
Drifting Buoy      10,624.1     92.3    5.8     0.9     1.0
Table 4.5 Real-time QC outcomes for profile observations in 2009

Type           Count^1  Depth    Density  Vertical  Inst.      Missing    p≤0.67  p≤0.95  p≤0.99  p≤1.0
               ×10^3    Error^2  Inv.^2   Grad.^2   Error^2,3  Value^2,4
XBT            18.9     –        12,722   52,301    674        26         75.9    16.2    1.6     6.3
Fixed Buoy     502.5    19,000   3,922    –         –          1,163      81.3    16.3    1.5     0.9
Drifting Buoy  31.7     207      5,743    6,374     –          –          84.3    8.1     1.9     5.7
TESAC          1,332.4  1,382    2,165    1,706     551        222        44.0    29.3    10.0    16.7
Argo           148.2    9,028    8,801    6,669     4,628      7,158      77.9    18.3    1.7     2.1
1 Counts are number of profiles
2 Counts are number of profile levels affected
3 Instrumentation error includes wire stretch, wire breaking, invalid upper ocean temperature response, profile spikes
4 Counts refer to missing temperature levels only
Argo data are of high quality, with more than 96% of the profiles accepted into the analysis. However, Argo profiles show a relatively high occurrence of depth errors (duplicate depths or depths not strictly increasing) and missing value errors (defined here in terms of temperature) that need to be investigated.
4.6 Internal Data Checks

Internal checks are those quality control procedures performed by the analysis system itself. These data consistency checks are best done within the assimilation algorithm since they require detailed knowledge of the background and observation error covariances, which are available only when the assimilation is being performed. The internal data checks are the last defense of the assimilation algorithm against bad observations. Data that contain gross and random errors should already have been removed prior to the assimilation by the sensibility and external data checks. The purpose of the internal data checks is to decide whether any marginal observations remaining in the assimilation data set are acceptable or unacceptable.
The need for quality control at this stage of the analysis/forecast system cannot be overemphasized. Any assimilation system based on the assumption of normality, no matter how sophisticated, is vulnerable to bad observations that do not fit a normal distribution. Further, since many GODAE forecasting systems use a sequential analysis-forecast cycle, it is difficult to remove the propagation of error through the forecast period that occurs when erroneous data have been assimilated. Once this happens the only option is to blacklist the bad observations and back up and rerun the analysis-forecast cycle. This remedy will cause a delay in the production of the forecast, which can be a serious problem in operations since the forecast products are time critical. The internal consistency checks are quite different from the cross validation procedure described in Sect. 4. In particular, each observation is compared with the entire set of observations used in the assimilation, not just nearby observations. A metric is devised to test whether observation innovations are likely or unlikely with respect to other observations and the specified background and observation error statistics. Once the decision to reject an observation is made in the internal data check it is necessary to intervene in the assimilation process to ensure that the rejected observation has no effect on the analysis. Typically, internal data checks are performed in variational analysis schemes, where the solution is obtained using iterative methods that can be interrupted and started up again. The internal data checks described below were developed for the Navy Atmospheric Variational Data Assimilation System (NAVDAS), described in Daley and Barker (2001). These checks have also been implemented in the Navy Coupled Ocean Data Assimilation (NCODA) system (Cummings 2005), which has recently been updated to a 3D variational analysis based on NAVDAS. The discussion below is adapted from Daley and Barker (2001, Chap. 9.3). In an observation-based analysis system the analyzed increments (or correction vector) are computed according to
(xa − xb) = BH^T (HBH^T + R)^(−1) [y − H(xb)]    (4.8)
where xa is the analysis and xb is the forecast model background. On the right hand side of Eq. (4.8), B is the background error covariance, H is the forward operator, R is the observation error covariance, y is the observation vector, and T indicates the matrix transpose. The observation vector contains all of the synoptic temperature, salinity and velocity observations that are within the geographic and time domains of the forecast model grid and update cycle. When the analysis variable and the model prognostic variable are the same type, the forward operator H is simply spatial interpolation of the forecast model grid to the observation location performed in three dimensions. Thus, HBH^T is approximated directly by the background error correlation between observation locations, and BH^T directly by the error correlation between observation and grid locations. The quantity [y − H(xb)] is referred to as the innovation vector (the model-data misfits at the observation locations). The first part of the internal data check uses a tolerance limit. Denote A = HBH^T + R, the symmetric positive definite observation matrix of Eq. (4.8), and define Â = diag(A).
Then, define the observation vector d̂ = Â^(−1/2)[y − H(xb)]. The elements of d̂ are the normalized innovations and should be distributed (over many realizations) in a normal distribution with a standard deviation equal to 1.0 if the background and observation error covariances have been specified correctly. Assuming this to be the case, tolerance limits (TL) are defined. Since B and R are never perfectly known, it is best to use a relatively high tolerance limit (say, TL = 4.0) in operations. The test statistic is designed to identify a marginally acceptable observation if its element of d̂ is larger than the specified tolerance limit. The second part of the internal data check is a consistency check. It compares marginally acceptable observations with every other observation. The procedure is a logical extension of the tolerance limit check described above. Define the vector d* = A^(−1/2)[y − H(xb)]. The elements of d* are, like those of d̂, normally distributed dimensionless quantities. However, because d* involves the full covariance matrix A, it includes correlations between all of the observations. By comparing the vectors d̂ and d* it can be shown which marginally acceptable observations are inconsistent with other observations and can therefore be rejected. The d* metric should increase (decrease) with respect to d̂ when that observation is inconsistent (consistent) with other observations, as specified by the background and observation error statistics. The internal data check is illustrated using the example given in Table 4.6 for 3 hypothetical observations considered marginally acceptable on the basis of a prescribed tolerance limit (d̂) check value of 3.0 (Daley and Barker 2001). The d* metric for the first observation is reduced when additional, correlated (ρ = 0.8) observations more accurate than the background (ε₀ = 0.1) are considered. In this case, the suspect observation, rejected individually on the basis of the tolerance limit check, is now determined to be consistent and is retained in the analysis (d* = 1.9). However, if the additional data are uncorrelated (ρ = −0.4) while also being accurate (ε₀ = 0.1), then the results indicate the suspect observation is much more unlikely than indicated by the tolerance limit check and should be rejected (d* = 5.8). Inaccurate observations relative to the background (ε₀ = 2.0) show less sensitivity to correlations among observations but still give the same direction of change (d* vs. d̂) as the accurate observations. There are difficulties applying the consistency data check in practice since it requires calculating the entire A^(−1/2) matrix, which is prohibitive for very large problems. Fortunately, there are some good approximations to this calculation that can be used (Daley and Barker 2001). However, other implementation issues remain.

Table 4.6 Hypothetical test case for internal consistency check (from Daley and Barker 2001)
d̂₁ = d̂₂ = d̂₃ = 3.0    |d₁*|
            ρ = 0.8    ρ = −0.4
ε₀ = 0.1    1.9        5.8
ε₀ = 2.0    2.4        3.5
d̂, d* defined in text
ρ   correlation between observations
ε₀  observation error normalized by the background error
To reject an observation, a large constant is added to the appropriate diagonal element of the HBH^T + R matrix. This modifies the matrix in such a way as to effectively prevent the rejected observation from affecting the analysis. However, if this operation is done during the descent iteration then the modified matrix is no longer consistent with the other vectors that have been evolving as part of the conjugate gradient solution. The descent can be restarted (very expensive) or the conjugate gradient solution vectors can be suitably altered to allow the descent to continue. In either case the tolerance limit and internal consistency checks can be applied multiple times during the descent as the solution resolves more and more of the observation innovations. As discussed in Daley and Barker (2001), modifications to this procedure can be made for extreme events when the specified background error statistics are likely to be incorrect. Typically, error statistics in the assimilation are produced by averaging time series of innovations and forecast differences and reflect average, rather than extreme, conditions over the model domain. When changes are occurring in the ocean (such as an eddy shedding or frontal meander event) the background errors are likely to be larger than normal. In this case, a tolerance limit specified too low could reject good (and very important) data. One option for dealing with this is to make one pass through the tolerance limit check and compute the mode of the d̂ values over some limited subareas of the analysis domain. The mode is a better statistic here because it is less susceptible to outliers than the mean. If the subarea mode is much greater than one, then it can be concluded that there are serious discrepancies between the observations and the background in that area. In such a case, to avoid spuriously rejecting good data, the subarea tolerance limit should be increased beyond the prescribed value.
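A sketch of the first (tolerance limit) part of the internal check, and of one possible way to inflate the tolerance from the subarea mode of the normalized innovations, is given below. The mode-based scaling is an illustrative choice only; the text says merely that the subarea tolerance limit should be increased, and the full consistency check involving A^(−1/2) is omitted because of its cost.

import numpy as np

def tolerance_check(innov, HBHt, R, tol=4.0):
    """Normalize innovations by the diagonal of A = HBH^T + R and flag
    marginally acceptable observations (first part of the internal check)."""
    A = HBHt + R
    d_hat = innov / np.sqrt(np.diag(A))          # elements of A_hat^(-1/2) [y - H(xb)]
    marginal = np.abs(d_hat) > tol
    return d_hat, marginal

def subarea_tolerance(d_hat, base_tol=4.0):
    """Inflate the tolerance limit when the subarea mode of |d_hat| suggests the
    background error statistics are too small (e.g. during an eddy-shedding event)."""
    hist, edges = np.histogram(np.abs(d_hat), bins=20)
    mode = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])
    return base_tol * max(1.0, mode)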
4.7 Adjoint Sensitivities

Adjoint-based observation sensitivity, initially developed in Numerical Weather Prediction as an observation-targeting tool, provides a feasible (all at once) approach to estimating observation impact for a large variety of datasets and individual observations. Observation impact is calculated in a two-step process that involves the adjoint of the forecast model and the adjoint of the assimilation system. First, a cost function (J) is defined that is a scalar measure of some aspect of the forecast error. The forecast model adjoint is used to calculate the gradient of the cost function with respect to the forecast initial conditions (∂J/∂xa). The second step is to extend the initial condition sensitivity gradient from model space to observation space using the adjoint of the assimilation procedure (∂J/∂y = K^T ∂J/∂xa), where K = BH^T [HBH^T + R]^(−1) is the Kalman gain matrix of Eq. (4.8). The adjoint of K is given by K^T = [HBH^T + R]^(−1) HB. The only difference between the forward and adjoint of the analysis system is in the post-multiplication when going from the solution in observation space to grid space. The solver (HBH^T + R) is symmetric, or self-adjoint, and operates the same way in the forward and adjoint directions. Given
an analysis sensitivity vector, observation impact is obtained as a scalar product of the observed model-data differences and the sensitivity of the forecast error to those differences. Observations will have the largest impact on reducing forecast error when the observation influences the initial conditions in a dynamically sensitive area. It is not necessary for the observation to produce a large change (i.e., innovation) to the initial conditions for it to have a large forecast impact (Baker and Daley 2000; Langland and Baker 2004). If the assimilation of an observation has made the forecast issued from the analyzed state more accurate than a forecast valid at the same time but issued from a prior state, then the observation is considered to have a beneficial, positive impact. All assimilated observations are expected to have beneficial impacts on correcting the initial conditions and thereby improving the forecast issued from the analysis. However, if consistent non-beneficial impacts are found for a particular data type or observing system, then that may indicate data quality control issues, such as subtle instrument drift or calibration problems that otherwise are difficult to assess when considering the data in isolation. Thus, the adjoint-based data impact procedure is an effective tool to provide quantitative diagnostics of ocean data quality. The use of adjoint sensitivities in ocean data assimilation and ocean data quality control is still an active area of research and development.
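A sketch of the two matrix operations involved in mapping a forecast-error sensitivity gradient to observation space and forming the impact scalar product is given below. Dense matrices are assumed for clarity, and the sign convention for a "beneficial" impact depends on how the cost function is defined.

import numpy as np

def observation_impact(innov, HBHt, R, HB, dJ_dxa):
    """Adjoint-based observation impact: project the initial-condition sensitivity
    into observation space with K^T = (HBH^T + R)^(-1) HB and take the scalar
    product with the innovations."""
    A = HBHt + R
    dJ_dy = np.linalg.solve(A, HB @ dJ_dxa)   # K^T dJ/dx_a without forming A^(-1)
    return innov @ dJ_dy                      # sign convention depends on the cost function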
4.8 Summary and Conclusions

Effective ocean data quality control is a difficult problem. Observations are imperfect and prone to error. Data with errors that are not described by the assimilation system through the error covariance matrices need to be eliminated prior to the analysis. Effective quality control, therefore, requires a set of pre-established, standardized test procedures, with results of the procedures clearly associated with the data values. Effectiveness in turn depends on the reliability of the standard(s) and on the choices made for measuring goodness of fit. The need for observation quality control depends on the use being made of the observations. Users of quality controlled data sets have a wide range of views on the most appropriate standards and on the appropriate "tightness of fit" demanded by the quality control procedures (too tight increases the chance of erroneously rejecting anomalous features; too loose increases the chance of accepting bad data). Indicators of data quality must be useful for determining if the quality controlled observations are appropriate for a particular purpose. In this paper, observation quality control is performed as a prelude to assimilation of the observations in an ocean forecast system. Using this definition, the best ocean data quality scheme is that which leads to the best ocean forecast. It is surprisingly difficult to demonstrate consistent impact from the quality control of individual observations in an analysis/forecast system. Quality control, however, is very important in data monitoring: collection of statistics on the performance
of observing systems; detection of observing systems that are not performing as expected; and feedback to the data providers so that deficiencies are corrected. An integrated, end-to-end quality control system, therefore, must ensure that results of the quality control procedures are recorded for independent analysis and later use. If the quality control is carried out well, then it can reduce the duplication of effort among the users of ocean data, and value added is not lost or misinterpreted. At a minimum, a comprehensive database of raw and processed observed values, independent estimates of the same quantities, and quality control outcomes is needed. The database would be used to look for "unexpected" behavior in observing systems, and allow users and operators of quality control systems to identify systematic problems in order to get errors in the data collection or data transmission corrected. At present, there are few agreed-upon standards for real-time ocean data quality control and very few cases where the procedures and results from the oceanographic centers have been compared. As the GODAE operational oceanographic community continues to develop a range of complex ocean analysis and prediction systems, it is important that procedures be developed for routinely assessing the effectiveness of ocean data quality control and for routinely exchanging statistics from the quality control processes at the operational centers. A start on this process has been made with the GODAE QC intercomparison project (Smith 2003; Cummings et al. 2009), which initially is focusing on profile data types. The fully automated ocean data quality control procedures described in this paper are limited to observation data types that are routinely assimilated in ocean forecast models. New ocean observing systems continue to be deployed and new failure modes of existing observing systems continue to be identified. Examples of new observing systems include HF coastal radars and microwave measurements of sea surface salinity from space. Examples of new instrument failure modes are the pressure and salinity sensor issues associated with the long-term, autonomous deployments of the Argo profiling floats. New observation error models need to be developed for the automated quality control of new data types, and existing error models need to be updated to detect, and correct, new instrument failure modes. The validity of existing and new automated quality control procedures must be continually confirmed by formal statistical tests and by examining differences between automated and delayed-mode quality control outcomes for the same observation. The automated quality control system can be considered to have performed well if decisions made on observations in real-time are consistent with decisions made to modify or reject the same observations in delayed mode, where more rigorous scientific and expert manual intervention quality control methods are possible. Delayed-mode quality control outcomes of the Argo profiling float array are readily available and can be used in this evaluation. This activity is an integral component of the GODAE QC intercomparison project, which includes participation from the following operational centers: the Bureau of Meteorology in Australia, the Coriolis Data Center in France, the Integrated Science Data Management Branch in Canada, the Fleet Numerical Meteorology and Oceanography Center in the U.S.A., and the Met Office in the U.K.
Acknowledgements  This work was funded by the National Ocean Partnership Program (NOPP) project, US GODAE: Global-Ocean Prediction with the Hybrid Coordinate Ocean Model, and by the Naval Research Laboratory 6.2 project, Observation Impact Using a Variational Adjoint System. The Program Executive Office for C4I and Space PMW-180 provided additional funding as part of the 6.4 project Ocean Data Assimilation for the Coupled Ocean Atmosphere Mesoscale Prediction System. I acknowledge Mark Ignaszewski from the Fleet Numerical Meteorology and Oceanography Center in Monterey, CA, and Krzysztof Sarnowski from the Naval Oceanographic Office in Stennis Space Center, MS, for their continuing assistance and support in the transition and maintenance of the Navy Coupled Ocean Data Assimilation Quality Control (NCODA_QC) system at the U.S. Navy operational centers.
References

Bailey R, Gronell A, Phillips H, Tanner E, Meyers G (1994) Quality control cookbook for XBT data. CSIRO Marine Laboratories Report 221. http://www.medssdmm.dfo-mpo.gc.ca/meds/Prog_Int/GTSPP/QC_e.htm
Baker NL, Daley R (2000) Observation and background adjoint sensitivity in the adaptive observation targeting problem. Q J Roy Meteor Soc 126:1431–1454
Boyer T, Levitus S (1994) Quality control and processing of historical oceanographic temperature, salinity, and oxygen data. NOAA Technical Report NESDIS 81, p 65
Corlett GK, Barton IJ, Donlon CJ, Edwards MC, Good SA, Horrocks LA, Llewellyn-Jones DT, Merchant CJ, Minnett PJ, Nightingale TJ, Noyes EJ, O'Carroll AG, Remedios JJ, Robinson IS, Saunders RW, Watts JG (2006) The accuracy of SST retrievals from AATSR: an initial assessment through geophysical validation against in situ radiometers, buoys and other SST data sets. Adv Space Res 37(4):764–769
Cummings JA (2005) Operational multivariate ocean data assimilation. Q J Roy Meteor Soc 131:3583–3604
Cummings JA, Brassington G, Keeley R, Martin M, Carval T (2009) GODAE ocean data quality control intercomparison project. In: Proceedings of OceanObs'09, Venice, Italy, p 5
Daley R, Barker E (2001) The NAVDAS sourcebook 2001. Naval Research Laboratory NRL/PU/7530-01-441, Monterey, p 160
Donlon C, Minnett P, Gentemann C, Nightingale TJ, Barton I, Ward B, Murray M (2002) Toward improved validation of satellite sea surface skin temperature measurements for climate research. J Clim 15:353–369
Donlon CJ, Robinson I, Casey KS, Vazquez-Cuervo J, Armstrong E, Arino O, Gentemann C, May D, LeBorgne P, Piollé, Barton I, Beggs H, Poulter DJS, Merchant CJ, Bingham A, Heinz S, Harris A, Wick G, Emery B, Minnett P, Evans R, Llewellyn-Jones D, Mutlow C, Reynolds R, Kawamura H, Rayner N (2007) The global ocean data assimilation experiment (GODAE) high resolution sea surface temperature pilot project (GHRSST-PP). Bull Am Meteorol Soc 88(8):1197–1213
Langland RH, Baker NL (2004) Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus 56A:189–201
May D, Osterman WO (1998) Satellite-derived sea surface temperatures: evaluation of GOES-8 and GOES-9 multispectral imager retrieval accuracy. J Atmos Oceanic Technol 15:788–834
May D, Parmeter MM, Olszewski DS, McKenzie BD (1998) Operational processing of satellite sea surface temperature retrievals at the Naval Oceanographic Office. Bull Am Meteorol Soc 79:397–407
Merchant CJ, Embury O, Le Borgne P, Bellec B (2006) Saharan dust in nighttime thermal imagery: detection and reduction of related biases in retrieved sea surface temperature. Rem Sens Env 104(1):15–30
Merchant CJ, Le Borgne P, Marsouin A, Roquet H (2008) Optimal estimation of sea surface temperature from split-window observations. Rem Sens Env 112(5):2469–2484
Merchant CJ, Le Borgne P, Roquet H, Marsouin A (2009) Sea surface temperature from a geostationary satellite by optimal estimation. Rem Sens Env 113(2):445–457
Smith N (2003) Sixth session of the global ocean observing system steering committee (GSC-VI): GODAE report. IOC-WMO-UNEP/I-GOOS-VI/17
Chapter 5
Observing System Design and Assessment
Peter R. Oke and Terence J. O'Kane
Abstract  The use of models and data assimilation tools to aid the design and assessment of ocean observing systems is increasing. The most commonly used techniques for evaluating the relative importance of observations are Observing System Experiments (OSEs) and Observing System Simulation Experiments (OSSEs). OSEs are useful for looking back, to evaluate the relative importance of existing or past observational components, while OSSEs are useful for looking forward, to evaluate the potential impact of future observational components. Other methods are useful for looking at the present, and are therefore most useful for adaptive sampling programs. These include analysis self-sensitivities, and a range of ensemble-based and adjoint-based techniques, including breeding, adjoint sensitivity, and singular vectors. In this chapter, the concepts for observing system design and assessment are introduced. A variety of different methods are then described, including examples of oceanographic applications of each method.
5.1 Introduction

The use of models and data assimilation tools to aid the design of observing systems has a long history in numerical weather prediction (NWP; e.g., Kuo et al. 1998; Bishop et al. 2001, 2003) and is gaining momentum in the ocean modelling community (e.g., Oke et al. 2009). Methods for observing system design and assessment range from basic analysis of models to assess de-correlation length- and time-scales, signal-to-noise ratios, and covariance of different variables and different regions. Classical model-based approaches to observing system design and assessment involve observing system simulation experiments (OSSEs) and observing system experiments (OSEs). More sophisticated methods have emerged as a

P. R. Oke ()
CAWCR, CSIRO Marine and Atmospheric Research and Wealth from Oceans National Research Flagship, Hobart, TAS 7001, Australia
e-mail:
[email protected]
A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_5, © Springer Science+Business Media B.V. 2011
result of advances in data assimilation methodology, and there are now a suite of ensemble-based and adjoint-based techniques for designing observing systems and evaluating the impact of observations on assimilating models. Observing system design and assessment has a long history in NWP. Most NWP applications relate to adaptive sampling. Adaptive sampling is the problem of identifying where additional observations should be made to better initialise a forecast. Typical examples of adaptive sampling programs in NWP relate to the prediction of extreme weather, like hurricanes (e.g., Gelaro et al. 1999). The idea is that if additional observations are made where an instability is developing, or is likely to develop, then those observations can be used to better initialise an NWP forecast and therefore improve the skill of that forecast. Adaptive sampling in NWP arguably began in 1947, when the hurricane reconnaissance program was established to observe the location and intensity of hurricanes. In 1982, NOAA's Hurricane Research Division began research flights around hurricanes to improve the initialisation of NWP forecasts. They found that the error in the forecast tracks of hurricanes reduced by 25% as a direct result of their adaptive sampling program. In 2003, the World Meteorological Organisation (WMO) initiated a program called THe Observing system Research and Predictability EXperiment (THORPEX). THORPEX was established with the intent to improve the accuracy of NWP forecasts of high-impact weather. Within THORPEX, the data assimilation and observation strategy working group was established to assess the impact of observations and various targeting methods to provide guidance for observation campaigns and for the configuration of the global observing system. For an excellent summary of THORPEX activities and results, the reader is referred to Rabier et al. (2008). Ocean data assimilation capabilities have progressed rapidly since the beginning of the Global Ocean Data Assimilation Experiment (GODAE; www.godae.org/). A suite of analysis and forecast systems are now used routinely for operational and research applications. All GODAE forecast and analysis systems are underpinned by the Global Ocean Observing System (GOOS; www.ioc-goos.org) that is comprised of satellite altimetry, satellite sea surface temperature (SST) programs, delivered through the GODAE High Resolution SST effort (GHRSST; www.ghrsst-pp.org), and in situ measurements from the Argo program (Argo Science Team 1998), the tropical moored buoy (McPhaden et al. 1998), surface drifting buoy (www.aoml.noaa.gov/phod/dac), expendable bathythermograph (XBT; www.jcommops.org/soopip/; www.hrx.ucsd.edu) and tide gauge networks. Each of these observation programs is expensive and requires a significant international effort to implement, maintain, process, and disseminate. Careful design and assessment of the GOOS is therefore warranted. Observing system design and assessment activities in the oceanographic community are becoming more common. One of the key challenges for the oceanographic community is to adequately combine the efforts of researchers operating in the climate domain, under CLIVAR (www.clivar.com; Heimbach et al. 2010), and those operating in the short-term forecasting domain, under GODAE (www.godae.org/OSSE-OSE-home.html; Oke et al. 2009, 2010). CLIVAR activities tend to focus on climate monitoring and ocean state estimation, while GODAE activities tend to
focus on mesoscale variability and short-range forecasting. Observational requirements for these different applications are likely to be quite different. In this chapter, the concepts of observing system design and assessment are introduced, followed by a description of commonly used methods. The description of each method is intended to be practical, with less focus on theory and more focus on how things are actually done. For each method that is discussed, an oceanographic example is included, where possible. The chapter concludes with a short summary.
5.2 Concepts for Observing System Design and Assessment

Before undertaking any activity that relates to observing system design and assessment, there are several key questions that need to be addressed. These questions relate to the motivation for establishing an observing system, practical limitations, and how the observations will be used.

The motivation for establishing an observing system is obviously important. What is it that the observing system is intended to monitor? This might be, for example, the heat content in a specific region, the volume transport of a current system, the variability of the thermocline depth, and so on. An observing system that is optimised to monitor a specific aspect of the ocean circulation is unlikely to be optimal for monitoring all other aspects of the circulation. For example, an observing system that is optimised for initialising a seasonal forecast system that seeks to predict the onset of El Nino will resolve dynamical features that vary on the time-scales of El Nino, like tropical instability waves, and is likely to be quite different to an observing system that is optimised to constrain an eddy-resolving ocean model that will resolve dynamical features that vary on shorter time-scales. So, the motivation for the observing system should be clear; and where the intended use of the observing system is broad, the optimisation strategy should attempt to reflect this as much as possible.

An understanding of what observations are feasible is also important. This is likely to be dictated by budget, technology, and convenience. Deployment and maintenance of observations are usually expensive, so a well-designed array that is easily deployed and maintained (e.g., with moorings along shipping lanes) may be essential. The budget may provide guidance on the number and types of instruments that can be considered (e.g., number and type of moorings, gliders, Argo floats, drifting buoys, etc.). Many studies begin with a specification that, for example, the observation array may consist of up to 10 moorings that each measure temperature and velocity between the surface and 300 m depth, and ask the question: where should those moorings be deployed?

The question of how the observations will be used is difficult, because in most cases there are likely to be multiple users, each processing the observations using different methods. For example, observations might be assimilated into a number of models using different assimilation methods; or observations might be gridded using a variety of techniques. It is typical to assume that a specific analysis or assimilation system will be used to objectively map the observations. In this case,
it is important to be clear about the characteristics and limitations of the particular analysis tool of choice. A better strategy is to use a multi-system (e.g., multi-model) approach, where several systems are used to evaluate different observation arrays. This is the aspiration of many of the activities under GODAE OceanView (see www.godae.org/OSSE-OSE-home.html).

The density of observations required to monitor a given process is largely dictated by the de-correlation length-scales of the fields that are to be observed. This characteristic determines how far apart observations can be made before important features are missed. Similarly, de-correlation time-scales determine how frequently observations should be made. The use of models to determine length- and time-scales is often fraught with difficulty, because sub-grid-scale parameterisations within models largely determine these scales, and those parameterisations are generally inaccurate and are sensitive to many subjective choices made by the model developers (e.g., O'Kane and Frederiksen 2008a).

Some more subtle characteristics also become important for the design of observing systems. The co-variability of the ocean is critical. Are there locations or quantities that are particularly indicative of the entire system that is to be observed? That is, is there a specific location that is the pulse of the region of interest? The Southern Oscillation Index (SOI) is a good example of this. The SOI is calculated from variations in the air pressure difference between Tahiti and Darwin. Periods of sustained negative SOI usually correspond to El Nino events that are characterised by warming in the central tropical Pacific Ocean, a decrease in the trade winds, and reduced rainfall over much of Australia. An example of how a model can be used to identify the pulse of the ocean is presented in Fig. 5.1, which shows two examples of correlation fields from an ensemble-based data assimilation system (Sakov and Oke 2008).
Fig. 5.1 Examples of the ensemble-based correlation between sea-level at a reference location, denoted by the star, and sea-level in the surrounding region. (Adapted from Sakov and Oke 2008)
Ensemble-based assimilation systems use an ensemble of anomalies (also called perturbations or modes) to implicitly represent the system's background error covariance. The background error covariance determines how an observation-model difference is projected onto the model state during the assimilation step. So the ensemble-based correlation (or covariance) between an observable variable at a reference location and the rest of the model state represents the effective footprint of an observation at that reference location. The examples presented in Fig. 5.1 show the ensemble-based correlation between sea-level at different reference locations and sea-level in the surrounding region. The regions where the amplitudes of these correlations are large correspond to regions where an observation from that reference location will have a significant impact.

The first example, shown in Fig. 5.1a, indicates that an observation in the eastern Indian Ocean, off Java, is well correlated with sea level along the coast and over a very broad region. The spatial structure of the correlation map shows a dipole structure that has been observed in several previous studies (Chambers et al. 1999; Feng et al. 2001; Wijffels and Meyers 2004; Rao and Behera 2005). Also, the footprint of the positively correlated region reflects Rossby–Kelvin wave patterns. This indicates that observations offshore of Indonesia are likely to be particularly useful for constraining a data assimilating model that uses an ensemble like that described by Sakov and Oke (2008). The second example, shown in Fig. 5.1b, indicates that sea level off Somalia is relatively uncorrelated with sea level across the tropical Indian Ocean. The region off Somalia is dominated by mesoscale variability that spawns from the energetic and highly variable boundary currents in this region. While the mesoscale variability in this region is well organized (Schott and McCreary 2001), it is apparently somewhat chaotic and characterized by short de-correlation length-scales. This suggests that, while many observations may be required in the northwest tropical Indian Ocean to adequately represent the variability there, an observation in this region will not impose a significant constraint on a data assimilating model that uses the ensemble described by Sakov and Oke (2008).

Like any optimisation problem, observing system design and assessment ultimately involves quantifying how good an observing system is. Consequently, the most important question for any observing system design or assessment activity is: what is it we seek to minimise? This is quantified by a cost function, metric, or diagnostic. The possible metrics that could be minimised are virtually unlimited. We might seek to minimise the analysis error variance of some quantity (e.g., temperature, salinity, velocity, thermocline depth) for some region (e.g., tropical Pacific Ocean, North Atlantic, etc.). We might seek to minimise the forecast error of some quantity in a given region. Or perhaps we seek to minimise the uncertainty of an integrated quantity, such as the transport through a strait. We may even wish to minimise several quantities (e.g., temperature and velocity error), which may require some sort of normalization, or weighting, that reflects the variance of different variables or their relative importance for a given application. In every case, we must define a cost function, or metric, that we seek to minimise.
The results will often depend heavily on this cost function (e.g., Sakov and Oke 2008).
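To make the footprint idea of Fig. 5.1 concrete, the short sketch below computes an ensemble-based correlation map between sea-level at a reference grid point and sea-level everywhere else. It is a minimal illustration only: the smooth Gaussian random fields stand in for a real ensemble of model anomalies, and the grid size, ensemble size, and reference location are arbitrary choices.

```python
import numpy as np

# Minimal sketch: ensemble-based correlation between sea-level at a
# reference location and sea-level everywhere else (cf. Fig. 5.1).
# The synthetic ensemble below is a stand-in for a real ensemble of
# model anomalies (e.g., intraseasonal anomalies from a reanalysis).

rng = np.random.default_rng(1)
n_ens, ny, nx = 120, 40, 60          # ensemble size and grid dimensions

# Build smooth synthetic anomalies by convolving white noise with a
# Gaussian kernel, so the field has a finite de-correlation length-scale.
def smooth(field, scale=4):
    x = np.arange(-3 * scale, 3 * scale + 1)
    kernel = np.exp(-0.5 * (x / scale) ** 2)
    kernel /= kernel.sum()
    field = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, field)
    field = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, field)
    return field

ensemble = np.array([smooth(rng.standard_normal((ny, nx))) for _ in range(n_ens)])

# Remove the ensemble mean so that members are anomalies.
anom = ensemble - ensemble.mean(axis=0)

# Correlation between the reference point and every other grid point.
j_ref, i_ref = 20, 30                 # reference ("observation") location
ref = anom[:, j_ref, i_ref]
cov = np.tensordot(ref, anom, axes=(0, 0)) / (n_ens - 1)
std = anom.std(axis=0, ddof=1)
corr = cov / (std * ref.std(ddof=1) + 1e-12)

print("correlation at the reference point:", corr[j_ref, i_ref])   # ~1 by construction
print("fraction of grid with |corr| > 0.5:", np.mean(np.abs(corr) > 0.5))
```

In a real application the same calculation would be applied to the stationary or time-evolving ensemble of the assimilation system, and the resulting maps interpreted as in Fig. 5.1.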
5.3 Methods and Examples

Commonly used techniques for evaluating the benefits of different observation types and arrays include Observing System Experiments (OSEs), Observing System Simulation Experiments (OSSEs), analysis self-sensitivities, ensemble-based methods, and adjoint-based methods. All of these methods require some form of data assimilation. Of these methods, OSEs, OSSEs and analysis self-sensitivities can all be applied regardless of the assimilation technique used. By contrast, ensemble-based and adjoint-based methods require specific tools for their application. Details of all of these methods, including examples, follow. Other methods that have also been applied to observing system design and assessment studies, but are not described in detail here, include genetic algorithms (Gallagher et al. 1991). Oceanographic applications of genetic algorithms include the optimisation of surface drifter deployments (Hernandez et al. 1995) and of acoustic tomography arrays (Barth 1992).
5.3.1 Observing System Experiments—OSEs

The most commonly used method for employing assimilating models to assess observing systems is OSEs. OSEs generally involve the systematic denial, or with-holding, of different observation types from a data assimilating model in order to assess the degradation in quality of a forecast or analysis when that observation type is not used. Importantly, the impact of each observation type may strongly depend on the details of the model into which it is assimilated, the method of assimilation, and the errors assumed at the assimilation step. It is therefore instructive to consider results from a range of different models and applications in an attempt to identify the robust results that are common to a number of different systems.

Results from OSEs can sometimes be difficult to interpret. Suppose four different observation types, from different platforms (e.g., Argo floats, satellite SST, altimetry, moorings), are typically assimilated. We might expect that there is some redundancy between these data types. For example, some of the information contained in an Argo profile is represented by altimetry (e.g., Guinehut et al. 2004). Similarly, some of the information in SST fields is also measured by Argo floats. If we with-hold Argo data from an OSE, we might expect altimetry and SST to become more important; so the true value of an observation, or observation type, is difficult to assess definitively with OSEs.

In some cases, subtle details of the model/assimilation system can complicate the interpretation of OSEs. For example, Vidard et al. (2007) report a case in which they with-held observations in the tropics. They found that with-holding this data degraded the circulation at high latitudes. This was puzzling. They traced this link back to the quality control system of their assimilation. An important step for any quality control system is a comparison with the model's background field. If observations differ significantly from the background field, they may be flagged as bad,
and automatically with-held from assimilation. Vidard et al. (2007) found that when observations in the tropics were with-held, the system's background field changed enough to influence the quality control system's decisions. This led to data at higher latitudes being flagged as bad, ultimately degrading the model fields at higher latitudes. Several other instances of quality control decisions influencing OSE results in similar ways have been reported in the literature (e.g., Bouttier and Kelly 2006; Tremolet 2008). Subtleties like these can, in some cases, make OSEs difficult to interpret.

OSEs are usually conducted for a past period of time—for example, the last 3 years, or the time period when four satellite altimeters were operating. While this is very instructive, the GOOS is constantly changing (e.g., Fig. 5.2). The number and distribution of Argo floats change as new floats are deployed and old floats expire. New altimeter missions are launched and old missions end, and the sampling strategies of different altimeter missions are often different.
Fig. 5.2 Observations during January of 2001, 2004, 2007, and 2010; green, blue, and yellow dots denote Argo floats, XBT/CTD profiles, and buoys, respectively. (Images sourced from www.coriolis.eu.org in February 2010)
This means that OSEs can become outdated. For example, using a seasonal forecast system, Vidard et al. (2007) and Balmaseda et al. (2007) perform a series of OSEs to evaluate the impact of Argo, XBT, and tropical moorings on forecast skill. Vidard et al. (2007) perform OSEs for the period 1993–2003 and Balmaseda et al. (2007) perform OSEs for the period 2001–2006. So, for most of Vidard et al.'s OSEs, Argo coverage is sparse, while for most of Balmaseda et al.'s OSEs, Argo coverage is substantial. As a result, Vidard et al. offer only faint praise for Argo, noting that it was probably too early to be sure of its impact. By contrast, Balmaseda et al. conclude that Argo is instrumental in initialising their forecast system—particularly for salinity.

Another limitation of OSEs is the significant computational and human resources required to undertake, analyse, and interpret them. Consider the study of Oke and Schiller (2007), for example. They conducted a series of 6-month model runs including an experiment with no assimilation, an experiment with all data assimilated, plus experiments with each observation type (Argo, SST, and altimeter) with-held. Additional experiments could include those with one, two, three, or four altimeters; experiments with different SST products assimilated; or experiments with only a sub-set of Argo profiles, for example every other Argo profile. Such a series of OSEs equates to a significant amount of computation, and a large amount of data that requires processing, analysis, and interpretation. This is not always achievable, especially when a high resolution model is used.

Evaluation of OSEs is always a challenge. For any series of OSEs, the best experiment, against which all others are typically compared, is always the run that assimilates all observations. Evaluation of this run is therefore problematic, as there is usually no independent set of observations that can be used to evaluate it.

An example of a series of OSEs, designed to evaluate the relative importance of altimetry, Argo, and SST for constraining an eddy-resolving ocean model, is described by Oke and Schiller (2007). Using a 1/10° resolution ocean general circulation model and an ensemble optimal interpolation data assimilation system (Oke et al. 2008), they systematically with-hold altimetry (denoted ALTIM), Argo, and SST from a reanalysis system for the period December 2005 to May 2006. The impact of with-holding each data type is illustrated in Fig. 5.3, showing the residuals between reanalysed sea-level anomaly (SLA) and along-track SLA. The residual maps quantify the difference between observed and reanalysed SLA for each OSE. Reanalysed SLA is compared to along-track SLA from all available altimeters (Jason, Envisat, and GFO). The results in Fig. 5.3 indicate that when only Argo and SST are assimilated, the SLA residuals are much smaller than for the OSE that assimilated no observations, denoted NONE in Fig. 5.3. This indicates that some of the information in altimetry is also represented by the SST and in situ T and S observations. This is expected, based on the well understood dynamical relationship between SLA and sub-surface T and S, but it also demonstrates the power of the multivariate EnOI scheme that is used by Oke and Schiller (2007). The SLA residuals are noticeably smaller when altimetry is assimilated, particularly in regions of energetic mesoscale variability like the Tasman Sea, along the path of the Antarctic Circumpolar Current, and off Western Australia, where the Leeuwin Current frequently sheds eddies (Fig. 5.3).
Fig. 5.3 Root-mean-squared residual (cm) between modelled and observed sea-level anomaly for different OSEs (panels: NONE, ALTIM + SST, Argo + SST, ALTIM + Argo, ALTIM + Argo + SST, and the observed standard deviation). (Adapted from Oke and Schiller 2007)
This suggests that while SST and Argo represent the broad-scale SLA features, they do not adequately resolve the details of the mesoscale.
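The bookkeeping behind such a series of OSEs is simple, even though each experiment is itself a full assimilating model run. The sketch below lays out the experiment matrix and the evaluation metric (RMS of the along-track SLA residuals) for the five OSEs of Fig. 5.3; the run_reanalysis function and the synthetic observations are placeholders for illustration, not the Bluelink system.

```python
import numpy as np

# Minimal sketch of how a series of OSEs is organised and evaluated
# (cf. Fig. 5.3). The reanalysis step is represented by a placeholder:
# in practice each entry would be a separate multi-month assimilating
# model run with the listed observation types with-held.

all_types = {"ALTIM", "Argo", "SST"}

# Each OSE is defined by the observation types that are assimilated.
oses = {
    "NONE":               set(),
    "Argo + SST":         {"Argo", "SST"},
    "ALTIM + SST":        {"ALTIM", "SST"},
    "ALTIM + Argo":       {"ALTIM", "Argo"},
    "ALTIM + Argo + SST": all_types,
}

rng = np.random.default_rng(0)
n_obs = 5000
observed_sla = 0.2 * rng.standard_normal(n_obs)      # along-track SLA (m)

def run_reanalysis(assimilated):
    """Placeholder for an assimilating model run: returns reanalysed SLA
    interpolated to the observation locations. Here, the residual simply
    shrinks as more observation types are assimilated."""
    err_std = 0.18 * (0.5 ** len(assimilated))
    return observed_sla + err_std * rng.standard_normal(n_obs)

print(f"{'OSE':<20s}{'with-held':<20s}RMS residual (cm)")
for name, assimilated in oses.items():
    withheld = ", ".join(sorted(all_types - assimilated)) or "none"
    residual = run_reanalysis(assimilated) - observed_sla
    rms = 100 * np.sqrt(np.mean(residual ** 2))
    print(f"{name:<20s}{withheld:<20s}{rms:5.1f}")
```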
5.3.2 Observing System Simulation Experiments—OSSEs

Another commonly used technique for evaluating the potential benefit of different observing systems is OSSEs. OSSEs often involve some sort of twin experiment, where synthetic observations, usually extracted from a model, are assimilated into an alternative model or gridded using an observation-based analysis system. OSSEs are commonly used to assess the impact of some hypothetical array of observations that may not exist yet. This means that these methods can be used to contribute to the design of future observing systems, quantifying their possible impacts and limitations. OSSEs have been employed to support the design of oceanic observing systems since before the altimeter era. For example, Berry and Marshall (1989) and Holland and Malanotte-Rizzoli (1989) performed OSSEs to support the assessment of designs for the early altimeter missions. Similarly, OSSEs were conducted to support the design and assessment of the TAO array in the tropical Pacific Ocean (e.g., Miller 1990) and the PIRATA array in the tropical Atlantic Ocean (e.g., Hackert et al. 1998).
Several good examples of OSSEs were conducted during the planning of the tropical Indian Ocean mooring array (CLIVAR-GOOS Indian Ocean Panel et al. 2006). These OSSEs were conducted by several different groups, using different models and different techniques. The results from these studies contributed to discussions during the planning of this mooring array. Vecchi and Harrison (2007) presented results from a series of OSSEs using a high resolution ocean model and an adjoint-based assimilation system to evaluate the ability of an integrated observing system, including Argo observations, XBT lines, and the proposed mooring array, to monitor intraseasonal and interannual variability. Ballabrera-Poy et al. (2007) used a reduced-order Kalman filter to objectively determine an array for mapping sea surface height and SST. Oke and Schiller (2007) used an approach based on empirical orthogonal functions (EOFs) to assess the proposed mooring array's ability to monitor intraseasonal and interannual variability.

Vecchi and Harrison (2007) concluded that, in conjunction with the integrated observing system, the proposed mooring array should be capable of resolving intraseasonal and interannual variability. Both Ballabrera-Poy et al. (2007) and Oke and Schiller (2007) argued that the proposed array may oversample the region within a few degrees of the equator. These studies also suggested that key regions for monitoring seasonal to interannual variability are south of 8°S, at about 4°–5° from the equator, and along the coast of Indonesia. These regions correspond to the locations of the maximum amplitude of seasonal Rossby waves (Masumoto and Meyers 1998; Schouten et al. 2002), equatorial Rossby waves, and strong Indian Ocean dipole events (Murtugudde et al. 2000), respectively.

An example of the above-mentioned OSSEs is presented in Fig. 5.4, showing the standard deviation of the depth of the 20°C isotherm (D20) from a model, along with the root-mean-squared error of D20 in two OSSEs. Each OSSE uses output from 18 years of a model run. The first 9 years are used to train the EOF-based analysis system that is described by Oke and Schiller (2007), and the last 9 years are used for cross-validation, and to evaluate how well different mooring arrays resolve variability of D20. For each OSSE, the last 9 years of the model run are sampled at mooring locations; those observations are perturbed with white noise according to their assumed errors; the observations are analysed; and the errors of the analysed D20 fields are assessed. Figure 5.4 indicates that the proposed array resolves the variability of D20 very well near the equator, where the root-mean-squared errors are small, but poorly south of 10°S, where the errors are relatively large.

An alternative mooring array is also tested by Oke and Schiller (2007). The alternative array is generated objectively, by maximising the projection of observations onto an ensemble that is used for assimilation. For a detailed description of the method, the reader is referred to Oke and Schiller (2007). The alternative array, presented in Fig. 5.4c, has fewer moorings close to the equator and in the northern Indian Ocean, and more moorings between 10°S and 15°S. Variability of D20 is still well resolved by the alternative array near the equator, and, owing to the additional moorings to the south, the variability of D20 is better resolved there.
The latitudes of high D20 variability to the south (10–15°S) correspond to the maximum amplitude of seasonal Rossby waves (Masumoto and Meyers 1998; Schouten et al. 2002). The study by Oke and Schiller (2007) concluded by suggesting that additional moorings at those latitudes are worth considering.
Fig. 5.4 a Standard deviation of the depth of the 20°C isotherm, and the root-mean-squared error for b the proposed Indian Ocean mooring array and c an optimised mooring array, for a series of OSSEs; contour intervals are 2.5 m. (Adapted from Oke and Schiller 2007)
OSSEs can be very instructive for assessing the potential impact of different observing systems. However, they have several limitations. It is probably fair to say that OSSEs in the form of twin experiments are doomed to succeed—particularly if the same model that is used to produce the synthetic observations is also used for
assimilation. In this case, the dynamics of the model and the observations are perfectly compatible. As a result, some OSSEs using twin experiments report very low errors in assimilating model runs. In some cases, the errors are so low, and the results so optimistic, that the conclusions of such studies must be regarded with suspicion.

The relevance of any series of OSSEs ultimately depends on the assumptions made in configuring the OSSEs. In all cases, assumptions are made about the dynamics and the data assimilation methodology. It is implicitly assumed that the models capture the dynamics correctly and that the observations are assimilated appropriately. Assumptions are also made about the observation errors and about model errors. In most cases, synthetic observations are corrupted by noise—and the noise is almost always assumed to be white in time and unbiased. It is also common to assume that there will be no data outages, and that all data are available at the time of assimilation. In the operational environment, this final assumption is rarely true.

Many OSSE studies employ methods that do not involve twin experiments. For example, Brassington and Divakaran (2009) analyse the theoretical impact of sea-surface salinity observations on an ensemble-based data assimilation system by examining various characteristics of the ensemble. Schiller et al. (2004) examine modelled fields to quantify the likely signal-to-noise ratios of different sampling strategies for the Argo program.

OSSEs can be a very instructive tool for evaluating the potential value of future observing systems. However, the assumptions made in OSSEs are often optimistic, so their results are often optimistic too—they should be regarded as indicative only, and perhaps qualitative in most cases.
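The essential steps of an OSSE built around an EOF-based analysis, as used in the Indian Ocean studies above, can be sketched in a few lines: train EOFs on one part of a model run, sample the withheld part at hypothetical mooring locations, perturb those samples with white observation noise, analyse, and score the reconstruction. The sketch below does this for a one-dimensional synthetic field; the simulated "model run", the mooring positions, and the noise level are all arbitrary stand-ins, and the analysis step is a simple least-squares fit rather than the actual system of Oke and Schiller (2007).

```python
import numpy as np

# Simplified OSSE twin-experiment sketch: a "training" period defines EOFs,
# a "validation" period is sampled at hypothetical mooring locations,
# perturbed with white noise, analysed, and scored. The synthetic model
# fields below are stand-ins for real model output (e.g., D20).

rng = np.random.default_rng(2)
nx, n_train, n_valid = 80, 400, 400            # grid points, time samples

# Synthetic "model run": three large-scale modes plus small-scale noise.
x = np.linspace(0, 1, nx)
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(3)])
def simulate(n):
    amps = rng.standard_normal((n, 3)) * np.array([3.0, 2.0, 1.0])
    return amps @ modes + 0.3 * rng.standard_normal((n, nx))

train, truth = simulate(n_train), simulate(n_valid)

# EOFs of the training period (leading rows of V^T from an SVD).
anom = train - train.mean(axis=0)
_, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt[:3]                                   # leading 3 EOFs, shape (3, nx)

# Hypothetical mooring array: sample the truth at a few locations and
# perturb with white observation noise.
obs_idx = np.array([10, 30, 50, 70])
obs_err = 0.3
obs = truth[:, obs_idx] + obs_err * rng.standard_normal((n_valid, obs_idx.size))

# Analysis: least-squares fit of the EOF amplitudes to the observations,
# then reconstruction of the full field.
H = eofs[:, obs_idx].T                          # (n_obs, n_modes)
amps, *_ = np.linalg.lstsq(H, obs.T, rcond=None)
analysis = amps.T @ eofs

rmse = np.sqrt(np.mean((analysis - truth) ** 2))
print(f"domain-averaged RMSE of the analysed field: {rmse:.2f}")
print(f"truth standard deviation for comparison:    {truth.std():.2f}")
```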
5.3.3 Analysis Self-Sensitivity

In general, regardless of the method, a data assimilation system combines a background field (in two, three, or four dimensions) with a set of observations, yielding an analysis. Different assimilation methods do this in different ways. But for all methods, there exists a so-called analysis self-sensitivity. The analysis self-sensitivity quantifies the importance of each individual observation for a given analysis.

Consider a couple of cases. Suppose we can change a given observation and the analysis does not change. In this case, we can say that the analysis is not sensitive to that observation, and conclude that it is unimportant. This may occur if the observation has a large error, or is in a region of dense observations—so it is redundant. Conversely, consider a case where a change to a given observation results in a significant change to the analysis. In this case, we can say that the analysis is sensitive to that observation, and conclude that it is important. This may occur if the observation is very accurate, or is in a data-sparse region.

The sensitivity referred to above is called the analysis self-sensitivity. In practice, self-sensitivities are diagnosed from the so-called influence matrix (Cardinali et al. 2004). The influence matrix is simply a subset of the Kalman gain, K. The Kalman gain is like a regression matrix, mapping each element of the background innovation (the difference between a background field and the observations) onto
the full model state. The influence matrix is simply HK, where H is an operator that interpolates from model-space to observation-space (often just linear interpolation). The matrix HK is square, with dimension p by p, where p is the number of observations assimilated. The diagonal elements of HK are the analysis self-sensitivities—they map the background innovation from the observation location back to itself. Cardinali et al. (2004) and Chapnik et al. (2006) provide a practical recipe for diagnosing analysis self-sensitivities from any assimilation system where explicit calculation of HK is not feasible—regardless of the assimilation method. Briefly, the method involves the following steps:

1. Perform a standard analysis by assimilating observations d;
2. Perturb the assimilated observations (d → d*) according to their expected error (from the diagonal elements of R, the observation error covariance matrix);
3. Perform another analysis by assimilating the perturbed observations; and
4. Compute the self-sensitivities HK_ii:

HK_ii = (d*_i − d_i)(Ha*_i − Ha_i) / R_ii,
where a and a* are the analyses produced using the unperturbed and perturbed observations, respectively. The minimum calculation needed to estimate the self-sensitivities in this way is a single additional analysis. However, this calculation is subject to sampling error, due to the random nature of the observation perturbations, so in practice multiple perturbed analyses should be computed to obtain robust estimates of the true self-sensitivities (a minimal sketch of this calculation is given at the end of this sub-section). The diagonal of the influence matrix can be analysed directly, or the partial trace of HK can be averaged for different regions, different variables, and so on.

With an estimate of the self-sensitivities at hand, it is common to diagnose the so-called degrees of freedom of signal (DFS) and the information content (IC) for different sub-sets of observations. The DFS is the trace of HK—the sum of the self-sensitivities—and provides an indication of how many truly independent observations are present in a given sub-set of observations; the IC expresses the DFS as a percentage of the number of observations. At most, the DFS equals the number of observations; in this case the IC is 100% and there are no redundant observations. Conversely, if the DFS is much less than the number of observations, the IC is low and there is significant redundancy in the observations.

An example of the IC and DFS for different observation types using the Bluelink reanalysis system (Oke et al. 2008) is given in Fig. 5.5. Based on these results, it appears that both altimetry and SST observations are well used by the Bluelink system. However, information from the Argo data is either not extracted by the Bluelink system in an optimal way, or is somewhat redundant—possibly well represented by the other assimilated observations. At this stage of development, the former explanation seems most likely. By producing these, and other, diagnostics from a number of GODAE systems, it is anticipated that the true value of all observations for GODAE systems can be routinely monitored and quantified. In turn, these evaluations could be fed back to the broader community for consideration.

In addition to providing a quantitative indication of the importance of each observation, and each observation type, for a given analysis, analysis self-sensitivities can be instructive for tuning assimilation and forecast systems.
Fig. 5.5 Preliminary estimates of the information content (IC; %), degrees of freedom of signal (DFS), and the number of assimilated super-observations (# Obs), by observation type (Argo S, Argo T, SST, ALTIM), for the Bluelink reanalysis system in the region 90–180°E, 60°S–equator, computed for 1 January 2006. The scale for the IC is on the left and the scale for the DFS and # Obs is on the right
The goal of every assimilation system is to extract as much relevant information from every observation as possible—that is, to maximise the IC of the above-mentioned analysis. The type of diagnostic described here can contribute to this process. Analysis self-sensitivity is relatively inexpensive to compute and may be feasible for routine application to operational forecast systems. The latter point means that the calculations could be performed on the modern-day GOOS. Limitations of analysis self-sensitivities, however, include the fact that they are relevant only to analysis fields—not to forecast fields. Finally, self-sensitivities also depend on the error estimates used by the assimilation or analysis system.
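As a concrete illustration of the perturbed-observation recipe above, the sketch below builds a toy optimal interpolation analysis for which the influence matrix HK can also be formed exactly, estimates the self-sensitivities by repeatedly perturbing the observations, and then sums them into DFS and IC. The covariances, grid size, and observation network are arbitrary choices made only so that the script is self-contained; they are not those of any operational system.

```python
import numpy as np

# Minimal sketch of the perturbed-observation estimate of analysis
# self-sensitivities (the recipe outlined above), illustrated with a toy
# optimal interpolation analysis in which HK can also be computed exactly.

rng = np.random.default_rng(3)
n, p = 60, 12                                   # state and observation sizes

# Toy background error covariance B with Gaussian correlation structure,
# observation operator H (sub-sampling), and observation error covariance R.
i = np.arange(n)
B = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 5.0) ** 2)
obs_idx = np.linspace(0, n - 1, p).astype(int)
H = np.eye(n)[obs_idx]
R = 0.5 * np.eye(p)

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)    # Kalman gain
HK_exact = H @ K                                # influence matrix

def analyse(xb, d):
    """Optimal-interpolation analysis of observations d with background xb."""
    return xb + K @ (d - H @ xb)

xb = rng.standard_normal(n)                     # background field
d = H @ xb + np.sqrt(np.diag(R)) * rng.standard_normal(p)

# Estimate the self-sensitivities by perturbing the observations, averaging
# over several perturbed analyses to reduce sampling error.
n_rep, HK_ii = 50, np.zeros(p)
a = analyse(xb, d)
for _ in range(n_rep):
    d_pert = d + np.sqrt(np.diag(R)) * rng.standard_normal(p)
    a_pert = analyse(xb, d_pert)
    HK_ii += (d_pert - d) * (H @ a_pert - H @ a) / np.diag(R)
HK_ii /= n_rep

dfs, ic = HK_ii.sum(), 100 * HK_ii.sum() / p
print("exact self-sensitivities:    ", np.round(np.diag(HK_exact), 2))
print("estimated self-sensitivities:", np.round(HK_ii, 2))
print(f"DFS = {dfs:.2f} of {p} obs, IC = {ic:.0f}%")
```

Averaging over many perturbed analyses is what controls the sampling error noted above; with a single perturbed analysis the estimates are much noisier.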
5.3.4 Ensemble-Based Methods

A variety of ensemble-based methods can be readily used for observing system design and assessment. These include the diagnosis of ensemble-based covariance fields, of which Fig. 5.1 is an example; the objective ranking of observations according to their potential to reduce a system's analysis error variance; and the diagnosis of bred vectors. A description and examples of these follow. Some good references for ensemble-based observing system design and assessment activities include Tracton and Kalnay (1993); Houtekamer and Derome (1995); Toth and Kalnay (1997); Bishop et al. (2001, 2003); and Wang and Bishop (2003).

An example of a series of ensemble-based correlation fields between sea-level at time t = 0 days and sea-level in the surrounding region 4 days earlier (t = −4 days) and 4 days later (t = +4 days), in the open ocean south-west of New Caledonia, is shown in Fig. 5.6. The correlation fields provide insight into the underlying dynamics and the spatial and temporal scales of sea-level variability. For this example, a modified version of the 120-member stationary ensemble used by the Bluelink forecast and reanalysis system (Brassington et al. 2007; Oke et al. 2005, 2008) is used.
Fig. 5.6 An example of four-dimensional ensemble-based correlation fields showing the spatio-temporal influence of a sea-level observation in the open ocean, south-west of New Caledonia. Each panel shows the ensemble-based correlations between sea-level at t = 0 days and sea-level in the surrounding region for time-lags of a −4 days, b 0 days, and c +4 days
It is evident that in this region a dominant dynamical process is the westward propagation of sea-level anomalies, probably characteristic of Rossby waves. The ensemble-based correlations indicate that the length-scales in this region are fairly short, with the influence of sea-level limited to within a few hundred kilometers of an observation. However, the time-scale seems to be quite long—the lagged correlations, for t = −4 and +4 days, are not very much less than the zero-lag correlations, for t = 0 days (Fig. 5.6). We therefore expect that an observation at some point in time is likely to be representative of the circulation for some time into the future and into the past. These factors may influence discussions on the appropriate spatial density and temporal sampling of observing systems in this region.

Although the example presented in Fig. 5.6 uses a stationary ensemble, and is therefore appropriate for the design and assessment of long-term monitoring programs, a time-evolving ensemble from an ensemble Kalman Filter system (e.g., Evensen 2003) that reflects the time- and state-dependent background field errors (the so-called errors of the day; Corazza et al. 2003) could equally be used for adaptive sampling programs—where we might seek to identify good locations for imminent deployments of instruments, like gliders or profiling floats.

Ensemble-based methods for optimal array design are increasingly being used for NWP systems (e.g., Bishop et al. 2001). These methods are based on ensemble square root filter theory (e.g., Tippett et al. 2003) and allow one to handle large systems in cases when explicit manipulation of the background error covariance matrix is not feasible. Most studies of ensemble-based optimal array design consider the problem of adaptive sampling and targeted observations, aimed at improving the model's forecast at a given time (e.g., Bishop et al. 2001; Langland 2005; Khare and Anderson 2006).
Fig. 5.7 Schematic diagram depicting the serial calculation of an optimal observation array: an initial ensemble representing the system's background error covariance is first updated to reflect the assimilation of the available observations; the next best targeted observation is then identified, and the ensemble is updated to reflect its impact, and so on. The dashed arrows represent the serial identification of targeted observations and the ensemble updates that reduce the ensemble's variance given those targeted observations. (Adapted from Sakov and Oke 2008)
The main steps in the ensemble-based objective design of an observation array are represented schematically in Fig. 5.7. The first step is the construction of an initial ensemble that represents the system's background error covariance before any observations are assimilated. Such an ensemble might be associated with some variant of the ensemble Kalman Filter (Evensen 2003). Given an ensemble that implicitly represents the system's background error covariance, and an array of observations of known error variance, ensemble square root theory provides an efficient framework for updating, or transforming, the ensemble so that its updated error variance matches the theoretical analysis error variance after those observations are assimilated (Bishop et al. 2001). There are several ways of implementing this transformation (see Tippett et al. 2003), all of which are equivalent, but the most computationally efficient is the ensemble transform Kalman filter (ETKF; Bishop et al. 2001), and specifically the serial implementation of the ETKF. So, the second step is to update the ensemble to represent the system's error covariance after assimilation of all available observations (Fig. 5.7).

The third step is to identify the next best targeted observation: that is, the observation that transforms the ensemble to yield the smallest analysis error variance. This targeted observation is identified by explicitly transforming the ensemble for every possible observation and selecting the one that minimises the ensemble's analysis error variance. This is a brute-force calculation—however, the update for a single observation is inexpensive, so this approach is generally feasible, even for systems with
a large state dimension. Once the latest targeted observation has been identified, the ensemble is updated and the process of identifying the next best targeted observation is repeated, until the desired number of targeted observations has been reached (a minimal sketch of this greedy selection is given below).

The most important step in the ensemble-based approach described above is the determination of what the targeted observations are intended to minimise. In practice, the ensemble includes several different variables (e.g., temperature, salinity, velocity, etc.). The identification of the next best targeted observation can be performed so that it minimises a specific aspect of the analysis error. For example, it might minimise the analysis errors of temperature in a specific target region, or the analysis errors of mixed layer depth, or the volume transport through a strait or passage. This criterion may have a significant impact on the objectively designed observation array (e.g., Sakov and Oke 2008). Careful determination of what is to be minimised is therefore important; to achieve this, one must be very clear about the purpose, or motivation, of the observation array.

An example of an ensemble-based objective observing system design, from Sakov and Oke (2008), is presented in Fig. 5.8. This example addresses the design of the tropical Indian Ocean mooring array. It is assumed that the purpose of this array is to minimise the analysis error variance of Intraseasonal Mixed Layer Depth (IMLD). Figure 5.8 shows the error variance of IMLD before and after assimilation, for two different models and for three different mooring arrays, and assumes that no other observations are available (i.e., no Argo, XBT, or altimeter data). It is assumed that observations from the mooring array are to be assimilated into a model using an ensemble-based data assimilation system with a stationary ensemble. Two different ensembles are considered, each generated by a different model configuration (ACOM2 and ACOM3), with different forcing, and integrated for different periods. Three different options for the mooring array are considered: the proposed mooring array (denoted the CG-IOP array), and an optimised array for each model, denoted the ACOM2-array and the ACOM3-array. In each case, the initial ensemble variance of IMLD is shown, along with the final ensemble variance of IMLD given the different mooring arrays (Fig. 5.8). Sakov and Oke (2008) use different models here in pursuit of more robust results.

The numbers overlying the error variance maps in Fig. 5.8 refer to an objective ranking of each observation location—the order in which they were identified by the method depicted in Fig. 5.7. In each case, the mooring array is constrained to a limited number of mooring lines at distinct longitudes, to simplify routine maintenance of the array. Using the ETKF framework, the best mooring line is identified first, and then the best observation locations along that line are derived. So the mooring line with numbers 1–6 is the best mooring line. For each array considered, and for both models, the best mooring line is located in the eastern Indian Ocean, between 90°E and 95°E, and the mooring line south of India is also very important, ranked 7–12 (or 7–14) for each scenario considered. These results appear to be robust, and can aid decision-makers when mooring design and priorities are being set—for example, which mooring line should be deployed first?
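The greedy, serial selection loop sketched in Fig. 5.7 can be written down compactly. The sketch below uses a synthetic one-dimensional ensemble of anomalies as a stand-in for a real stationary ensemble, scores every candidate location by the reduction in total analysis error variance it would deliver, and applies a serial ensemble square-root update (equivalent in effect to the serial ETKF described above) after each selection; the state size, ensemble size, and observation error variance are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketch of the serial, greedy selection of targeted observations:
# at each step the candidate location that most reduces the ensemble's total
# analysis error variance is chosen, and the anomalies are updated with a
# serial ensemble square-root step. The smooth synthetic ensemble below is
# a stand-in for a real stationary ensemble of model anomalies.

rng = np.random.default_rng(4)
n, m, r = 100, 60, 0.05           # state size, ensemble size, obs error variance

# Synthetic anomalies with spatially varying variance (a "variance hot spot").
x = np.linspace(0, 1, n)
weights = 0.5 + 2.0 * np.exp(-0.5 * ((x - 0.3) / 0.08) ** 2)
A = weights[:, None] * rng.standard_normal((n, m))
A -= A.mean(axis=1, keepdims=True)                  # centre the anomalies

def variance_reduction(A, k, r):
    """Reduction in total analysis error variance from observing location k."""
    var_k = A[k] @ A[k] / (m - 1)
    c = A @ A[k] / (m - 1)                          # covariance with location k
    return np.sum(c ** 2) / (var_k + r)

def serial_update(A, k, r):
    """Serial ensemble square-root update of the anomalies for one observation."""
    var_k = A[k] @ A[k] / (m - 1)
    gain = (A @ A[k] / (m - 1)) / (var_k + r)       # Kalman gain for location k
    alpha = 1.0 / (1.0 + np.sqrt(r / (var_k + r)))  # square-root factor
    return A - alpha * np.outer(gain, A[k])

n_targets, chosen = 5, []
for _ in range(n_targets):
    scores = [variance_reduction(A, k, r) for k in range(n)]
    k_best = int(np.argmax(scores))
    chosen.append(k_best)
    A = serial_update(A, k_best, r)

print("targeted observation locations (grid indices):", chosen)
print(f"remaining total ensemble variance: {np.sum(A ** 2) / (m - 1):.2f}")
```

With a multivariate ensemble, the same loop applies; only the scoring function changes, to measure the variance of whatever quantity the array is intended to constrain (e.g., IMLD in a target region).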
Fig. 5.8 The variance of the IMLD (top row) in ACOM2 (left) and ACOM3 (right), and the theoretical analysis error variance for each model using the CG-IOP array (2nd row) and the arrays derived using ensembles from ACOM2 (3rd row) and ACOM3 (4th row), as labelled to the left of each row. The numbers in each panel denote the mooring locations and the ranking of each location (i.e., the locations marked "1" are the best locations). (Adapted from Sakov and Oke 2008)
Breeding is an ensemble technique that seeks to quantify the structures of the fastest-growing dynamical modes of a model. Bred vectors are perturbations to the model state that grow rapidly in time. They are particularly useful for adaptive sampling, where the errors of the day are used to identify where an instability is most likely to originate. More observations in a region of instability might better constrain a deterministic forecast, resulting in better forecast skill. Breeding was first explored by Toth and Kalnay (1997) for an NWP ensemble prediction system.

In practice, bred vectors are generated by first initialising a model with an ensemble of perturbations. Initially, the perturbations are typically just small-amplitude white noise. The ensemble is integrated for a fixed period of time, and the perturbations are periodically rescaled using a global (or regional) scale factor so that they approximate fast-growing errors within an assimilation scheme. The choice of scale factor is important. One of the purposes of breeding is to identify fast-growing instabilities; in some regions these instabilities will be best represented by sea-level anomalies, in other regions it might be sub-surface temperature, or density. The rescaling should therefore be tuned for different regions. However, some atmospheric applications have demonstrated that the choice of rescaling does not significantly influence the bred vectors (e.g., Corazza et al. 2003). This is in contrast to singular vectors (see below), which are very sensitive to the choice of norm (e.g., Palmer et al. 1998; Snyder et al. 1998).

In practice, the ensemble perturbations (bred vectors) usually become well-organised, coherent structures that can be interpreted and understood (e.g., instabilities associated with an eddy). This approach readily allows ensembles to be initialised about the analysis from data assimilation that contain, by construction, information about the errors of the day. Thus the bred vectors tend to project strongly onto regions where forecast errors are large. The process of breeding is represented schematically in Fig. 5.9, and a minimal sketch of the rescaling cycle is given below.
Fig. 5.9 Schematic diagram depicting the generation of bred vectors. An ensemble is initially perturbed with uncorrelated noise. The rescaling parameter must be chosen carefully (e.g., temperature at 250 m depth in a key region). After each rescaling interval the ensemble perturbations are rescaled to the same magnitude as the initial perturbations—but bred vectors develop spatially coherent, well-organised structures. Each bred vector is the difference between a perturbed forecast and the unperturbed forecast
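The breeding cycle itself involves nothing more than two model integrations and a periodic rescaling. The sketch below runs a perturbed and an unperturbed integration of the Lorenz-96 system, which stands in here for an ocean model, and rescales the difference (the bred vector) to a fixed amplitude at the end of each cycle; the rescaling amplitude, interval, and model parameters are arbitrary illustrative choices, and the growth factor printed each cycle indicates how quickly the bred vector is amplifying.

```python
import numpy as np

# Minimal sketch of the breeding cycle (cf. Fig. 5.9): a perturbed run is
# integrated alongside an unperturbed control, and at the end of each
# rescaling interval the difference (the bred vector) is rescaled to a
# fixed amplitude and re-added to the control. The Lorenz-96 system is
# used here only as a stand-in for an ocean model.

def lorenz96_step(x, dt=0.01, forcing=8.0):
    """One fourth-order Runge-Kutta step of the Lorenz-96 model."""
    def f(x):
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(5)
n, amp = 40, 0.01                        # state size and rescaling amplitude
control = 8.0 + rng.standard_normal(n)   # spin up a control state
for _ in range(1000):
    control = lorenz96_step(control)

bred = amp * rng.standard_normal(n)      # initial random perturbation
n_cycles, steps_per_cycle = 20, 50       # rescaling interval = 50 steps

for cycle in range(n_cycles):
    perturbed = control + bred
    for _ in range(steps_per_cycle):
        control = lorenz96_step(control)
        perturbed = lorenz96_step(perturbed)
    bred = perturbed - control           # raw bred vector at the end of the cycle
    growth = np.linalg.norm(bred) / (amp * np.sqrt(n))
    bred *= amp * np.sqrt(n) / np.linalg.norm(bred)   # rescale to fixed amplitude
    print(f"cycle {cycle + 1:2d}: growth factor over the interval = {growth:5.2f}")
```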
For an atmospheric example, Houtekamer and Derome (1995) showed that bred vectors produce similar results to singular vectors (described below), but they are much easier to implement (Wei and Frederiksen 2004). Because of its simplicity, breeding is a very versatile approach. Bred vectors have recently been explored by many operational global weather prediction systems (e.g., O'Kane et al. 2008) using an implementation that is based on the ETKF (e.g., Wang and Bishop 2003; Wei et al. 2006). The ETKF is a generalisation of breeding, but it is more complex and more computationally expensive. The main difference is that the ETKF orthogonalises the bred vectors and seeks to maximise the ensemble spread.

An example of breeding applied to a regional ocean model of the Tasman Sea is presented in Fig. 5.10. For this example, a 4-member ensemble is used, and the bred vectors are rescaled so as to amplify temperature anomalies at 250 m depth. The forecast errors for sea-level, computed by comparing with a verifying analysis, are shown in Fig. 5.10a–d, with the 4-member ensemble-averaged bred vector overlaid. The individual bred vectors are contoured in Fig. 5.10e–h. For the period shown here, the forecast error for sea-level is quite large at several locations. The bred vectors are computed independently of the forecast error; however, they project strongly onto the regions where the forecast error is large and spatially coherent. This indicates that the bred vectors are reliably identifying regions of growing instabilities. For the case displayed in Fig. 5.10, the forecast does not pick up the developing instabilities (see 11 March) that, in this case, correspond to a developing cold-core eddy. With regard to adaptive sampling, if this 4-member breeding system were run in parallel with the operational forecast system, the regions of strong growth in the bred vectors might be good candidates for the deployment of additional observations—perhaps in the form of gliders or profiling floats.
Fig. 5.10 Examples of a–d forecast error for sea-level (colour) and the ensemble-averaged bred vector (contours) in the Tasman Sea on 4, 11, 18, and 25 March 2008; and e–h the four individual bred vectors for the same dates, each contoured in a different colour
In this case, the improved initialisation of the forecast in those regions might have better constrained it and improved its skill for this event.
5.3.5 Adjoint-Based Methods

A variety of adjoint-based methods can be readily used for observing system design and assessment. These include the diagnosis of representers, adjoint sensitivities, and singular vectors. A description and examples of these follow. Some good references for adjoint-based observing system design and assessment activities include Moore and Farrell (1993); Rabier et al. (1996); Gelaro et al. (1998); Palmer et al. (1998); Baker and Daley (2000); Langland and Baker (2004); and Moore et al. (2009).

Representers are analogous to the ensemble-based covariance fields displayed in Figs. 5.1 and 5.6. Representers quantify the temporal and spatial footprints of influence of an observation. Using the system's tangent linear model to trace the influence of an observation into the future, and its adjoint to trace its influence into the past, an adjoint-based data assimilation system readily approximates the covariance between a given observation (e.g., sea-level at a fixed location) and all other variables at all model grid locations for all time. Representers can help build intuition about how different observation types and locations influence a data assimilating model.

An example of the components of a representer for the coastal ocean, derived from the Advanced Variational Regional Ocean Representer Analyzer (AVRORA) system (Kurapov et al. 2009), is presented in Fig. 5.11. The background field for these calculations corresponds to an idealised two-dimensional wind-driven upwelling scenario (Fig. 5.11a) with characteristics of the upwelling circulation off Oregon, USA. Details of the model configuration and assimilation system are described by Kurapov et al. (2009). They investigate the structure of representers to better understand the potential impact of assimilating observed sea-level anomalies from altimeters into a coastal ocean model. The representer components shown in Fig. 5.11 quantify the covariance between a hypothetical sea-level observation that is 50 km offshore and the rest of the model state. The components shown are for the time of the observation; the full representer also spans time, with the influence of the observation extending over both time and space.

The fields in Fig. 5.11 show how the assimilation system updates the model state when the observed offshore sea-level is lower than the modelled background estimate. The changes introduced by the assimilation are consistent with a strengthening wind-driven upwelling: stronger upwelling-favourable along-shore wind stress, lower sea-level over the shelf, offshore flow near the surface and onshore flow through a bottom boundary layer, an accelerated baroclinic coastal jet, and a temperature (salinity) decrease (increase). The representer fields presented in Fig. 5.11 indicate that offshore sea-level observations from altimetry are suitable for assimilation into coastal ocean models, and are likely to impose a significant constraint on the circulation over the continental shelf.
Fig. 5.11 The components of a representer in a cross-shore section for an idealised two-dimensional wind-driven upwelling scenario (panel a shows the background field), showing the covariance at the time of the observation (zero time-lag) between sea-level 20 km from shore and b along-shore wind-stress, c sea-level, d across-shore velocity, e temperature, f along-shore velocity, and g salinity. Contour intervals (C.I.) are provided in the titles of each panel. (Adapted from Kurapov et al. 2009)
Adjoint, or observation, sensitivities seek to quantify the sensitivity of a forecast to the assimilated observations (Langland and Baker 2004). Specifically, adjoint sensitivity determines the sensitivity of a cost function J with respect to each observation y: that is, dJ/dy. Langland and Baker (2004) provide a practical recipe for computing adjoint sensitivities as follows:

1. Define the error norm of interest (e.g., the position of an eddy, or the variability of a given variable in a region of interest);
2. Perform a forecast from, say, t = 0 to t = 7, where t denotes time;
3. Compute a verifying analysis for t = 7 (not in real-time);
4. Compute the difference between the forecast (valid at t = 7) and the verifying analysis (also valid at t = 7); this difference is an estimate of the forecast error;
5. Initialise the adjoint model with the forecast error and integrate backwards (from t = 7 to t = 0), yielding a new initial condition (valid at t = 0); and
6. Calculate the sensitivity of the forecast to each observation, or a subset thereof.

Like all variational data assimilation tools, adjoint sensitivities require a model's tangent linear version and the adjoint of its tangent linear model. The adjoint technique also requires a linear assumption that is probably most appropriate for short-term (days) forecast problems, but may not be valid for longer term (months) forecast problems, such as seasonal prediction using a coupled ocean-atmosphere model. Like analysis self-sensitivities, described above, adjoint sensitivities can help identify low-influence and high-influence observations, and can be partitioned for any data subset: instrument type, observed variable, geographic region, vertical level, or individual reporting platform; thereby making the diagnostic directly relevant to GOOS data providers. Importantly, neither analysis nor adjoint sensitivities necessarily quantify the value of the observations; rather, they quantify how much of the observations is used by an assimilation and forecast system, given the error estimates assumed therein.

Like bred vectors, singular vectors are the fastest growing perturbations for a specific region at a specific time, and are most suited to adaptive sampling (e.g., Baker and Daley 2000). Unlike bred vectors, singular vectors are assumed to grow linearly in time. Singular vectors are perturbations with the greatest linear growth over a specified time interval, for a given norm, and defined over a specified target area. They are only valid for time intervals over which the growth of a perturbation is linear: for the atmosphere, this is likely to be limited to a few days; for the ocean it is possibly a week or two, depending on the underlying dynamics. To determine the growth of a perturbation over time, a tangent-linear version of the full non-linear forecast model is required, along with the adjoint of the tangent linear model. Before the fastest growing perturbations can be computed, an appropriate norm must be chosen for both the initial and final times. Ideally the initial norm is related to the spatial distribution of expected errors in the analysis, while the final-time norm should reflect the forecast errors of interest. In practice, in NWP the total energy is often used for
both the initial and final time norms (e.g., at ECMWF). In practice, mixed evolved and initial singular vectors are used in ensemble prediction, allowing the growth rates of the perturbations to be tuned for a given application.

The notion of a target area is important for the computation of singular vectors. Singular vectors are the initial perturbations that result in the fastest growing perturbations in a target region. For example, Fujii et al. (2008) seek to predict the development of the Kuroshio meander with a lead time of 60 days. The target area is the region in which the Kuroshio typically meanders, and the singular vectors are the perturbations, either within or outside of the target area, that result in large perturbations in the target area 60 days after initialisation. In NWP, the target area might be a major city and the time interval might be 10 days; the singular vectors are then the initial perturbations that lead to large changes over that major city 10 days in the future. Different choices of time interval, norm, or target area lead to different sets of singular vectors (e.g., Palmer et al. 1998; Snyder et al. 1998). This is in contrast to bred vectors, which are relatively insensitive to the choice of rescaling (e.g., Corazza et al. 2003).

An example of an adjoint-based method used to calculate forecast sensitivity is described by Fujii et al. (2008). They use the Multivariate Ocean Variational Estimation system to investigate the types of perturbations that influence the large meanders of the Kuroshio Current. Specifically, they show that the leading singular vector represents a growing perturbation that leads to further development of the large meander. Figure 5.12a shows the perturbation to vertical velocity and pressure at 820-m depth at the initial time. The anticyclonic anomaly positioned at 133°E, 31°N causes cold advection across the Kuroshio Current and downwelling to the north. This results in the development of an anticyclonic circulation in the deep layers, and induces baroclinic instability. The corresponding sea surface height (SSH) anomalies that coincide with these developments are summarized in Fig. 5.12b–d, showing the development of a large meander about two months after the initial perturbation. This analysis indicates that to properly predict the Kuroshio meander, a forecast model must be well constrained by data assimilation around 133°E, 31°N, and particularly at depths of 1,000–1,500 m. Thus, additional observations in that region are likely to benefit forecasts of the variability of the Kuroshio Current.
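When the tangent linear propagator and its adjoint are available, the singular-vector calculation reduces to a singular value decomposition under the chosen norms. The sketch below illustrates this with a small, explicitly formed, non-normal toy propagator and identity (energy-like) norms, with the final-time norm restricted to a target area; the matrix, its dimensions, and the target indices are arbitrary illustrative choices, and in a real system the propagator would only ever be applied implicitly through the tangent linear and adjoint models.

```python
import numpy as np

# Minimal sketch of a singular-vector calculation under identity (energy-like)
# norms: the leading right singular vectors of the tangent linear propagator M
# are the initial perturbations with the largest linear growth over the chosen
# interval. The toy operator below is a stand-in for such a propagator.

rng = np.random.default_rng(6)
n = 50

# Toy propagator: asymptotically damped (all eigenvalues equal 0.5) but
# strongly non-normal, so it supports large transient growth.
M = 0.5 * np.eye(n) + 3.0 * np.eye(n, k=1)

# Restrict the final-time norm to a "target area" (a subset of the state),
# as in targeted singular-vector calculations.
target = slice(30, 40)
P = np.zeros((n, n))
P[target, target] = np.eye(10)

# SVD of P @ M: the right singular vectors are the optimal initial
# perturbations; the singular values are their growth factors into the target.
_, s, vt = np.linalg.svd(P @ M)
leading_sv = vt[0]
print(f"leading growth factor into the target area: {s[0]:.2f}")

# Compare the leading singular vector with a random unit perturbation.
for name, pert in [("leading SV", leading_sv), ("random", rng.standard_normal(n))]:
    pert = pert / np.linalg.norm(pert)
    final = M @ pert
    print(f"{name:10s}: final energy in target area = {np.linalg.norm(final[target])**2:6.2f}")
```

The example also illustrates why singular vectors differ from normal modes: the toy propagator is asymptotically damped (all eigenvalues are 0.5), yet perturbations aligned with the leading singular vector grow substantially over the interval.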
5.4 Summary

The use of models and data assimilation tools to aid the design and assessment of ocean observing systems is increasing. The most commonly used techniques for evaluating observing systems are OSEs and OSSEs. OSEs are particularly useful for evaluating the relative importance of existing observations, but they are expensive to perform and analyse, and are sometimes difficult to evaluate and interpret. Despite this, as probably the simplest method for evaluating observing systems, OSEs are commonly used. OSSEs are most useful for examining the potential benefits of future observational platforms, and for contrasting the relative merits of different observational strategies.
Fig. 5.12 a Perturbation fields for pressure (contours; dotted lines are negative) and vertical velocity (shading, W; positive is downward) at 820-m depth at day 0. b–d SLA fields (at days 25, 40 and 65; the colour scales differ between panels) that result from the perturbation shown in panel a. Thick lines show the Kuroshio Current axis in the background state. (Adapted from Fujii et al. 2008)
Like OSEs, OSSEs are easily implemented. However, OSSEs tend to return overly optimistic results, owing to the implicit dynamical consistency between the model-generated observations that are assimilated and the models into which those observations are assimilated. OSSEs are also always limited by the realism of the models that are used. Like OSEs and OSSEs, analysis self-sensitivities can be computed from an assimilation system regardless of the assimilation method being used. Analysis self-sensitivities quantify the relative importance of every assimilated observation for a given implementation. Unlike OSEs, analysis self-sensitivities are relatively inexpensive to compute and analyse, and could feasibly be implemented routinely by operational centers. In this case, analysis sensitivities could provide an up-to-date, routine evaluation of the current observing system. Such analyses could be very
beneficial to the observational community, by identifying existing and developing gaps in the GOOS.
A range of ensemble-based techniques are available for observing system design and assessment. These include objective, ensemble-based array design (e.g., Sakov and Oke 2008), breeding (e.g., Toth and Kalnay 1997), and variants of breeding, such as the ETKF (Bishop et al. 2001). Ensemble-based methods generally require an ensemble-based data assimilation system, such as ensemble optimal interpolation (e.g., Oke et al. 2008) or the ensemble Kalman filter (Evensen 2003), for their application. Ensemble-based techniques are generally easily implemented, but often require significant computational resources and are subject to sampling error. Various adjoint-based methods are also suitable for observing system design and assessment. These include analysis of representers (e.g., Kurapov et al. 2009), adjoint sensitivities (e.g., Langland and Baker 2004) and singular vectors (e.g., Fujii et al. 2008). The application of adjoint-based techniques generally requires a system's tangent linear model and its adjoint to be available.
Bred vectors and singular vectors are somewhat analogous. Both methods diagnose the system's fastest growing modes, or instabilities. With respect to adaptive sampling, regions where these modes project strongly might be good places to deploy additional observations. Assimilation of those additional observations may improve the initialisation of a forecast, thereby improving its forecast of the developing instability. Although bred vectors and singular vectors are very similar, in practice breeding is much more easily implemented. The details of bred vectors are also relatively insensitive to the details of the rescaling parameter, or norm, used in the breeding process, although bred vectors are sensitive to the rescaling interval. By contrast, singular vectors tend to be sensitive to the choice of norms used.
The field of observing system design and assessment has seen many advances in techniques over the past decade. Together with the maturing nature of ocean forecasting, this has seen an increase in the use of models and data assimilation tools to aid the design and assessment of observing systems. The relevance of most methods depends on the realism of the models used. One way to combat this is to employ multiple methods and multiple models. Under the auspices of GODAE OceanView, it is hoped that this can be achieved through real international cooperation.
Acknowledgments Financial support for this research is provided by CSIRO, the Bureau of Meteorology, and the Royal Australian Navy as part of the Bluelink project, and the US Office of Naval Research (Grant No. N00014-07-1-0422). Satellite altimetry is provided by NASA, NOAA, ESA and CNES. Drifter data are provided by NOAA-AOML and SST observations are provided by NASA, NOAA and Remote Sensing Systems. Argo data are provided by the Coriolis and USGODAE data centres.
References

Argo Science Team (1998) On the design and implementation of argo: an initial plan for a global array of profiling floats. International CLIVAR Project Office Rep. 21, GODAE Rep. 5, GODAE Project Office, Melbourne, Australia, p 32
Baker NL, Daley R (2000) Observation and background adjoint sensitivity in the adaptive observation targeting problem. Q J R Meteorologic Soc 126:1431–1454 Ballabrera-Poy J, Hackert E, Murtugudde R, Busalacchi AJ (2007) An observing system simulation experiment for an optimal moored instrument array in the tropical Indian Ocean. J Climate 20:3284–3299 Balmaseda MA, Anderson D, Vidard A (2007) Impact of argo on analyses of the global ocean. Geophys Res Lett 34. doi:10.1029/2007GL030452 Barth NH (1992) Oceanographic experiment design II: genetic algorithms. J Atmos Ocean Technol 9:434–443 Berry P, Marshall J (1989) Ocean modelling studies in support of altimetry. Dyn Atmos Oceans 13:269–300 Bishop CH, Etherton BJ, Majumdar SJ (2001) Adaptive sampling with the ensemble transform Kalman filter. Part I: theoretical aspects. Mon Weather Rev 129:420–436 Bishop CH, Reynolds CA, Tippett MK (2003) Optimization of the fixed global observing network in a simple model. J Atmos Sci 60:1471–1489 Bouttier F, Kelly G (2006) Observing-system experiments in the ECMWF 4D-Var data assimilation system. Q J R Meteorologic Soc 127:1469–1488 Brassington GB, Divakaran P (2009) The theoretical impact of remotely sensed sea surface salinity observations in a multi-variate assimilation system. Ocean Model 27:70–81 Brassington GB, Pugh T, Spillman C, Schulz E, Beggs H, Schiller A, Oke PR (2007) BLUElink> development of operational oceanography and servicing in Australia. J Res Pract Inf Techol 39:151–164 Cardinali C, Pezzulli S, Andersson E (2004) Influence-matrix diagnostic of a data assimilation system. Q J R Meterologic Soc 130:2767–2786 Chambers DP, Tapley DB, Stewart RH (1999) Anomalous warming in the Indian Ocean coincident with El Niño. J Geophys Res 104:3035–3047 Chapnik B, Desroziers G, Rabier F, Talagrand O (2006) Diagnosis and tuning of observational error statistics in a quasi operational data assimilation setting. Q J R Meteorologic Soc 132:543–565 CLIVAR–GOOS Indian Ocean Panel et€al (2006) Understanding the role of the Indian Ocean in the climate system—implementation plan for sustained observations. WCRP Informal Rep. 5/2006, ICOP Publ. Series 100, GOOS Rep. 152, p€76 Corazza M, Kalnay E, Patil D, Yang S-C, Morss R, Cai M, Szunyogh I, Hunt B, Yorke J (2003) Use of the breeding technique to estimate the structure of the analysis errors of the day. Nonlinear Process Geophys 10:233–243 Evensen G (2003) The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn 53:343–367 Feng M, Meyers GA, Wijffels SE (2001) Interannual upper ocean variability in the tropical Indian Ocean. Geophys Res Lett 28:4151–4154 Fujii Y, Tsujino H, Usui N, Nakano H, Kamachi M (2008) Application of singular vector analysis to the Kuroshio large meander. J Geophys Res 113. doi:10.1029/2007JC004476 Gallagher K, Sambridge M, Drijkoningen G (1991) Genetic algorithms: an evolution from MonteCarlo methods for strongly non-linear geophysical optimization problems. Geophys Res Lett 18:2177–2180 Gelaro R, Buizza R, Palmer TN, Klinker E (1998) Sensitivity analysis of forecast errors and the construction of optimal perturbations using singular vectors. J Atmos Sci 55:1012–1037 Gelaro R, Langland RH, Rohaly GD, Rosmond TE (1999) As assessment of the singular-vector approach to targeted observing using the FASTEX dataset. 
Q J R Meteorologic Soc 125:3299– 3327 Guinehut S, Le Traon P-Y, Larnicol G, Phillips S (2004) Combining argo and remote-sensing data to estimate the ocean three-dimensional temperature fields: a first approach based on simulated observations. J Mar Sys 46:85–98 Hackert EC, Miller RN, Busalacchi AJ (1998) An optimized design for a moored instrument array in the tropical Atlantic Ocean. J Geophys Res 103:7491–7509
Heimbach P et€al (2010) Observational requirements for global-scale ocean climate analysis: lessons from ocean state estimation. In: Hall J, Harrison DE, Stammar D (eds) Proceedings of OceanObs’09: sustained ocean observations and information for society, vol€2. ESA Publication WPP-306, Venice, Italy, 21–25 Sept 2009 (submitted) Hernandez F, Le Traon P-Y, Barth N (1995) Optimizing a drifter cast strategy with a genetic algorithm. J Atmos Ocean Technol 12:330–345 Holland WR, Malanotte-Rizzoli P (1989) Assimilation of altimeter data into an ocean circulation model: space versus time resolution studies. J Phys Oceanogr 19:1507–1534 Houtekamer P, Derome J (1995) Methods for ensemble prediction. Mon Weather Rev 123:2181– 2196 Khare SP, Anderson JL (2006) An examination of ensemble filters based adaptive observation methodologies. Tellus 58A:179–195 Kuo TH, Zou X, Huang W (1998) The impact of global positioning system data on the prediction of an extratropical cyclone: an observing system simulation experiment. Dyn Atmos Oceans 27:439–470 Kurapov AL, Egbert GD, Allen JS, Miller RN (2009) Representer-based analyses in the coastal upwelling system. Dyn Atmos Oceans 48:198–218 Langland RH (2005) Issues in targeted observations. Q J R Meteorologic Soc 131:3409–3425 Langland RH, Baker NL (2004) Estimation of observation impact using the NRL atmospheric variational data assimilation adjoint system. Tellus 56A:189–201 Masumoto Y, Meyers GA (1998) Forced Rossby waves in the southern tropical Indian Ocean. J Geophys Res 103:27589–27602 McPhaden MJ et€ al (1998) The tropical ocean global atmosphere (TOGA) observing system: a decade of progress. J Geophys Res 103:14169–14240 Miller RN (1990) Tropical data assimilation experiments with simulated data: the impact of the tropical ocean, global atmosphere thermal array for the ocean. J Geophys Res 95:11461–11482 Moore AM, Farrell F (1993) Rapid perturbation growth on spatially and temporally varying oceanic flows determined using an adjoint method: application to the Gulf Stream. J Phys Oceanogr 23:1682–1702 Moore AM, Arango HG, Di Lorenzo E, Miller AJ, Cornuelle BD (2009) An adjoint sensitivity analysis of the southern California current circulation and ecosystem. J Phys Oceanogr 39:702–720 Murtugudde R, McCreary JP, Busalacchi AJ (2000) Oceanic processes associated with anomalous events in the Indian Ocean with relevance to 1997–1998. J Geophys Res 105:3295–3306 O’Kane TJ, Frederiksen JS (2008a) Statistical dynamical subgrid-scale parameterizations for geophysical flows. Phys Scr 2008(T132):014033. doi:10.1088/0031-8949/2008/T132/014033 O’Kane TJ, Naughton M, Xiao Y (2008) AGREPS: the Australian global and regional ensemble prediction system. ANZIAM J 50:C308–C321 Oke PR, Schiller A (2007) Impact of argo, SST and altimeter data on an eddy-resolving ocean reanalysis. Geophys Res Lett 34. doi:10.1029/2007GL031549 Oke PR, Schiller A, Griffin DA, Brassington GB (2005) Ensemble data assimilation for an eddyresolving ocean model of the Australian region. Q J R Meteorologic Soc 131:3301–3311 Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink ocean data assimilation system (BODAS). Ocean Model 21:46–70 Oke PR, Balmaseda M, Benkiran M, Cummings JA, Dombrowsky E, Fujii Y, Guinehut S, Larnicol G, Le Traon P-Y, Martin MJ (2009) Observing system evaluations using GODAE systems. 
Oceanography 22(3):144–153 Oke PR, Balmaseda M, Benkiran M, Cummings JA, Dombrowsky E, Fujii Y, Guinehut S, Larnicol G, Le Traon P-Y, Martin MJ (2010) Observational requirements of GODAE Systems. In: Hall J, Harrison DE, Stammar D (eds) Proceedings of OceanObs’09: sustained ocean observations and information for society, vol€2, ESA Publication WPP-306, Venice, Italy, 21–25 Sept 2009 Palmer TN, Gelaro R, Barkmeijer J, Buizza R (1998) Singular vectors, metrics, and adaptive observations. J Atmos Sci 55:633–653
Rabier F, Courtier P, Pailleuz J, Hollingsworth A (1996) Sensitivity of forecast errors to initial conditions. Q J R Meteorologic Soc 122:121–150 Rabier F, Gauthier P, Cardinali C, Langland R, Tsyrulnikov M, Lorenc A, Steinle P, Gelaro R, Koizumi K (2008) An update on THORPEX-related research in data assimilation and observing strategies. Nonlinear Process Geophys 15:81–94 Rao SA, Behera SK (2005) Subsurface influence on SST in the tropical Indian Ocean: structure and interannual variability. Dyn Atmos Oceans 39:103–135 Sakov P, Oke PR (2008) Objective array design: application to the tropical Indian Ocean. J Atmos Ocean Technol 25:794–807 Schiller A, Wijffels SE, Meyers GA (2004) Design requirements for an Argo float array in the Indian Ocean inferred from observing system simulation experiments. J Atmos Ocean Technol 21:1598–1620 Schott FA, McCreary JP (2001) The monsoon circulation of the Indian Ocean. Prog Oceanogr 51:1–123 Schouten WP, de Ruijter M, van Leeuwen PJ, Dijkstra HA (2002) An oceanic teleconnection between the equatorial and southern Indian Ocean. Geophys Res Lett 29:1812. doi:10.1029/2001GL014542 Snyder C, Joly A (1998) Development of perturbations within a growing baroclinic wave. Q J R Meteorologic Soc 124:1961–1983 Tippett MK, Anderson JL, Bishop CH, Hamill TM, Whitaker JS (2003) Ensemble square root filters. Mon Weather Rev 131:1485–1490 Toth Z, Kalnay E (1997) Ensemble forecasting at NCEP and the breeding method. Mon Weather Rev 125:3297–3319 Tracton M, Kalnay E (1993) Operational ensemble prediction at national meteorological center: practical aspects. Weather Forecast 8:379–398 Tremolet Y (2008) Computation of observation sensitivity and observation impact in incremental variational data assimilation. Tellus 60:964–978 Vecchi GA, Harrison MJ (2007) An observing system simulation experiment for the Indian Ocean. J Climate 20:3300–3319 Vidard A, Anderson DLT, Balmaseda M (2007) Impact of ocean observation systems on ocean analysis and seasonal forecasts. Mon Weather Rev 135:409–429 Wang X, Bishop CH (2003) A comparison of breeding and ensemble transform Kalman filter ensemble forecast schemes. J Atmos Sci 60:1140–1158 Wei M, Frederiksen JS (2004) Error growth and dynamical vectors during southern hemisphere blocking. Nonlinear Process Geophys 11:99–118 Wei M, Toth Z, Wobus R, Zhu Y, Bishop CH, Wang X (2006) Ensemble transform Kalman filter-based ensemble perturbations in an operational global prediction system at NCEP. Tellus 58A:28–44 Wijffels SE, Meyers GA (2004) An intersection of oceanic waveguides: variability in the Indonesian throughflow region. J Phys Oceanogr 34:1232–1253
Part III
Atmospheric Forcing and Waves
Chapter 6
Air-Sea Fluxes of Heat, Freshwater and Momentum

Simon A. Josey
National Oceanography Centre, Southampton, UK
Abstract An overview of the air-sea fluxes of heat, freshwater and momentum is presented, with the emphasis on the methods used to determine these fluxes and the role they play within the wider climate system. The equations used to determine the various heat flux components and the wind stress (which is equivalent to the momentum flux) are described in detail, together with the main spatial characteristics of the resulting global fields. This is followed by an overview of currently available flux datasets, including in situ, remotely sensed, atmospheric reanalysis and hybrid products. Methods for evaluation of these datasets are explored, including recent developments in the use of air-sea flux reference sites to discriminate between the different fields. Several topics that place surface fluxes in the context of global climate are then discussed, including the ocean heat budget closure problem, climate change related trends in surface fluxes and the impacts of extreme heat fluxes at high latitudes. Finally, some outstanding challenges are presented, including the need for a better understanding of ocean-atmosphere interaction in the Southern Ocean and the potential for use of the integrated surface density flux to estimate variability in the Atlantic meridional overturning circulation.
6.1 Introduction

The exchanges of heat, freshwater and momentum between the oceans and the atmosphere play a pivotal role in the global climate system. In the tropics, there is a net input of heat to the ocean which is subsequently transported to mid-high latitudes and released back to the atmosphere, modifying the climate over land downstream (e.g. Rhines et al. 2008). At several high latitude sites, intense winter heat loss (together with the effects of net evaporation and brine rejection associated with ice formation) drives deep convection and dense water formation, supplying the deep limb of the global overturning circulation. The wind stress on the ocean, which
is equivalent to the momentum exchange, is the other major driver of the circulation, and regional wind forcing also plays a key role in dense water formation through preconditioning of water masses as a result of upwelling. The freshwater flux (evaporation-precipitation) has a major impact on the ocean surface salinity field, which to a large extent reflects the pattern of surface net evaporation.
Despite their major role in the climate system, our knowledge of many aspects of ocean-atmosphere interaction remains basic. Attempts to develop global datasets of these fluxes have been severely hampered by the lack of observations in many regions. The primary source of data has historically been merchant ship meteorological reports, which tend to follow the main shipping routes, leaving large areas of the ocean, particularly the Southern Ocean, extremely undersampled. This situation has improved for some flux related variables (sea surface temperature, wind speed) with the advent of satellite observations, but these are only available for the past two decades and do not as yet provide reliable estimates of all terms in the surface heat budget.
Anthropogenic climate change is widely expected to lead to changes in the fluxes of heat and freshwater as a result of global warming and strengthening of the hydrological cycle. There is compelling evidence that an increase in global ocean heat content has already happened (e.g. Levitus et al. 2009) and this implies an increase in the global mean net ocean heat gain. However, the expected change is small, only about 0.5 W m−2. This signal is too small to be detectable given the accuracy of currently available heat flux datasets, and this situation is unlikely to change in the near future. A strengthening of the hydrological cycle will influence the ocean-atmosphere exchange of freshwater and potentially leave an imprint in ocean salinity. Due to problems with obtaining reliable precipitation measurements, the level of uncertainty in freshwater flux datasets is greater than that for heat flux, and it is again difficult to detect anthropogenic climate change in this variable. However, there is some evidence that changes in the hydrological cycle have modified ocean salinity, as salinity acts as an integrator of variations in the surface freshwater exchange (Stott et al. 2008).
In this paper, I provide a short overview of the current state of ocean-atmosphere interaction research. A thorough review of all aspects of air-sea exchanges was carried out by the Working Group on Air-Sea Fluxes in the late 1990s (WGASF 2000) and this remains a major resource which the interested reader is recommended to consult. A further valuable point of reference from the perspective of the ocean observing system is the Plenary White Paper on air-sea fluxes prepared for OceanObs'09 (Gulev et al. 2009). Progress in understanding ocean-atmosphere interaction, in the face of a fundamental sampling problem and uncertainty over significant elements of the underlying physics, has been the result of dedicated efforts by a wide international research community. I have attempted to summarise some of the key results here from a personal perspective, which stems from my own research developing and analysing in situ observation based fields and, more recently, studying the wider role of fluxes using coupled models.
I began my research career studying a very different class of surface flux: the effects of the infalling flux of primordial gas (primarily neutral hydrogen) onto the discs of spiral galaxies (Josey and Tayler 1991; Josey and Arimoto 1992). This presents a very different set of research
problems but provides an interesting alternative perspective on the effects that surface exchanges have on a system. I count myself lucky to have worked initially in this field and subsequently on the equally fascinating, and arguably more important, role of surface fluxes in the global climate system.
Following this introduction, an outline of the formulae used to estimate surface fluxes is given in Sect. 6.2 and an overview of the different flux datasets in Sect. 6.3. Flux evaluation methods are then considered in Sect. 6.4. Several issues related to the role of surface fluxes in the global climate system are discussed in Sect. 6.5, while the final Sect. 6.6 highlights several outstanding issues and potential future applications of the air-sea exchanges, particularly as regards estimates of variability in the ocean overturning circulation.
6.2 Surface Flux Theory

6.2.1 Flux Components and Spatial Variation

The net air-sea heat flux is the sum of four components: two turbulent heat flux terms (the latent and sensible heat fluxes) and two radiative terms (the shortwave and longwave fluxes). These are shown schematically in Fig. 6.1 together with their global mean values from a globally balanced air-sea heat flux dataset (Grist and Josey 2003).
Fig. 6.1 Schematic representation of the different components of the air-sea heat exchange, with global annual mean values of the key terms from a balanced flux dataset. (Grist and Josey 2003)
Fig. 6.2 Climatological annual mean fields of the different heat flux components (latent, sensible, longwave and shortwave) and the net heat flux. (Source: National Oceanography Centre 1.1a (NOC1.1a) flux climatology, units W m−2, Grist and Josey 2003)
Climatological annual mean fields of the different components and the net heat flux are shown in Fig. 6.2. The sign convention is for positive fluxes to represent heat gain by the ocean. For the turbulent heat flux components (i.e. the sensible and latent terms), the areas of strongest loss are over the Gulf Stream and Kuroshio, with latent heat losses of order 200 W m−2. Enhanced latent heat loss is also seen in the South-East Indian Ocean where the trade winds are particularly strong. The sensible heat flux is typically much smaller in magnitude than the latent term; its strongest losses occur in regions where very cold air is advected over the ocean from neighbouring land masses, particularly the Labrador and Norwegian Seas. The global variation in the net longwave flux is relatively small, with typical values ranging from 30–70 W m−2. However, within this range there is a degree of structure which reflects the balance between the sea-air temperature difference, the cloud cover and the amount of water vapour. The most noticeable feature is a band of reduced longwave loss under the Inter-Tropical Convergence Zone (ITCZ). In contrast, the shortwave field has a primarily meridional variation determined by the mean solar elevation, with peak values of order 200 W m−2. The main departures from this variation occur under regions of increased cloud cover such as the ITCZ. Finally, the net heat flux field is seen to be dominated by the contributions from the
shortwave and latent heat fluxes, with shortwave driven ocean heat gain in the Tropics and latent heat driven ocean heat loss over the western boundary current regions. The processes controlling these exchange terms and methods for their estimation are discussed below; for a more detailed review see WGASF (2000).
6.2.2 Turbulent Flux Bulk Formulae

The latent and sensible heat fluxes are proportional to the products of the near surface wind speed with the sea-air humidity and sea-air temperature difference respectively. However, the detailed form of these relationships remains poorly known under certain conditions, in particular at high wind speeds, and this provides a significant source of uncertainty in estimates of these fluxes. The sensible and latent heat fluxes, QH and QE, are generally determined using the following bulk formulae:
$Q_H = \rho c_p C_h u \, (T_s - (T_a + \gamma z))$   (6.1)

$Q_E = \rho L C_e u \, (q_s - q_a)$   (6.2)
where ρ is the density of air; cp, the specific heat capacity of air at constant pressure; L, the latent heat of vaporisation; Ch and Ce, the stability and height dependent transfer coefficients for sensible and latent heat respectively; u, the wind speed; Ts, the sea surface temperature; Ta, the surface air temperature with a correction for the adiabatic lapse rate, γ; z, the height at which the air temperature was measured; qs, 98% of the saturation specific humidity at the sea surface temperature, to allow for the salinity of sea water; and qa, the atmospheric specific humidity. A major amount of research has been devoted over the past few decades to accurately determining values for the transfer coefficients and their functional dependence on wind speed and near surface stability by means of direct flux measurements, in particular through the eddy correlation method. This work has led to the development of the COARE flux algorithm (Fairall et al. 2003), which has greatly reduced uncertainty in the values of the transfer coefficients, although questions still remain in several areas, particularly the high wind speed regime and inclusion of the effects of sea spray.
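As a minimal numerical sketch of Eqs. (6.1) and (6.2) in Python, the transfer coefficients below are treated as constants; in reality they depend on stability and measurement height and would be supplied by an algorithm such as COARE. The coefficient values and example inputs are illustrative assumptions, not values taken from any particular dataset.

```python
RHO_AIR = 1.2      # density of air (kg m-3)
CP_AIR = 1004.0    # specific heat capacity of air at constant pressure (J kg-1 K-1)
LV = 2.5e6         # latent heat of vaporisation (J kg-1)
GAMMA = 0.0098     # dry adiabatic lapse rate (K m-1)

def sensible_heat_flux(u, ts, ta, z=10.0, ch=1.0e-3):
    """Eq. (6.1): QH = rho * cp * Ch * u * (Ts - (Ta + gamma * z)).
    A constant Ch is a simplification of the stability-dependent coefficient."""
    return RHO_AIR * CP_AIR * ch * u * (ts - (ta + GAMMA * z))

def latent_heat_flux(u, qs, qa, ce=1.2e-3):
    """Eq. (6.2): QE = rho * L * Ce * u * (qs - qa), with qs taken as 98%
    of the saturation specific humidity at the sea surface temperature."""
    return RHO_AIR * LV * ce * u * (qs - qa)

# Illustrative values: 8 m/s wind, sea surface 1 K warmer than the 10-m air,
# and a 2 g/kg sea-air specific humidity difference.
print(sensible_heat_flux(u=8.0, ts=288.0, ta=287.0))   # ~ 9 W m-2
print(latent_heat_flux(u=8.0, qs=0.012, qa=0.010))     # ~ 58 W m-2
```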
6.2.3 Radiative Flux Parameterisations

The shortwave flux is primarily a function of solar elevation and cloud amount, with an additional dependence on ocean albedo. The longwave (infrared) flux is the difference between large upwelling and downwelling terms from the ocean and atmosphere respectively and depends on sea surface temperature, air temperature and humidity in addition to cloud amount. The longwave and shortwave flux components have been determined using a wide range of empirical formulae over the years
(e.g. Clark et al. 1974; Bignami et al. 1995; Josey et al. 2003). The performance of several bulk formula parameterisations for the net longwave flux has been assessed by comparison with radiometer measurements made at sea during a number of cruises (Josey et al. 1997). More recently, Josey et al. (2003) carried out a detailed evaluation of both the Clark et al. (1974) and Bignami et al. (1995) formulae using measurements made on a long meridional research cruise from 20–63°N at 20°W in the North Atlantic. This analysis made use of recent advances in understanding of various biases in the pyrgeometer instrument used to measure the longwave flux (Pascal and Josey 2000). Neither formula was found to be capable of providing reliable estimates of the atmospheric longwave flux over the full range of latitudes. The Clark formula overestimated the cruise mean measured longwave flux of 341.1 W m−2 by 11.7 W m−2, while Bignami underestimated it by 12.1 W m−2. Josey et al. (2003) developed an alternative formula which expresses the combined effects of cloud cover and other relevant parameters on the atmospheric longwave in terms of an adjustment to the measured air temperature. The net longwave flux, QL, across the ocean-atmosphere interface is given by:

$Q_L = Q_{LS} - (1 - \alpha_L) Q_{LA}$   (6.3)

where QLS is the emitted longwave radiation from the sea surface, QLA is the downwelling longwave radiation from the atmosphere, and the coefficient (1 − αL), where αL is the longwave reflectivity, takes account of the component of the downwelling radiation reflected from the sea surface. They characterise the downwelling longwave radiation by an effective blackbody temperature, TEff, such that

$Q_{LA} = \sigma_{SB} T_{Eff}^4$   (6.4)

where σSB is the Stefan-Boltzmann constant (5.67 × 10−8 W m−2 K−4). Given that the observed variable is Ta instead of TEff, they write TEff as the sum of Ta and a temperature adjustment, ΔTa, which includes the effects of cloud cover, atmospheric humidity and other, as yet unknown, variables on the downwelling longwave, such that

$Q_{LA} = \sigma_{SB} (T_a + \Delta T_a)^4$   (6.5)
ΔTa is thus the difference between the measured air temperature and the effective temperature of a blackbody which emits a radiative flux equivalent to the atmospheric longwave. The problem of obtaining a reliable estimate for QLA then becomes one of parameterising the dependence of ΔTa on cloud cover, vapour pressure and any other relevant variables; in other words, the air temperature is adjusted by the amount necessary to obtain the effective temperature of a blackbody with a radiative flux equivalent to that from the atmosphere. A simple parameterisation of the temperature adjustment solely in terms of the total cloud amount leads to a net longwave flux formula which has an improved mean bias error with respect to the cruise measurements of −1.3 W m−2. This formula still exhibits significant biases under certain situations, in particular overcast, low cloud base conditions at high latitudes.
Fig. 6.3 Comparison of the atmospheric component of the net longwave flux estimated using Eq. (6.6) with measurements made on a research cruise in the North Atlantic (estimated versus measured longwave, W m−2). (Modified version of figure from Josey et al. (2003), copyright American Geophysical Union)
However, by modifying this formula to include a dependence on the dew point depression, good agreement between the measured and estimated mean longwave over the full range of observations can be obtained and the mean bias error reduced to 0.2 W m−2 (see Fig. 6.3). The resulting formula for the net longwave flux is as follows:
$Q_L = \varepsilon \sigma_{SB} T_s^4 - (1 - \alpha_L)\, \sigma_{SB} \, \{ T_a + a n^2 + b n + c + 0.84 (D + 4.01) \}^4$   (6.6)
where ε is the emissivity of the sea surface, taken to be 0.98; αL = 0.045; and n is the fractional cloud cover. The terms a, b and c are empirical constants and D is the dew point depression, D = TDew − Ta, where TDew is the dewpoint temperature of the air in the surface layer (i.e. the temperature at which the air becomes saturated). The new formula was tested using independent measurements made on two more recent cruises and found to perform well, agreeing to within 2 W m−2 in the mean at mid-to-high latitudes.
In contrast to the formulae for the sensible, latent and longwave fluxes, which may be used with individual ship meteorological reports, widely-used formulae for the net shortwave flux typically provide monthly mean values. In particular, the following formula of Reed (1977) provides the monthly mean net shortwave flux,
$Q_{SW} = (1 - \alpha)\, Q_c \, [\,1 - 0.62\, n + 0.0019\, \theta_N\,]$   (6.7)
where α is the albedo, Qc is the clear-sky solar radiation, n is the monthly mean fractional cloud cover and θN is the monthly mean local noon solar elevation. Gilman and Garrett (1994) note that under conditions of low cloud cover the Reed formula estimate of the mean incoming shortwave can become greater than the clear-sky value if θN is sufficiently large, and they suggest that the incoming shortwave be constrained to be less than or equal to Qc.
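The two radiative parameterisations can be sketched in the same way. The empirical constants a, b and c of Eq. (6.6) are not reproduced here (they are given in Josey et al. 2003), so zero placeholders are used purely to keep the example runnable; the Reed coefficients are those quoted in Eq. (6.7) and the albedo value is an illustrative assumption.

```python
SIGMA_SB = 5.67e-8   # Stefan-Boltzmann constant (W m-2 K-4)
EMISSIVITY = 0.98    # sea surface emissivity
ALPHA_L = 0.045      # longwave reflectivity

def net_longwave(ts, ta, n, d, a=0.0, b=0.0, c=0.0):
    """Eq. (6.6): net longwave flux from SST ts and air temperature ta (K),
    fractional cloud cover n and dew point depression d (K). The defaults
    for the empirical constants a, b, c are placeholders only."""
    t_eff = ta + a * n**2 + b * n + c + 0.84 * (d + 4.01)
    return EMISSIVITY * SIGMA_SB * ts**4 - (1.0 - ALPHA_L) * SIGMA_SB * t_eff**4

def reed_shortwave(q_clear, n, theta_noon, albedo=0.06):
    """Eq. (6.7), Reed (1977): monthly mean net shortwave flux from the
    clear-sky radiation, monthly mean cloud cover and local noon solar
    elevation (degrees), with the Gilman and Garrett (1994) constraint
    that the incoming shortwave does not exceed the clear-sky value."""
    incoming = q_clear * (1.0 - 0.62 * n + 0.0019 * theta_noon)
    incoming = min(incoming, q_clear)
    return (1.0 - albedo) * incoming
```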
Finally, the net heat flux, QNet, is given by the sum of the four individual components,

$Q_{Net} = Q_E + Q_H + Q_L + Q_{SW}$   (6.8)
where QE is the latent heat flux; QH, the sensible heat flux; QL, the longwave flux and QSW, the shortwave flux.
6.2.4 Wind Stress

Estimates of the zonal, τx, and meridional, τy, components of the sea surface wind stress are typically obtained using the following equations,

$\tau_x = \rho C_D u_x (u_x^2 + u_y^2)^{1/2}, \qquad \tau_y = \rho C_D u_y (u_x^2 + u_y^2)^{1/2}$   (6.9)
where ux and uy are the zonal and meridional components of the wind speed respectively, and CD is the drag coefficient, which depends upon the height of the wind measurement and the atmospheric stability as well as wave characteristics (e.g. Smith 1988; Taylor and Yelland 2001). Climatological analyses of the wind stress using these formulae with ship meteorological reports have been carried out in a number of studies (e.g. Hellerman and Rosenstein 1983; Harrison 1989; Josey et al. 2002). More recently, various satellite products have become available which avoid the sampling issues inherent in ship observations but are restricted to the past decade or so, for example the microwave scatterometer measurements made by QuikSCAT (http://winds.jpl.nasa.gov/). The climatological annual mean wind stress field from the NOC1.1 flux dataset is shown in Fig. 6.4.
Fig. 6.4 Climatological annual mean wind stress, source NOC1.1 climatology, units N m−2, Josey et al. 2002. Colours show the magnitude of the wind stress vectors. (Modified version of figure in Josey et al. (2002), copyright American Meteorological Society)
The figure reveals patterns associated with the subtropical and subpolar gyres, the ITCZ and the band of intense westerly wind stress in the Southern Ocean. The curl of the wind stress field is a measure of local upwelling and downwelling, and its zonal integral at a given latitude provides a measure of the strength of the wind driven circulation via the Sverdrup transport (for further discussion of these fields with reference to the NOC1.1 climatology see Josey et al. 2002).
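A sketch of Eq. (6.9), and of the wind stress curl and zonally integrated Sverdrup transport just mentioned, is given below. The drag coefficient is taken as a constant, the grid is assumed regular in longitude and latitude, and no land mask is applied; all of these are simplifying assumptions rather than features of the NOC1.1 analysis.

```python
import numpy as np

RHO_AIR = 1.2        # air density (kg m-3)
RHO_SEA = 1025.0     # seawater density (kg m-3)
OMEGA = 7.292e-5     # Earth's rotation rate (s-1)
R_EARTH = 6.371e6    # Earth radius (m)

def wind_stress(ux, uy, cd=1.3e-3):
    """Eq. (6.9) with a constant drag coefficient (a simplification)."""
    speed = np.hypot(ux, uy)
    return RHO_AIR * cd * ux * speed, RHO_AIR * cd * uy * speed

def sverdrup_transport(taux, tauy, lon_deg, lat_deg):
    """Zonally integrated Sverdrup transport (Sv) from the wind stress curl
    on a regular lon/lat grid (arrays shaped [nlat, nlon]). Centred
    differences, no land mask; avoid latitudes near the poles and the
    equator where the calculation is not meaningful."""
    lon, lat = np.radians(lon_deg), np.radians(lat_deg)
    dx = R_EARTH * np.cos(lat)[:, None] * np.gradient(lon)[None, :]
    dy = R_EARTH * np.gradient(lat)[:, None]
    curl = np.gradient(tauy, axis=1) / dx - np.gradient(taux, axis=0) / dy
    beta = 2.0 * OMEGA * np.cos(lat) / R_EARTH       # meridional gradient of f
    v = curl / (RHO_SEA * beta[:, None])             # transport per unit width (m2 s-1)
    return np.nansum(v * dx, axis=1) / 1.0e6         # m3 s-1 -> Sv
```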
6.2.5 Freshwater Flux

The air-sea freshwater flux is simply the difference between evaporation lost from the ocean surface and precipitation gained by the ocean from the atmosphere, often written E-P (i.e. evaporation-precipitation). It is linked to the net heat flux, as the evaporation term corresponds to the latent heat flux component of the net heat exchange discussed above. Estimates of the evaporation are available from ship-based flux datasets, atmospheric model reanalyses and satellite measurements. Various precipitation products are available from satellites (Gulev et al. 2009), for example as a result of the Global Precipitation Climatology Project Version 2 (GPCPv2, Adler et al. 2003). However, there are significant regional differences between the various products and as a consequence precipitation is the least well determined surface exchange field. Atmospheric model reanalyses also provide precipitation, but here care must be taken as unphysical trends have been observed in some areas, particularly for the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis in the Tropics. Precipitation is difficult to measure directly at sea (Weller et al. 2008) but may be estimated from present weather codes in voluntary observing ship meteorological reports (via limited historical calibration against island station rain measurements) and was included in the NOC1.1 flux dataset (Josey et al. 1999). However, further work is needed before this method can be reliably used for climate studies.
6.2.6 Density Flux

The combined impact of the net heat flux and evaporation on the buoyancy of water in the sea surface layer may be expressed in terms of the density flux. The total density flux, Fρ, into the ocean surface is given by the following equation,

$F_\rho = -\rho \left( \alpha \frac{Q_{Net}}{\rho c_P} - \beta S \frac{E - P}{1 - S/1000} \right)$   (6.10)

where ρ is the density of water at the sea surface; cP, the specific heat capacity of water; S, the sea surface salinity; and α and β, the thermal expansion and haline contraction coefficients, which are defined as follows,

$\alpha = -\frac{1}{\rho} \frac{\partial \rho}{\partial T}; \qquad \beta = \frac{1}{\rho} \frac{\partial \rho}{\partial S}$   (6.11)
The density flux is frequently split into thermal, FT, and haline, FS, contributions, defined as follows,

$F_\rho = F_T + F_S$   (6.12)

where

$F_T = -\alpha \frac{Q_{Net}}{c_P}; \qquad F_S = \rho \beta S \frac{E - P}{1 - S/1000}$   (6.13)
Heat loss from the ocean (QNet < 0) and net evaporation (E > P) then result in positive values for FT and FS respectively and an increase in the density of the near surface layer. The thermal term usually dominates the density flux, with the haline term playing only a minor role (e.g. Josey 2003; Grist et al. 2007), except at high latitudes.
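Equations (6.10)–(6.13) translate directly into code. In the sketch below the thermal expansion and haline contraction coefficients are fixed, representative mid-latitude values rather than being evaluated from a full equation of state, and E−P is assumed to be supplied as an equivalent freshwater velocity; both are simplifying assumptions.

```python
CP_SEA = 3990.0   # specific heat capacity of seawater (J kg-1 K-1)

def density_flux(q_net, e_minus_p, sss, rho=1025.0, alpha=2.0e-4, beta=7.6e-4):
    """Thermal, haline and total surface density flux (Eqs. 6.10-6.13).

    q_net      net heat flux (W m-2, positive into the ocean)
    e_minus_p  evaporation minus precipitation (m s-1)
    sss        sea surface salinity
    alpha,beta representative expansion/contraction coefficients (K-1, psu-1)
    Returns (F_T, F_S, F_rho) in kg m-2 s-1; positive values increase the
    density of the near surface layer.
    """
    f_thermal = -alpha * q_net / CP_SEA
    f_haline = rho * beta * sss * e_minus_p / (1.0 - sss / 1000.0)
    return f_thermal, f_haline, f_thermal + f_haline

# Example: 100 W m-2 of surface heat loss and 1 m/yr net evaporation at S = 35;
# the thermal contribution dominates, consistent with the discussion above.
print(density_flux(q_net=-100.0, e_minus_p=1.0 / (365 * 86400), sss=35.0))
```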
6.3 Air-Sea Flux Datasets

The three primary sources of information regarding air-sea fluxes are surface meteorology reports (mainly from Voluntary Observing Ships), satellite observations and atmospheric model reanalyses which assimilate various data types. All three sources have been employed with the bulk formulae (Eqs. 6.1 and 6.2) to estimate the latent and sensible heat fluxes given a knowledge of the surface meteorology. The radiative fluxes have been determined either from empirical formulae, of the type described in the previous section, or from radiative transfer models.
Many air-sea flux datasets have been developed over the past four decades. For example, the pioneering effort of Bunker (1976) relied on merchant ship meteorological reports, while in recent years satellite observations and output from numerical weather prediction models have been combined in new hybrid products (e.g. Yu and Weller 2007). The first flux datasets comprised climatological monthly fields of either the full set or a subset of the heat, momentum and freshwater fluxes, typically based on observations spanning many decades. In the 1990s, several analysis efforts continued to focus on producing climatological fields and addressing specific scientific problems, principally achieving closure of the global ocean heat budget, but in addition provided the individual monthly fields on which the climatologies were based (da Silva et al. 1994; Josey et al. 1998). In recent years, climatological fields have taken a back seat and several new flux products contain fields at daily as well as monthly timescales. This tendency has been driven, in part, by the high time resolution possible with the atmospheric reanalyses and the need to include high frequency variability in forcing fields for ocean model runs. A full survey of the wide range of methods used to produce flux datasets and the details of the underlying observing system is beyond the scope of the current paper. Instead an overview of the main classes of flux datasets is presented and the interested reader is referred to WGASF (2000) and Gulev et al. (2009) for further details.
6.3.1 In Situ Observation Based Fields

For many years the only source of information regarding air-sea fluxes was routine merchant ship meteorological reports collected under the Voluntary Observing Ships (VOS) programme and collated to form the Comprehensive Ocean-Atmosphere Dataset (COADS, Woodruff et al. 1987), which has now become International COADS (Worley et al. 2005, 2009). Estimates of the various surface heat flux components were obtained either from individual surface meteorology reports, or from monthly averaged values of the key variables such as wind speed (although this has the potential to lead to biases as a result of neglected correlations between the different variables, Josey et al. 1995), using formulae of the type discussed in Sect. 6.2. The resulting flux estimates are then combined using various averaging and interpolation techniques to form gridded fields. Two widely used flux products developed using this approach have been the UWM/COADS dataset of da Silva et al. (1994) and the National Oceanography Centre 1.1 (NOC1.1) flux dataset (Josey et al. 1998, 1999; formerly termed the Southampton Oceanography Centre (SOC) flux climatology), recently revised using optimal interpolation (NOC2, Berry and Kent 2009) to include error estimates (Kent and Berry 2005).
The major problem with ship based flux datasets is the uneven distribution of meteorological reports, which are heavily concentrated along the major shipping routes, leading to significant undersampling of the required fields in many regions, including much of the Southern Hemisphere (for example see Fig. 6.2 of Josey et al. 1999). This is likely to have played a major role in the ocean heat budget closure problem, which has affected to a certain extent all flux datasets produced to date and is manifest as a 20–30 W m−2 global mean net ocean heat gain, while in reality the budget should be closed to of order 1 W m−2 at decadal and longer timescales. We will return to this issue in Sect. 6.5.1 but note here that several flux datasets have achieved closure by applying inverse analysis techniques with hydrographic observations of ocean heat transport as constraints (e.g. the NOC1.1a fields described in Grist and Josey (2003), which are an adjusted, globally balanced version of the original NOC1.1 climatology).
A further issue with ship based fluxes is the diverse range of instrumentation types used for making the routine meteorological measurements (e.g. air temperature, specific humidity) under the VOS programme. Each sensor type has its own error characteristics that need to be determined in order to correct for biases prior to determining the fluxes (e.g. Josey et al. 1999). A recent development, targeted at reducing these errors, is the VOS Climate Project (VOSCLIM), originally suggested by Taylor et al. (2001). One of the goals of this project is to provide a high-quality VOS data subset that can be used to better calibrate the VOS fleet as a whole. A further initiative, the Shipboard Automated Meteorological and Oceanographic System (SAMOS, Smith et al. 2010), seeks to collect high quality meteorological and flux measurements from research ships and provide these as a resource which may be used for better determination of biases in both the VOS measurements and other flux products (e.g. the reanalyses). SAMOS has focused on data obtained from the
US research ships but provides an example which, if applied internationally, would create an even more valuable resource.
6.3.2 Remotely Sensed Fluxes

Remote sensing is now capable of providing observations of some of the key air-sea flux terms and has the major advantage over ship based estimates of essentially complete global coverage. However, satellite estimates suffer because it is not yet possible to reliably measure near surface air temperature and humidity directly from space. Indirect techniques must be used instead and this leads to a major source of uncertainty in the turbulent heat flux terms, which are critically dependent on the sea-air temperature and humidity differences. Estimates of the radiative flux terms are available from various sources, most recently from the Moderate Resolution Imaging Spectro-radiometer (MODIS, e.g. Pinker et al. 2009), and have been combined with indirect estimates of the turbulent fluxes to form net heat flux products; a recent example is the Hamburg Ocean-Atmosphere Parameters and Fluxes from Satellite Data version 3 (HOAPS3, Andersson et al. 2010). However, significant uncertainties remain in such net heat flux fields because of problems with determining the latent and sensible heat fluxes. In contrast to the net heat flux, the wind stress is now well determined as a result of QuikSCAT, although there are concerns as to whether this will remain the case in the near future given the likely imminent demise of this mission. Precipitation has also been determined using various techniques, including infrared measurements of cloud top brightness temperature, which acts as a proxy for rain rate, and passive microwave measurements. Such estimates have been combined under the Global Precipitation Climatology Project (GPCP) to form best estimates of the rainfall (GPCPv2, Adler et al. 2003). However, validation of these fields over the ocean is challenging due to the lack of high quality measurements from rain sensors and the difficulty of making this measurement (e.g. Weller et al. 2008). As a consequence, major uncertainty remains in the precipitation fields, with knock-on effects for attempts to estimate the air-sea freshwater flux (E-P).
6.3.3 Atmospheric Model Reanalyses

Numerical weather prediction models assimilate a wide range of observations including surface meteorological reports, radiosonde profiles and remote sensing measurements. In recent decades, these models have had the potential to provide the complete set of air-sea flux fields at high (6 hourly) resolution with full spatial coverage. However, they are of course dependent on the model physics which, although constrained to some extent by the assimilated observations, has the potential to produce large biases, particularly in the radiative flux fields and precipitation (e.g. Trenberth et al. 2009). Fixed versions of the models run over multidecadal
periods are commonly referred to as atmospheric reanalyses; the two major products are those from the National Centers for Environmental Prediction and the National Center for Atmospheric Research (NCEP/NCAR) and from ECMWF. For the reanalyses, the turbulent flux terms are again estimated from the model surface meteorology fields, while the shortwave and longwave fluxes are output from the radiative transfer component of the atmospheric model. To date, available reanalyses have been on a relatively coarse grid of order 1.5–2°. However, higher resolution reanalyses are anticipated in the near-term which will, for the first time, assimilate radiance measurements from satellites. There are hopes that these new products will contain smaller biases than those currently available (Trenberth et al. 2009).
6.3.4 Other Flux Products

In addition to the three primary classes of flux dataset described above, flux fields are available from several other types of products. The leading example here is the Objectively Analyzed air-sea Fluxes (OAFLUX) dataset (Yu and Weller 2007), which blends reanalysis and satellite surface meteorology fields prior to estimation of the fluxes, but still suffers from being unable to close the global ocean heat budget. A further product, combining reanalysis and satellite measurements, is the Common Ocean Reference Experiment (CORE) flux dataset (Large and Yeager 2009), which has been designed to provide forcing fields for ocean models. This requires closure of the ocean heat budget, which has been achieved via adjustments to several of the underlying fields which, although plausible, are not the result of comprehensive analysis. Thus this product must be regarded as a possible solution to the closure problem rather than necessarily being the correct solution.
The climatological annual mean net air-sea heat flux field for the mid-latitude North Atlantic from four different flux products (including OAFLUX) is illustrated in Fig. 6.5. The same broad scale pattern is observed for each dataset, with strong heat loss over the Gulf Stream and a transition towards ocean heat gain from west to east. The NCEP/NCAR fields tend to have stronger heat loss than the other three datasets considered, and this is partly due to use of a transfer coefficient scheme which results in high values that are not supported by observational analyses. NOC1.1, NOC2 and OAFLUX all show similar results for the location of the zero net heat flux line, which extends from south-west to north-east across the basin.
Fig. 6.5 Annual mean net air-sea heat flux from (a) NCEP/NCAR, (b) NOC1.1, (c) NOC2 and (d) OAFLUX for the common period 1984–2004, units W m−2. Blue colours indicate ocean heat loss to the atmosphere, red indicate ocean heat gain
Surface fluxes are also available from various ocean synthesis efforts, that is, ocean models with data assimilation such as the Estimating the Circulation and Climate of the Ocean (ECCO) model. These are typically forced by NCEP or ECMWF reanalysis fields which are then adjusted as a result of the assimilation process. For the ECCO model, comparisons against independent measurements in some regions suggest the resulting fields may be an improvement over the original forcing data (Stammer et al. 2004). However, there remains a high degree of divergence between the different ocean model syntheses, and although this method holds some promise, it is not yet at the stage where it can provide reliable estimates of the surface exchanges. Finally, the so-called residual method obtains the net surface heat flux as the residual of the top-of-the-atmosphere heating, measured by satellites, and the atmospheric heat divergence obtained from reanalysis (e.g. Trenberth and Caron 2001). This method has the potential to provide a valuable complementary estimate of the net heat exchange (but not, of course, of the individual components). However,
it is dependent on the accuracy of the atmospheric reanalysis which, as noted above, requires improvement. Each of these classes of flux product has its own advantages and disadvantages, and it is not possible to recommend a single best flux product; rather, the choice of flux dataset must be guided by the scientific issue which is to be addressed.
6.4 Methodology for Evaluating Surface Fluxes

The discussion above has provided some indication of the diverse range of air-sea flux datasets that are now available for the community to use. All of these are limited in some manner by spatially and temporally dependent biases, and it is therefore vital that each new flux dataset is properly evaluated against a range of independent measures in order to quantify these biases and understand their causes. Historically, this has not been the case, partly because of a lack of reference data. This issue has been recognised for some time; in particular, Josey and Smith (2006) developed a methodology for evaluation of air-sea heat, freshwater and momentum flux datasets in response to a recommendation of the CLIVAR Global Synthesis and Observation Panel (GSOP). The panel recognised the need for such guidelines in order to facilitate consistent evaluation and intercomparison of the many new flux datasets being developed (particularly those from ocean reanalyses). The methodology makes use of both research quality data from flux buoys and research vessels (local evaluation) and large scale constraints (regional and global evaluations).
For clarification of terminology, Josey and Smith (2006) defined two main classes of flux dataset. The first consists of the large scale 'gridded flux datasets' (typically at spatial resolutions of order 1° and timescales from 6 hourly to monthly) produced from in situ, model or remote sensing sources, or some combination thereof. The second class of datasets was termed 'research quality data', most of which are in-situ point measurements (for example radiative fluxes and meteorological variables from research buoys/vessels) at high temporal resolution (typically available as averages on timescales of order minutes). In summary, their key evaluation points are as follows:
a. Local evaluation of time averaged fluxes and meteorological variables at specific grid locations with corresponding research quality data from surface flux reference moorings and vessels.
b. Regional evaluation of either gridded flux product ocean transports or, preferably, area averaged fluxes with corresponding research quality data from hydrographic sections.
c. Global evaluation of gridded flux product area weighted mean fluxes through closure of the appropriate property budget within observational constraints.
They noted several difficulties in implementing this method, including the lack of a central archive of heat and freshwater transports required for point b. This remains a problem at present and the creation of such an archive would be highly
desirable for flux evaluation studies. Despite these problems, this methodology has been adopted to some extent in recent studies, particularly for the OAFlux and CORE products (Yu and Weller 2007; Large and Yeager 2009). Evaluations of flux products in specific air-sea interaction regimes using flux reference buoys are becoming more common practice as the global distribution of such buoys increases, fostered through the OceanSITES programme (Send et al. 2009). A recent example is an evaluation of the new satellite based J-OFURO2 flux dataset using two moorings in the Kuroshio region of the north-west Pacific Ocean (Tomita et al. 2010).
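In practice, the local evaluation step (point a above) amounts to forming matchups between a gridded product, sampled at the mooring location and averaged to the mooring timescale, and the research quality buoy record, and then computing simple difference statistics. The sketch below assumes both series have already been placed on a common time axis; the variable names are illustrative and do not refer to any specific product.

```python
import numpy as np

def evaluate_against_buoy(product_flux, buoy_flux):
    """Bias, RMSE and correlation of a gridded flux product against a
    collocated flux reference mooring. Both inputs are 1-D arrays on a
    common time axis; NaNs mark missing values."""
    valid = np.isfinite(product_flux) & np.isfinite(buoy_flux)
    diff = product_flux[valid] - buoy_flux[valid]
    return {
        "n_matchups": int(valid.sum()),
        "bias": float(diff.mean()),                     # product minus buoy
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "correlation": float(np.corrcoef(product_flux[valid],
                                         buoy_flux[valid])[0, 1]),
    }
```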
6.5 Surface Fluxes in the Global Climate System

6.5.1 The Implied Ocean Heat Transport and the Closure Problem

The excess of heat gain over heat loss in the Tropics, as revealed in the net heat flux spatial field (Fig. 6.2), requires that the oceans transport energy away from the equator and towards the poles. Evidence for this latitudinal variation is provided by direct estimates of the ocean heat transport from hydrographic sections, which were collected in significant numbers for the first time as part of the World Ocean Circulation Experiment (WOCE); this variation is illustrated by the crosses in Fig. 6.6. In addition to the direct estimates of the heat transport, indirect estimates, Hϕ, may be obtained by integrating the net heat flux, QN, across successive latitude bands from a reference latitude ϕo which has a known value of the heat transport, Ho, from hydrography,
$H_\phi = H_o - \int_{\phi}^{\phi_o} \int_{\lambda_1}^{\lambda_2} Q_N \, d\lambda \, d\phi$   (6.14)
where λ1 and λ2 are the longitude limits at the western and eastern continental boundaries respectively of a given latitude band. The general form of this equation includes a term that accounts for heat storage by the ocean. However, as heat storage is relatively small at multi-decadal timescales, the storage term may be set equal to zero for calculating the implied climatological transport. Taking this approach, the implied ocean heat transports obtained with a range of surface flux datasets for the Atlantic, Pacific and Global Oceans are shown in Fig. 6.6. These reveal a peak in the transport values at about 20°N, although the details differ between the datasets. In some cases, the hydrography can be used to indicate problems with the surface forcing fields; for example, the ECMWF product diverges from hydrography in the southern hemisphere.
Fig. 6.6 Climatologically implied ocean heat transport derived by integrating the net surface heat flux southwards from 65°N. Key: ECMWF, dash-dot red; Large and Yeager (2009), dashed blue; NCEP, dashed magenta; NOC1.1a, solid black; Trenberth residual, dashed black; UWM/COADS, solid grey. The crosses with error bars represent direct hydrographic estimates of the heat transport. (Updated version of Fig. 6.9 in Grist and Josey (2003), copyright American Meteorological Society)
be noted that all of the flux products shown have been adjusted either directly or indirectly to achieve global closure and this to some extent ensures agreement with the hydrography. In the case of the reanalyses, the values for the transfer coefficients in the turbulent flux formulae are higher than can be supported by observations (e.g. Renfrew et al. 2002). NOC1.1a and UWM/COADS have been made to agree with at least some of the hydrographic values using the technique of inverse analysis, first applied by Isemer et al. (1989). Most recently, the Large and Yeager (2009) fields have been modified using various plausible adjustments as noted earlier. Without such adjustments, the implied ocean heat transport would diverge rapidly from the hydrographic values and this is a manifestation of the more
general ocean heat budget problem, i.e. the inability to close the global ocean heat budget at decadal timescales to within the 1 W m−2 required to avoid unrealistically large warming signals. The budget closure problem has been recognised for many years and, despite various advances in our understanding of air-sea interaction, it remains a major issue for both ship based (e.g. NOC1.1 and NOC2) and remote sensing/reanalysis hybrid products (OAFLUX), all of which have global mean net heat flux values in the range 20–30 W m−2. Progress towards resolution of this problem has been limited and it is likely to be the result of the combination of various small biases which amount to 3–5 W m−2 in the global mean. These are likely to include (i) sampling issues revolving around the gross deficit of information on air-sea exchange in the Southern Hemisphere, (ii) missing physics in the high and low wind speed regime applications of the turbulent bulk flux formulae, (iii) a potential fair weather bias, i.e. avoidance of high wind regions in merchant ship reports, which will affect both in situ climatologies (directly) and reanalyses (indirectly, as they rely on surface observations in the data assimilation), (iv) residual biases in ship meteorological reports which have yet to be determined, and (v) uncertainty in the empirical formulae used to estimate the radiative fluxes (in situ based fields) and problems with the representation of clouds (reanalyses). Only by a careful examination of each of these issues will progress be made towards obtaining an accurate picture of the global ocean-atmosphere heat exchange field. At a time when it is possible to calculate the climate change related signal in the global mean net heat flux to be of order 0.5 W m−2 from observed variations in ocean heat content, it remains a major problem that it is not possible to reliably close the global mean ocean heat budget to better than 20 W m−2.
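A minimal numerical sketch of Eq. (6.14) is given below: it integrates a gridded net heat flux field southward from a reference latitude, neglecting the storage term and including the spherical area weighting (R² cos ϕ) that is implicit in the schematic form of the equation. The grid, the reference transport (set to zero here) and the flux field are placeholders; a real calculation would use one of the flux products discussed above together with a hydrographic reference value.

```python
import numpy as np

R_EARTH = 6.371e6  # Earth radius (m)

def implied_heat_transport(qnet, lat, lon, h_ref=0.0):
    """Implied ocean heat transport (PW) from a net surface heat flux field.

    qnet : 2-D array (nlat, nlon) of net downward heat flux, W m-2,
           with land points masked as NaN.
    lat, lon : 1-D arrays of grid-cell centre latitude/longitude (degrees),
               assumed regularly spaced, latitude ordered north to south.
    h_ref : heat transport at the northernmost latitude (reference value
            from hydrography); zero here purely for illustration.

    A discrete form of Eq. (6.14) with the storage term neglected and
    spherical area weighting included.
    """
    dlat = np.deg2rad(abs(lat[1] - lat[0]))
    dlon = np.deg2rad(abs(lon[1] - lon[0]))
    # Area of each grid cell (m^2), varying with latitude.
    cell_area = (R_EARTH ** 2) * np.cos(np.deg2rad(lat))[:, None] * dlat * dlon
    # Zonally integrated heat input per latitude band (W).
    band_input = np.nansum(qnet * cell_area, axis=1)
    # H(lat) = H_ref minus the heat input between lat and the reference.
    transport = h_ref - np.cumsum(band_input)
    return transport / 1e15  # convert W to PW

# Example: 1-degree grid starting at 65 N; replace qnet with a real flux field.
lat = np.arange(64.5, -80.5, -1.0)
lon = np.arange(0.5, 360.5, 1.0)
qnet = np.zeros((lat.size, lon.size))
print(implied_heat_transport(qnet, lat, lon)[:5])
```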
6.5.2 Climate Change Related Trends in Surface Fluxes

Both observation and model based analyses of changes in the surface air-sea heat flux associated with increasing global ocean heat content have revealed that the anthropogenic climate signal is small compared to natural variability (Pierce et al. 2006; Levitus et al. 2009). Changes in the net surface heat flux over the past 50 years at global and basin scales are expected to be about 0.5 W m−2 with corresponding changes in the individual heat flux components of less than 2 W m−2. Lozier et al. (2008) have examined the spatial pattern of heat-content change in the North Atlantic using historical hydrographic station data from the National Oceanic Data Center World Ocean Database from 1950 to 2000. They find that the total heat gained by the North Atlantic Ocean is equivalent to a basin wide increase in the flux of heat across the ocean surface of 0.4 W m−2. However, they note that it is not possible to say whether this gain is due to anthropogenic warming because natural variability may be masking this signal. An example of the total net heat flux variability since 1949 from a region in the mid-latitude North Atlantic is given in Fig. 6.7. The figure shows a time series of
Fig. 6.7 Monthly mean net air-sea heat flux anomaly for the box (40–55°N, 20–40°W) from NCEP/NCAR (red), NOC1.1 (green), NOC2 (blue) and OAFLUX (black), units W m−2
the monthly net heat flux anomaly (i.e. with seasonal cycle removed) averaged over an example box (40–55°N, 20–40°W) in the mid-latitude North Atlantic for each of the four flux datasets. Strong month to month variability is evident in the figure with box averaged anomalies often exceeding 50 W m−2. Similar variations are observed in each of the datasets for the periods in which they overlap. To some extent this is to be expected as, despite major differences in analysis methods, observations from Voluntary Observing Ships are a primary source of data for each of the flux products considered. The advent of Argo float data has enabled the study of the role of surface heat flux variability in causing interannual variability in ocean heat content in the North Atlantic in recent years (e.g. Hadfield et al. 2007; Wells et al. 2009). At decadal timescales, the relative roles of ocean heat transport and surface heat flux variations in North Atlantic temperature variability have been examined from an ocean model perspective by Marsh et al. (2008) and Grist et al. (2010).

An intensification of the hydrological cycle is also expected as a result of anthropogenic climate change (e.g. IPCC 2007) with regional impacts on E−P as spatial patterns and the relative intensity of the evaporation and precipitation shift. It is worth noting that changes in evaporation imply a corresponding change in the latent heat flux, the two being related by the following simple equation, QE = ρ0 L E
where ρ0 is the fresh water density as a function of temperature, L is the latent heat of vaporisation and E is the evaporation rate. Thus, analyses of changes to the evaporation rate using observational datasets also need to take into account the implied change in latent heat flux and use the value obtained as a check on whether the changes in E are physically plausible. This is particularly important as spurious trends in E have the potential to arise from time dependent biases in the wind speed.
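The plausibility check described above amounts to a one-line conversion. The sketch below converts an assumed change in evaporation rate into the implied change in latent heat flux using QE = ρ0 L E; the numerical values (fresh water density of 1000 kg m−3, latent heat of vaporisation of 2.5 × 10^6 J kg−1 and an evaporation change of 50 mm per year) are purely illustrative.

```python
# Plausibility check: convert a change in evaporation rate into the implied
# change in latent heat flux via Q_E = rho_0 * L * E. The numbers below are
# illustrative only and are not taken from any particular dataset.

RHO_FW = 1000.0   # fresh water density (kg m-3); temperature dependence ignored
L_VAP = 2.5e6     # latent heat of vaporisation (J kg-1), approximate

def latent_heat_flux_change(delta_e_mm_per_year):
    """Latent heat flux change (W m-2) implied by an evaporation change
    given in mm per year."""
    delta_e = delta_e_mm_per_year * 1e-3 / (365.25 * 24 * 3600)  # -> m s-1
    return RHO_FW * L_VAP * delta_e

# An apparent evaporation increase of 50 mm per year implies ~4 W m-2 of extra
# latent heat loss, which can be checked against heat budget constraints.
print(round(latent_heat_flux_change(50.0), 2))
```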
6.5.3 Relationship to Major Modes of Atmospheric Variability

It is now well recognised that atmospheric variability on a range of timescales may be characterised to a certain extent by various spatial patterns or modes typically expressed in terms of pressure on a given level. These modes have been determined primarily using statistical techniques, such as principal component analysis (Barnston and Livezey 1987) but have also been indexed in some cases via their expres-
sion in the surface pressure fields as the difference in pressure anomaly (i.e. actual value minus long-term mean) between two points (e.g. Hurrell 1995). The leading mode in the Atlantic is the North Atlantic Oscillation (NAO), characterised by variations in the pressure difference between the Azores High and Iceland Low. The NAO has been the subject of numerous studies documenting its influence on a range of oceanic, land and atmospheric physical processes, as well as its influence on ecosystems (see the comprehensive review of Hurrell et al. 2003). Likewise, in the Tropical Pacific the El Nino-Southern Oscillation (ENSO) east-west pattern associated with variations in the strength of the Walker Cell has profound consequences for the ocean and neighbouring land masses. It too has been the subject of intensive research over many decades and received significant attention prior to the discovery of the NAO (Philander 1990). More recently, a north-south variation in the pressure difference between the Southern Ocean and Antarctic landmass has been dubbed the Southern Annular Mode (SAM). Attention here has focused on the strengthening of the SAM index over the past several decades and the consequences of the associated southwards displacement of the main westerly wind belt over the Southern Ocean (e.g. Ciasto and Thompson 2008; Böning et al. 2008).

Mode-associated variations in the surface pressure gradient naturally lead to changes in the strength and direction of the wind field, and in the source region for the air mass advected over a particular region of ocean (and thus its temperature and humidity characteristics). As discussed previously (Sect. 6.2.2, Eqs. 6.1 and 6.2), the wind speed and near surface air temperature and humidity are the primary variables which establish the strength of the latent and sensible heat loss, hence the leading modes of variability have a clear signature in the surface heat flux (e.g. Josey et al. 2001 for the NAO). The air temperature and humidity also impact the longwave flux (Eq. 6.6), and the change in air mass characteristics can also lead to variations in cloud amount, thus the modes may also impact on both radiative flux terms.

As an example of mode impacts on the wind speed and net surface heat flux, these fields are shown in Fig. 6.8 for the two leading modes of variability in the North Atlantic, the aforementioned NAO, and the second mode which is widely termed the East Atlantic Pattern (EAP). The NAO exhibits the well known north-south dipole in sea level pressure which results in stronger than normal winds from the north-west over the Labrador Sea and heat flux anomalies of up to −80 W m−2 in this region for a unit positive value of the NAO index. Other notable features include enhanced flow of air from the south-east over the Gulf Stream which additional analysis shows to be anomalously warm, reducing the heat loss in this region. The EAP is characterised by a monopole structure in sea level pressure with lower than normal values in the East Atlantic at about 50–55°N. This gives rise to anomalously strong northerly winds in the mid-high latitude western Atlantic and strong heat loss at 45–50°N. Other features may be identified in both plots, and in general these are consistent with the increase in wind speed and change in air temperature expected from the anomalous wind direction.
Note that in addition to the leading mode, there may be a further 3 or 4 modes which can be identified as being of importance for understanding the atmospheric variability and its impacts depending on the region considered.
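A simple way to examine such mode signatures is to composite flux anomalies on a mode index, as sketched below. This is a basic positive-minus-negative composite on a standardised index and is not necessarily the same procedure used to construct Fig. 6.8; the random input data merely stand in for winter-mean flux anomalies and an NAO index series.

```python
import numpy as np

def composite_on_mode_index(flux_anom, index, threshold=1.0):
    """Composite a field of heat flux anomalies on a mode index.

    flux_anom : array (ntime, nlat, nlon) of seasonal-mean net heat flux
                anomalies (W m-2).
    index     : array (ntime,) of the standardised mode index (e.g. NAO).
    Returns the mean anomaly pattern for index > threshold minus the mean
    for index < -threshold.
    """
    flux_anom = np.asarray(flux_anom, float)
    index = np.asarray(index, float)
    pos = flux_anom[index > threshold].mean(axis=0)
    neg = flux_anom[index < -threshold].mean(axis=0)
    return pos - neg

# Illustrative use with random data standing in for winter (Sep-Mar) means.
rng = np.random.default_rng(0)
flux = rng.normal(0.0, 30.0, size=(49, 40, 60))  # 1958-2006 winters
nao = rng.normal(0.0, 1.0, size=49)
pattern = composite_on_mode_index(flux, nao)
print(pattern.shape)
```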
Fig. 6.8 Composites of the NCEP/NCAR reanalysis net heat flux (coloured field, units W m−2), sea level pressure (contours, intervals 1 mb, negative values solid, zero and positive values dashed) and wind speed (arrows) on winter-centred Climate Prediction Center NAO and EAP values for the period 1958–2006
In addition to the net heat flux, the main modes of variability also have a direct impact on the freshwater flux, as the change in the latent heat flux has an equivalent signature in the evaporation field, and variations in the evaporation result in modified precipitation downstream. For example, such variations in E−P associated with the NAO and EAP have been identified by Josey and Marsh (2005) and linked by them to changes in ocean surface salinity. These authors find that much of the multidecadal freshening in the eastern subpolar gyre region of the North Atlantic from the 1960s through to the 1990s can be attributed to a change in the strength of the East Atlantic Pattern (see also Myers et al. 2007 for an extension of this work to the Labrador Sea). Variations in full depth ocean salinity are more difficult to relate to changes in the surface exchanges and this implies a leading role for advective effects (Boyer et al. 2007).

The combined effects of heat and freshwater flux anomalies lead to mode-related changes in the surface density flux field (via Eq. 6.10). Such changes have their greatest impact in dense water formation regions, for example at high latitudes in the North Atlantic. Here, changes in the surface buoyancy loss associated with the NAO have led to a multidecadal variation in the location of the dominant site for deep water formation from the Greenland Sea to the Labrador Sea as the NAO shifted from a primarily negative state in the 1960s to a positive state in the 1990s (Dickson et al. 1996, 2008). Finally, as regards mode impacts on the surface exchanges, changes in
the wind field have a direct impact on the wind stress (via Eq. 6.9) and thus the wind driven response of the ocean. See for example Josey et al. (2002), who include an analysis of variations in the Ekman transport and wind driven upwelling associated with the NAO as part of a wider study of the wind stress forcing of the ocean.

The brief discussion of mode impacts on high latitude buoyancy loss in the previous section opens up a wider area, which will be only briefly touched on here, namely the dominant processes controlling dense water formation. Recent work has focused on both the wind-driven preconditioning for such events in the Nordic Seas (Gamiz-Fortis and Sutton 2007) and the role of heat loss (Grist et al. 2007, 2008). Gamiz-Fortis and Sutton (2007) find that doming of isopycnals in response to wind stress curl anomalies and the consequent increase in surface density due to upwelling play a role in dense water formation. Grist et al. (2007, 2008) have studied the impacts of heat flux extrema on Nordic Seas dense water formation and transport through the Denmark Strait in a range of coupled models. They find that heat flux extrema alone are sufficient to trigger new dense water production and find a consistent response at the Denmark Strait across the models considered. An increase in heat loss from −80 to −250 W m−2 results in a strengthening of the dense water transport through the Strait of 1–2 Sv depending on the model considered. Other processes are also expected to play a significant role in dense water formation, for example exchanges of water with fresher coastal boundary currents which are strongly influenced by Arctic outflows (for a full overview of this complex region see Dickson et al. 2008).
6.6 Unresolved Issues and Conclusion

There are many unresolved issues and areas for future improvement in the field of ocean-atmosphere interaction, including those surrounding the global heat budget closure problem; two particular examples follow.
6.6.1 The Southern Ocean Sampling Problem

Observations that can be used to provide surface latent/sensible heat flux estimates are extremely sparse at high latitudes resulting in large uncertainties in the various flux products in the Southern Ocean. A primary factor here is the lack of the combined set of observations (wind speed, air temperature, surface humidity, sea surface temperature) necessary to estimate these flux terms. This is illustrated in Fig. 6.9 which shows all available surface meteorological reports from the COADS dataset with sufficient information to estimate the latent heat flux over the 5 year period from 2000–2004 in January and July. The situation is most severe in winter when we have essentially no information on this key field for assimilation into reanalyses or generation of in situ flux datasets.
Fig. 6.9 All available surface meteorological reports from the COADS dataset with sufficient information to estimate the latent heat flux over the 5 year period from 2000–2004 in July (left panel) and January (right panel)
Fig. 6.10 Annual mean net heat flux (units W m−2) from the ECMWF reanalysis for the period 1979–1993
There is a tendency to think of heat exchange in the Southern Ocean as being relatively uniform in a zonal sense when, at least according to the available reanalysis datasets, there is quite a significant amount of zonally asymmetric structure in the surface forcing. For example, the ECMWF annual mean net heat flux (Fig. 6.10) shows heat loss in the SE Pacific at 50–60°S of −20 W m−2, which contrasts with a heat gain of 10–40 W m−2 at the same latitudes in the Atlantic and Indian sectors of the Southern Ocean. How do we go about determining whether this zonal asymmetry is real with existing/future observing systems?
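One way to quantify the sampling problem is simply to count, per grid cell, the reports that contain the full set of variables needed for the turbulent flux estimates. The sketch below bins hypothetical report positions on a 1° grid; an actual analysis would use ICOADS/COADS reports screened for simultaneous wind speed, air temperature, humidity and sea surface temperature, as in Fig. 6.9.

```python
import numpy as np

def report_density(lats, lons, bin_deg=1.0):
    """Count marine meteorological reports per grid cell.

    lats, lons : 1-D arrays of report positions (degrees).
    Returns a 2-D array of counts on a regular bin_deg grid, which can be
    mapped to show where the terms needed for latent heat flux estimates
    are (and are not) available.
    """
    lat_edges = np.arange(-90.0, 90.0 + bin_deg, bin_deg)
    lon_edges = np.arange(-180.0, 180.0 + bin_deg, bin_deg)
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    return counts

# Illustrative random positions only; real work would use screened ship reports.
rng = np.random.default_rng(1)
lats = rng.uniform(-75.0, 75.0, size=10000)
lons = rng.uniform(-180.0, 180.0, size=10000)
counts = report_density(lats, lons)
print(int(counts.sum()), counts.shape)
```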
6.6.2 Estimating Meridional Overturning Circulation (MOC) Variability from Surface Fluxes

The surface fluxes of heat and freshwater each act to modify the density of the ocean surface layer via their impact on temperature and salinity. Cooling of the ocean surface and net freshwater loss serve to increase the density as they result in
a reduction in temperature and increase of salinity (the converse holds for ocean warming and freshwater gain). The combined effect of the heat and freshwater exchanges can be expressed in terms of the surface density flux (also referred to as the buoyancy flux). Variations in the density flux at high latitudes have potentially significant implications for European climate as they modify the amount of dense water formed in deep convection regions (Grist et al. 2007, 2008) and consequently the overturning circulation of the North Atlantic.

The impact of the air-sea density flux on the amount of water formed in different density classes can be determined using water mass transformation theory (Walin 1982) and these techniques have been employed in many model studies (e.g. Marsh et al. 2005). A modification of this method has recently been used to estimate surface forced variability in the North Atlantic overturning circulation (Grist et al. 2009; Josey et al. 2009). The method has been shown to provide useful estimates of the MOC variability in the range 35–65°N with the HadCM3 coupled climate model and has been applied using NCEP/NCAR reanalysis flux fields to estimate surface forced variability in the mid-high latitude North Atlantic for the past 45 years. The variability of the MOC at latitude 55°N obtained using this technique is shown in Fig. 6.11. The figure reveals a tendency for an anomalously high overturning circulation, by about 1–2 Sv, from the late 1970s to the late 1990s. This period coincides with the prolonged positive phase of the North Atlantic Oscillation and may indicate that surface forcing associated with this mode plays a significant role in determining the strength of the circulation at this latitude. From 2000 onwards, there is some indication of a weakening of the transport which probably reflects natural variability. Further work is planned to refine the method, which has the potential to provide valuable complementary information on circulation variability at mid-high latitudes to that obtained from the Rapid mooring array at 26°N.
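For readers wishing to experiment with this approach, the sketch below combines a commonly used approximate form of the surface density flux (the chapter's Eq. 6.10 is not reproduced here, and constant expansion/contraction coefficients are assumed for simplicity) with a Walin (1982) style binning of the flux into surface density classes to give transformation rates in Sverdrups. It is intended only to convey the structure of the calculation, not the specific implementation of Grist et al. (2009) or Josey et al. (2009).

```python
import numpy as np

RHO0, CP = 1026.0, 3990.0      # reference density (kg m-3), heat capacity (J kg-1 K-1)
ALPHA, BETA = 2.0e-4, 7.6e-4   # illustrative thermal expansion (1/K) and haline contraction (1/psu)

def surface_density_flux(qnet, e_minus_p, sss):
    """Surface density flux (kg m-2 s-1), positive for ocean density gain.

    qnet      : net downward heat flux (W m-2)
    e_minus_p : evaporation minus precipitation (m s-1)
    sss       : sea surface salinity (psu)
    Constant ALPHA/BETA are a simplification; in practice they depend on
    the local temperature and salinity.
    """
    return -(ALPHA / CP) * qnet + RHO0 * BETA * sss * e_minus_p

def transformation_rate(density_flux, sigma, area, sigma_bins):
    """Walin-style water mass transformation (Sv) in surface density classes.

    density_flux : 2-D field from surface_density_flux (kg m-2 s-1)
    sigma        : surface density anomaly field (kg m-3) defining outcrops
    area         : grid-cell areas (m^2)
    sigma_bins   : 1-D array of density-class edges (kg m-3)
    """
    dsig = np.diff(sigma_bins)
    rates = np.zeros(dsig.size)
    for i in range(dsig.size):
        in_class = (sigma >= sigma_bins[i]) & (sigma < sigma_bins[i + 1])
        # Area integral of the density flux over the outcrop, per unit density.
        rates[i] = np.nansum(density_flux[in_class] * area[in_class]) / dsig[i]
    return rates / 1.0e6  # m3 s-1 -> Sv

# Tiny illustrative example (values invented).
q = np.array([[-150.0, -50.0], [10.0, 100.0]])
emp = np.array([[1.0e-8, 5.0e-9], [0.0, -5.0e-9]])
s = np.array([[35.2, 35.0], [34.8, 34.5]])
sig = np.array([[27.6, 27.2], [26.8, 26.0]])
area = np.full((2, 2), 1.0e10)
f = surface_density_flux(q, emp, s)
print(transformation_rate(f, sig, area, np.arange(25.5, 28.5, 0.5)))
```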
Fig. 6.11 Reconstruction of the maximum surface forced North Atlantic overturning circulation anomaly (units Sverdrup, 1 Sv = 10^6 m3 s−1) at 55°N using density fluxes determined from the NCEP/NCAR reanalysis. Details of the method are given in Josey et al. (2009), the different lines are estimates based on surface flux fields integrated over 6 years (dash-dot line), 10 years (solid line) and 15 years (dashed line)
In conclusion, the main aim of this paper has been to provide an overview of the air-sea fluxes of heat, freshwater and momentum, focusing on methods used to determine these fluxes and their role in the wider climate system. The intention is to provide a firm basis for future studies which seek to evaluate the importance of air-sea fluxes for operational oceanography. This is a rapidly developing field, as highlighted by the other papers in this volume, and at present the relative importance of surface fluxes as opposed to other processes in obtaining short range (i.e. up to 1 week) ocean forecasts is a matter of debate and will depend on the region and particular timescales being considered. It is to be expected that surface fluxes will prove key to obtaining reliable forecasts of, for example, ocean mixed-layer depth or density structure. Significant progress in this area is likely over the next few years and will benefit from evaluations of the accuracy of surface flux datasets (in particular from numerical weather prediction models) being carried out in a wider climate context beyond operational oceanography. Developments in the observing network, in particular the advent of Argo and the increasing number of surface flux reference sites, will enable such evaluations. An exciting recent development has been the deployment, for the first time, in March 2010 of a surface flux buoy in the Southern Ocean (http://imos.org.au/sofs.html). Such deployments in regions previously unsampled with high quality surface flux instrumentation promise major advances in our understanding of air-sea interaction processes and a better picture of how transfers across the ocean-atmosphere interface influence the climate system.

Acknowledgements The research summarised here is the result of efforts by a very broad community and I would like to thank the many people with whom I've discussed ocean-atmosphere interaction over the years. In particular, I would like to express my gratitude to Peter Taylor for guiding my thinking through much of my research career and to the UK Natural Environment Research Council for funding much of my research activity. In addition, I am grateful for many helpful comments on the manuscript by the anonymous reviewer and by members of the GODAE Summer School, in particular Cynthia Bluteau and Stephanie Downes.
References Adler RF, Huffman GJ, Chang A, Ferraro R, Xie P, Janowiak J, Rudolf B, Schneider U, Curtis S, Bolvin D, Gruber A, Susskind J, Arkin P (2003) The Version 2 Global Precipitation Climatology Project (GPCP) Monthly Precipitation Analysis (1979-Present). J Hydrometeorol 4:1147–1167 Andersson A, Fennig K, Klepp C, Bakan S, Graßl H, Schulz J (2010) The Hamburg ocean atmosphere parameters and fluxes from satellite data–HOAPS-3. Earth Syst Sci Data Discuss 3:143–194. doi:10.5194/essdd-3-143-2010 Barnston AG, Livezey RE (1987) Classification, seasonality and persistence of low-frequency atmospheric circulation patterns. Mon Wea Rev 115:1083–1126 Berry DI, Kent EC (2009) A new air-sea interaction gridded dataset from ICOADS with uncertainty estimates. Bull Am Meteor Soc 90:645–656. doi:10.1175/2008BAMS2639.1 Bignami F, Marullo S, Santoleri R, Schiano ME (1995) Longwave radiation budget in the Mediterranean Sea. J Geophys Res 100(C2):2501–2514
Böning CW, Dispert A, Visbeck M, Rintoul SR, Schwarzkopf FU (2008) The response of the Antarctic circumpolar current to recent climate change. Nat Geosci 1:864–869 Boyer T, Levitus S, Antonov J, Locarnini R, Mishonov A, Garcia H, Josey SA (2007) Changes in freshwater content in the North Atlantic Ocean 1955–2006, Geophys Res Lett 34(16):L16603. doi:10.1029/2007GL030126 Bunker AF (1976) Computations of surface energy flux and annual air-sea interaction cycles of the North Atlantic Ocean. Mon Wea Rev 104:1122–1140 Ciasto LM, Thompson DWJ (2008) Observations of large-scale ocean-atmosphere interaction in the Southern Hemisphere. J Clim 21(6):1244–1259 Clark NE, Eber L, Laurs RM, Renner JA, Saur JFT (1974) Heat exchange between ocean and atmosphere in the eastern North Pacific for 1961–71. NOAA Tech Rep NMFS SSRF-682, U.S. Department of Commerce, Washington, p€108 da Silva AM, Young CC, Levitus S (1994) Anomalies of directly observed quantities, vol€2. Atlas of Surface Marine Data, NOAA Atlas NESDIS 2, p€419 Dickson R, Lazier J, Meincke J, Rhines P (1996) Long-term coordinated changes in the convective activity of the North Atlantic. In: Willebrand DAJ (ed) Decadal climate variability: dynamics and predictability. Springer, Berlin, p€211–262 Dickson R, Hansen B, Rhines P (eds) (2008) Arctic-Subarctic Ocean Fluxes (ASOF). Springer, Dordrecht Fairall CW, Bradley EF, Hare JE, Grachev AA, Edson JB (2003) Bulk parameterization of air-sea fluxes: updates and verification for the COARE algorithm. J Clim 16:571–591 Gamiz-Fortis SR, Sutton RT (2007) Quasi-periodic fluctuations in the Greenland-Iceland-Norwegian Seas region in a coupled climate model. Ocean Dyn 57:541–557 Gilman C, Garrett C (1994) Heat flux parameterizations for the Mediterranean Sea: the role of atmospheric aerosols and constraints from the water budget. J Geophys Res 99:5119–5134 Grist JP, Josey SA (2003) Inverse analysis adjustment of the SOC air–sea flux climatology using ocean heat transport constraints. J Clim 16:3274–3295 Grist JP, Josey SA, Sinha B (2007) Impact on the ocean of extreme Greenland sea heat loss in the HadCM3 coupled ocean-atmosphere model. J Geophys Res 112:C04014. doi:10.1029/2006JC003629 Grist JP, Josey SA, Sinha B, Blaker AT (2008) Response of the Denmark strait overflow to nordic seas heat loss. J Geophys Res 113:C09019. doi:10.1029/2007JC004625 Grist JP, Marsh RA, Josey SA (2009) On the relationship between the North Atlantic meridional overturning circulation and the surface-forced overturning stream function. J Clim 22(19):4989–5002. doi:10.1175/2009JCLI2574.1 Grist JP, Josey SA, Marsh R, Good SA, Coward AC, deCuevas BA, Alderson SG, New AL, Madec G (2010) The roles of surface heat flux and ocean heat transport convergence in determining Atlantic Ocean temperature variability. Ocean Dyn 60(4):771–790. doi: 10.1007/s10236-010-0292-4 Gulev S, Josey SA, Bourassa M, Breivik L-A, Cronin MF, Fairall C, Gille S, Kent EC, Lee CM, McPhaden MJ, Monteiro PMS, Schuster U, Smith SR, Trenberth KE, Wallace D, Woodruff SD (2010) Surface energy and CO2 fluxes in the Global Ocean-Atmosphere-Ice System. Plenary White Paper in Proceedings of the “OceanObs’09: Sustained Ocean Observations and Information for Society” Conference. ESA Publication WPP-306, Venice, Italy, 21–25 Sept 2009 Hadfield RE, Wells NC, Josey SA, JJ-M Hirschi (2007) On the accuracy of North Atlantic temperature and heat storage fields from Argo. J Geophys Res 112:C01009. 
doi:10.1029/2006JC003825 Harrison DE (1989) On climatological monthly mean wind stress and wind stress curl fields over the World Ocean. J Clim 2(1):57–70 Hellerman S, Rosenstein M (1983) Normal monthly wind stress over the World Ocean with error estimates. J Phys Oceanogr 13:1093–1104 Hurrell JW (1995) Decadal trends in the North Atlantic oscillation regional temperatures and precipitation. Science 269:676–679 Hurrell JW, Kushnir Y, Visbeck M, Ottersen G (2003) An overview of the North Atlantic oscillation. In: Hurrell JW, Kushnir Y, Ottersen G, Visbeck M (Eds) The North Atlantic oscillation:
climate significance and environmental impact. Geophysical Monograph Series. American Geophysical Union, Washington D.C., p€134 IPCC (2007) Climate change 2007: the physical science basis. Contribution of Working Group I to the 4th assessment report of the inter-governmental panel on climate change. Cambridge University Press, Cambridge Isemer H-J, Willebrand J, Hasse L (1989) Fine adjustment of large scale air-sea energy flux parameterizations by direct estimates of ocean heat transport. J Clim 2:1173–1184 Josey SA (2003) Changes in the heat and freshwater forcing of the eastern Mediterranean and their influence on deep water formation. J Geophys Res 108(C7):3237. doi:10.1029/2003JC001778 Josey SA, Arimoto N (1992) The colour gradient in M31: evidence for disc formation by biased infall? Astron Astrophys 255:105 Josey SA, Tayler RJ (1991) The oxygen yield and infall history of the solar neighbourhood. Mon Not R Astron Soc 251:474 Josey SA, Marsh R (2005) Surface freshwater flux variability and recent freshening of the North Atlantic in the eastern Subpolar Gyre. J Geophys Res 110:C05008. doi:10.1029/2004JC002521 Josey SA, Smith SR (2006) Guidelines for evaluation of Air-Sea heat, freshwater and momentum flux datasets, CLIVAR Global Synthesis and Observations Panel (GSOP) White Paper, July 2006, pp€14. http://www.clivar.org/organization/gsop/docs/gsopfg.pdf Josey SA, Kent EC, Taylor PK (1995) Seasonal variations between sampling and classical mean turbulent heat flux estimates in the eastern North Atlantic. Annal Geophys 13:1054–1064 Josey SA, Oakley D, Pascal RW (1997) On estimating the atmospheric longwave flux at the ocean surface from ship meteorological reports. J Geophys Res 102(C13):27,961–27,972 Josey SA, Kent EC, Taylor PK (1998) The Southampton Oceanography Centre (SOC) oceanatmosphere heat, momentum and freshwater flux atlas. Southampton Oceanography Centre Report No. 6, Southampton, UK, p€30 Josey SA, Kent EC, Taylor PK (1999) New insights into the ocean heat budget closure problem from analysis of the SOC air–sea flux climatology. J Clim 12:2856–2880 Josey SA, Kent EC, Sinha B (2001) Can a state of the art atmospheric general circulation model reproduce recent NAO related variability at the Air-Sea interface? Geophys Res Lett 28(24):4543–4546 Josey SA, Kent EC, Taylor PK (2002) On the wind stress forcing of the ocean in the SOC climatology: comparisons with the NCEP/NCAR, ECMWF, UWM/COADS and Hellerman and Rosenstein datasets. J Phys Oceanogr 32(7):1993–2019 Josey SA, Pascal RW, Taylor PK, Yelland MJ (2003) A new formula for determining the atmospheric longwave flux at the ocean surface at mid-high latitudes. J Geophys Res 108(C4). doi:10.1029/2002JC001418 Josey SA, Grist JP, Marsh RA (2009) Estimates of meridional overturning circulation variability in the North Atlantic from surface density flux fields. J Geophys Res—Oceans. 114:C09022. doi:10.1029/2008JC005230 Kent EC, Berry DI (2005) Quantifying random measurement errors in voluntary observing ships’ meteorological observations. Int J Climatol 25(7):843–856. doi:10.1002/joc.1167 Large W, Yeager S (2009) The global climatology of an interannually varying air-sea flux data set. Clim Dynamics. doi:10.1007/s00382-008-0441-3 Levitus S, Antonov JI, Boyer TP, Locarnini RA, Garcia HE, Mishonov VA (2009) Global ocean heat content 1955–2008 in light of recently revealed instrumentation problems. Geophys Res Lett 36:L07608. 
doi:10.1029/2008GL037155 Lozier MS, Leadbetter S, Williams RG, Roussenov V, Reed MSC, Moore NJ (2008) The spatial pattern and mechanisms of heat-content change in the North Atlantic. Science 319(5864):800– 803. doi:10.1126/science.1146436 Marsh R, Josey SA, Nurser AJG, de Cuevas BA, Coward AC (2005) Water mass transformation in the North Atlantic over 1985–2002 simulated in an eddy-permitting model. Ocean Sci 1:127–144 Marsh R, Josey S, de Cuevas B, Redbourn L, Quartly G (2008) Mechanisms for recent warming of the North Atlantic: insights with an eddy-permitting model. J Geophys Res 113:C04031
Myers P, Josey S, Wheler B, Kulan N (2007) Interdecadal variability in labrador sea precipitation minus evaporation and salinity. Prog Oceanogr 73(3–4):341–357 Pascal RW, Josey SA (2000) Accurate radiometric measurement of the atmospheric longwave flux at the sea surface. J Atmos Oceanic Technol 17(9):1271–1282 Philander SGH (1990) El Nino, La Nina at the Southern Oscillation. Academic Press, San Diego Pierce DW, Barnett TP, AchutaRao KM, Gleckler PJ, Gregory JM, Washington WM (2006) Anthropogenic warming of the oceans: observations and model results. J Clim 19(10):1873–1900 Pinker RT, Wang H, Grodsky1 SA (2009) How good are ocean buoy observations of radiative fluxes? Geophys Res Lett 36:L10811. doi:10.1029/2009GL037840 Reed RK (1977) On estimating insolation over the ocean. J Phys Oceanogr 7:482–485 Renfrew IA, Moore GWK, Guest PS, Bumke K (2002) A comparison of surface-layer and surface turbulent-flux observations over the Labrador Sea with ECMWF analyses and NCEP reanalyses. J Phys Oceanogr 32:383–400 Rhines PB, Hakkinen S, Josey SA (2008) Is oceanic heat transport significant in the climate system? In: Dickson R, Hansen B, Rhines P (eds) Arctic-Subarctic Ocean Fluxes. Springer, Berlin, p.€87–110 Send U, Weller R, Wallace D, Chavez F, Lampitt R, Dickey T, Honda M, Nittis K, Lukas R, McPhaden M, Feely R (2009) OceanSITES. Community White Paper, Oceanobs’09 Smith SD (1988) Coefficients for sea surface wind stress, heat flux and wind profiles as a function of wind speed and temperature. J Geophys Res 93:15,467–15,474 Smith S et al (2010) The data management system for the shipboard automated meteorological and oceanographic system (SAMOS) initiative. Community White Paper in proceedings of the “OceanObs’09: Sustained Ocean Observations and Information for Society” Conference. ESA Publication WPP-306, Venice, Italy, 21–25 Sept 2009 Stammer D, Ueyoshi K, Köhl A, Large WB, Josey S, Wunsch C (2004) Estimating air-sea fluxes of heat, freshwater and momentum through global ocean data assimilation. J Geophys Res 109:C05023. doi:10.1029/2003JC002082 Stott PA, Sutton RT, Smith DM (2008) Detection and attribution of Atlantic salinity changes. Geophys Res Lett 35:L21702. doi:10.1029/2008GL035874 Taylor PK, Yelland MJ (2001) The dependence of sea surface roughness on the height and steepness of the waves. J Phys Oceanog 31:572–590 Taylor PK, Bradley EF, Fairall CW, Legler L, Schulz J, Weller RA, White GH (2001) Surface fluxes and surface reference sites. In: Koblinsky CJ, Smith NR (eds) Observing the Oceans in the 21st Century. GODAE Project Office/Bureau of Meteorology, Melbourne, p€177–197 Tomita H, Kubota M, Cronin MF, Iwasaki S, Konda M, Ichikawa H (2010) An assessment of surface heat fluxes from J-OFURO2 at the KEO/JKEO sites. J Geophys Res-Oceans 115:13 Trenberth KE, Caron JM (2001) Estimates of meridional atmosphere and ocean heat transports. J Clim 14:3433–3443 Trenberth KE, Dole R, Xue Y, Onogi K, Dee D, Balmaseda M, Bosilovich M, Schubert S, Large W (2009) Atmospheric reanalyses: a major resource for ocean product development and modeling. Community White Paper, Oceanobs’09 Walin G (1982) On the relation between sea-surface heat flow and thermal circulation in the ocean. Tellus 34:187–195 Weller RA, Bradley EF, Edson JB, Fairall CW, Brooks I, Yelland MJ, Pascal RW (2008) Sensors for physical fluxes at the sea surface: energy, heat, water, salt. 
Ocean Sci 4:247–263 Wells NC, Josey SA, Hadfield RE (2009) Towards closure of regional heat budgets in the North Atlantic using Argo floats and surface flux datasets Ocean Sci 59–72. SRef-ID:1812-0792/ os/2009-5-59 WGASF (2000) Intercomparison and validation of ocean-atmosphere energy flux fields—Final report of the Joint WCRP/SCOR Working Group on Air–Sea Fluxes(WGASF) In: Taylor PK (ed) WCRP-112, WMO/TD-1036, World Climate Research Programme, Geneva. p€306 Woodruff SD, Slutz RJ, Jenne RL, Steurer PM (1987) A comprehensive ocean-atmosphere data set. Bull Am Meteor Soc 68:1239–1250
Worley SJ, Woodruff SD, Reynolds RW, Lubker SJ, Lott N (2005) ICOADS release 2.1 data and products. Int J Climatol 25:823–842 Worley SJ, Woodruff SD, Lubker SJ, Ji Z, Freeman JE, Kent EC, Brohan P, Berry DI, Smith SR, Wilkinson C, Reynolds RW (2009) The role of ICOADS in the sustained ocean Observing System. Community White Paper, Oceanobs’09 Yu L, Weller RA (2007) Objectively analyzed air-sea flux fields for the global ice-free oceans (1981–2005). Bull Am Meteor Soc 88:527–539
Chapter 7
Coastal Tide Gauge Observations: Dynamic Processes Present in the Fremantle Record

Charitha Pattiaratchi
Abstract Coastal sea level variability occurs over timescales ranging from hours to centuries. Globally, the astronomical forces of the Sun and the Moon are the dominant forcing, which results in tidal variability with periods of 12 and 24 h. In many regions the effects of the tides dominate the water level variability; however, in regions where the tidal effects are small, other processes also become important in determining the local water level. In this paper, sea level data from Fremantle (tidal range ~0.5 m), which has one of the longest time series records in the southern hemisphere, and from other sea level recording stations in Western Australia are presented to highlight the different processes, ranging from seiches, tsunamis, tides, storm surges and continental shelf waves to annual and inter-annual variability. As the contributions from each of these processes are of the same order of magnitude, the study of sea level variability in the region is particularly interesting and reveals both local and remote forcing.
7.1 Introduction

Coastal regions experience a rise and fall of sea level which varies at timescales of hours, days, weeks, months, years and so on, governed by the astronomical tides, meteorological conditions, local bathymetry and a host of other factors. An overview of these processes may be found in Pugh (1987, 2004) and Boon (2004). Globally, the astronomical forces of the Sun and the Moon are the dominant forcing, which results in tidal variability with periods of 12 and 24 h. In many regions, the effects of these tides dominate the water level variability; however, in regions where the tidal effects are small, other processes become important in determining the local water level. In this paper, sea level data from Fremantle (Fig. 7.1), which
Fig. 7.1 Location of tide gauges used in the present study and the track of the tropical cyclone Frank
has one of the longest time series records in the southern hemisphere, are presented to highlight the different processes, ranging from seiches, tsunamis, tides, storm surges and continental shelf waves to annual and inter-annual variability (Table 7.1). It should be noted that some processes which are not evident in the Fremantle record, but may be present in other tide gauge records, are not included in this paper. These include storm surges (due to local changes in atmospheric pressure and winds), sea level changes due to ocean eddy interactions with the coast and wave set-up. In Fremantle, it is difficult to separate the surge effects due to local and remote forcing (Eliot and Pattiaratchi 2010) and these are therefore included in the section on continental shelf waves. The auto spectrum of water levels recorded at Fremantle over three years indicated several peaks, ranging from hours to seasonal timescales

Table 7.1 Decomposition of processes observable at the Fremantle tide gauge and their approximate amplitudes

Process                     Duration     Scale (m)   Reference
Wave action                 2–20 s       ~5          Lemm et al. (1999)
Wave set-up                 5–30 min     ~0.3        Bode and Hardy (1997)
Seiches                     30–90 min    ~0.2        Ilich (2006)
                                         ~0.2        Reid (1990)
Pressure surge              1–3 h        ~0.2        Pugh (1987)
Wind set-up                 3–6 h        ~0.8        Easton (1970)
Tidal conditions            12–24 h      *           Pattiaratchi et al. (1997)
Sea breezes                 24 h
Pressure systems (cycle)    1–10 days    ~0.8        Hamon (1966)
Continental shelf waves     3–10 days    ~0.6        Fandry et al. (1984); Pattiaratchi and Buchan (1991)
Oceanic currents            Seasonal     ~0.3
Nodal tide                  18.6 years   ~0.15       Pugh (1987)
Climate variability         Decades      ~0.2        Pariwono et al. (1986)
Climate change              10³+ years   ~10         Wyrwoll et al. (1995)
Fig. 7.2 Spectra of water levels at Fremantle showing the different scales of variability
reflecting these processes (Fig. 7.2). The contributions from each of these processes, which include both direct and remote forcing, to the total sea level variability are of the same order of magnitude and thus equally important. Sea level variability is important for a range of activities including navigation, coastal stability and coastal planning. The significance of coastal sea level change for coastal management has been recognised, both for gradual change and for intermittent fluctuations (Komar and Enfield 1987; Allan et al. 2003). In order to interpret historic patterns of coastal management and predict possible future needs, it is necessary to document both short and long-term trends and fluctuations of sea level.
7.1.1 The Study Region

Fremantle is located along the west coast of Australia at latitude 32°S (Fig. 7.1). Weather systems impacting on the region are dominated by anti-cyclonic high-pressure systems with periodic tropical and mid-latitude depressions and local seasonal sea breezes (Eliot and Clarke 1986). Anticyclones move to the east and pass the coast every 3–10 days (Gentilli 1972). The peak occurrence of mid-latitude depressions is in July and the strongest winds in the system are the north-westerlies (Gentilli 1972; Lemm et al. 1999). Tropical cyclones track down from the Northwest
coast infrequently during late summer and can have a significant impact on the coastline (Eliot and Clarke 1986). The seasonal movement of the high-pressure systems results in a strong seasonality in the wind regime. During the summer southerly winds prevail, whilst in winter there is no dominant wind direction, although the strongest winds are north-westerly during the passage of frontal systems. Sea breezes, which are stronger during the summer, dominate the coastal region, with offshore (westward) winds in the morning and strong (up to 15 ms−1) shore parallel sea breezes commencing around noon and weakening during the night (Pattiaratchi et al. 1997; Masselink and Pattiaratchi 2001). During winter, the region is subject to the passage of mid-latitude depressions and associated frontal systems, and ~30 storm wave events are experienced (Lemm et al. 1999). During the passage of a frontal system, the region is subject to strong winds (up to 25–30 ms−1) from the north through west, which rapidly change direction towards west through southwest and then become progressively more southerly over 12–16 h. South to south-westerly winds gradually weaken over two to three days, and calm, cloud-free conditions prevail for another three to five days prior to the passage of another frontal system.
7.2 Data

The data presented here were recorded at the long-term tide station located at Fremantle and maintained by the WA Department for Planning and Infrastructure. The sampling intervals vary between 2 min and 1 h. In addition, monthly mean data from the same gauge were obtained from the Permanent Service for Mean Sea Level, located at the Proudman Oceanographic Laboratory, Liverpool, UK (www.pol.ac.uk/psmsl/).
7.3 Seiches

A free oscillation in an enclosed or semi-enclosed body of water, similar to the oscillation of a pendulum in which the oscillation continues after the initial force has stopped, is defined as a seiche (Miles 1974). Several factors cause the initial displacement of water from a level surface, and the restoring force is gravity, which tends to maintain a level surface. Once formed, the oscillations are characteristic only of the system's geometry (length and depth) and may persist for many cycles before decaying under the influence of friction or energy leakage. The simplest model of a continental shelf seiche is a standing wave with an antinode at the shoreline and a node at the shelf edge. The period of the seiche is given by four times the travel time from the coast to the shelf edge. For a mean water depth h, shallow water wave theory gives (Pugh 1987):
$T_n = \frac{4L}{(2n-1)\sqrt{gh}}$    (7.1)
Here, n is the number of nodes (n = 1 is the fundamental mode and is also the most common); L is the width of the continental shelf; g is the acceleration due to gravity; and h is the mean water depth. In the auto-spectrum (Fig. 7.2), three seiche periods were identified at Fremantle: 2.8 h, 1 h and 20 min. The seiches have amplitudes between 10 and 40 cm and contained 40–70% of the energy relative to the main 24 h diurnal tidal oscillation. Ilich (2006) found the maximum amplitudes of the 2.8 h and 20 min seiches to be ~45 cm and ~12 cm respectively, although the 45 cm seiche amplitude could be due to the superposition of all three seiches. The width of the continental shelf off Fremantle is ~50 km whilst the mean shelf depth is ~50 m, which yields from Eq. (7.1) a period of ~2.5 h, close to the observed value of 2.8 h. Ilich (2006) found that changes in the direction of the wind stress initiate seiching (Fig. 7.3). In particular: (a) strong wind events with onshore or offshore components initiate seiching at the 1 h and, to a lesser extent, the 20 min period; (b) strong southerly (shore-parallel) events rarely cause excitation; (c) sea breeze patterns occurring for more than two days decrease the spectral energy of the entire spectrum. Continental shelf seiches are also generated by tsunamis (Pattiaratchi and Wijeratne 2009) and are discussed in Sect. 7.4.
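Equation (7.1) is easily evaluated for the Fremantle shelf. Using the shelf width (~50 km) and mean depth (~50 m) quoted above, the fundamental mode period is ~2.5 h, consistent with the observed 2.8 h seiche; the short sketch below also lists the next two modes.

```python
import math

def seiche_period_hours(shelf_width_m, depth_m, n=1):
    """Period (hours) of the n-th mode continental shelf seiche, Eq. (7.1)."""
    g = 9.81
    period_s = 4.0 * shelf_width_m / ((2 * n - 1) * math.sqrt(g * depth_m))
    return period_s / 3600.0

# Fremantle shelf: width ~50 km, mean depth ~50 m (values from the text).
for mode in (1, 2, 3):
    print(mode, round(seiche_period_hours(50e3, 50.0, mode), 2))
```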
Fig. 7.3 Time series of a wind stress; b water level; and c time-frequency diagram for a 15 day period in November 2001 showing that onshore winds (−ve easterly) initiate seiching. (From Ilich 2006)
7.4 Tsunamis
The Indian Ocean region experienced its most devastating natural disaster through the action of a tsunami, resulting from an earthquake off the coast of Sumatra, on 26 December 2004. This was followed by tsunamis in March 2005, June 2006 and July 2007, and tide gauges in Western Australia recorded sea level oscillations related to all four tsunamis, although these did not result in large scale property damage (Pattiaratchi and Wijeratne 2009). The tide gauge data along the west coast indicated that the tsunami waves arrived at Geraldton (0720), Carnarvon (0740) and Fremantle (0740). The initial waves all indicated an increase in the water level, corresponding to leading elevation waves, and the heights along the west coast ranged from 0.33 m at Fremantle to 1.65 m at Geraldton (Fig. 7.4 and Table 7.2). Examination of the residual time series, maximum wave heights, and the elapsed time between the initial and maximum waves indicated that: (1) the maximum wave heights recorded at Carnarvon, Geraldton, and Fremantle (Table 7.2) all exceeded the mean spring tidal range at these locations; (2) at Geraldton, although initial oscillations due to the tsunami waves were observed at 0720 UTC, there was a lag of
Fig. 7.4 Time series of residual sea level from coastal stations located along the west coast of Australia. The dashed line shows the time of the earthquake. (Note: local time is +8 h UTC)
Table 7.2 Characteristics of the 26 December 2004 tsunami as recorded by tide gauges

Station      Arrival time/date (UTC)   Initial wave height   Elapsed time to maximum (wave number)   Maximum wave height
Carnarvon    07:40 26/12/04            0.38 m                15 h 20 m (25)                          1.14 m
Geraldton    07:20 26/12/04            0.13 m                15 h 15 m (19)                          1.65 m
Fremantle    07:40 26/12/04            0.33 m                7 h 20 m (9)                            0.60 m
Maximum wave height is listed as the trough to crest height.
five hours before the highest water level (2.6 m relative to datum) was reached at 1210 GMT, which coincided with the tidal high water (Fig. 7.4). However, the highest waves (trough to crest) were recorded ~10 h later and were associated with a wave group (see Fig. 7.4). The water levels recorded at Geraldton during this event were the highest and lowest levels recorded at this station, which has been in continuous operation for more than 40 years; (3) the residual time series indicated the arrival of a group of waves with higher wave heights at Geraldton some 13–15 h after the arrival of the initial wave (Table 7.2), suggesting a reflected wave from the island of Madagascar or the Mascarene ridge (Pattiaratchi and Wijeratne 2009); and (4) the tsunami set up seiching along the continental shelf with periods of 4 and 2.7 h at Geraldton and Fremantle, respectively (Fig. 7.4). These periods were the same as those excited by the meteorological effects (Sect. 7.3).
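Residual (non-tidal) series such as those in Fig. 7.4 are obtained by removing the tidal signal from the observed record. As a minimal illustration, the sketch below fits the four main Fremantle constituents (the K1, O1, M2 and S2 periods listed later in Table 7.3) by least squares and subtracts them; operational tidal analysis uses many more constituents and established packages, so this is a simplified stand-in rather than the method used to produce Fig. 7.4.

```python
import numpy as np

# Periods (hours) of the four main constituents at Fremantle (Table 7.3).
PERIODS_H = {"K1": 23.93, "O1": 25.82, "M2": 12.42, "S2": 12.00}

def tidal_residual(t_hours, sea_level):
    """Remove a least-squares fit of the main tidal constituents.

    t_hours   : times of observation (hours, monotonic)
    sea_level : observed water levels (m)
    Returns the non-tidal residual (m).
    """
    t = np.asarray(t_hours, float)
    h = np.asarray(sea_level, float)
    cols = [np.ones_like(t)]
    for period in PERIODS_H.values():
        omega = 2.0 * np.pi / period
        cols.extend([np.cos(omega * t), np.sin(omega * t)])
    design = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(design, h, rcond=None)
    return h - design @ coeffs

# Synthetic example: a diurnal-dominated tide plus a slow 0.3 m surge-like signal.
t = np.arange(0.0, 30 * 24.0, 1.0)
tide = 0.165 * np.cos(2 * np.pi * t / 23.93) + 0.052 * np.cos(2 * np.pi * t / 12.42)
surge = 0.3 * np.exp(-((t - 360.0) / 48.0) ** 2)
residual = tidal_residual(t, tide + surge)
print(round(residual.max(), 2))  # close to 0.3
```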
7.5 Tides

Periodic movements which are directly related in amplitude and phase to some periodic geophysical force are defined as tides, and astronomical tides are the most widely recognised phenomena affecting water levels (Pugh 1987). These tides are the harmonic fluctuations of water level developed through the gravitational attraction of astronomical bodies (mainly the sun and moon). On the majority of the world's coastlines there are two tidal cycles per day (i.e. two high and two low waters per day); these are termed semi-diurnal (twice-daily) tides, with a tidal period of 12.42 h. In a few locations (e.g. Gulf of Mexico, Gulf of Thailand), there is only one high and one low water per day; these are known as diurnal (daily) tides and have periods of ~24 h. Spring tides are periods of increased tidal range and occur when the Earth, the Sun, and the Moon are along the same axis such that the gravitational forces of the Moon and the Sun both contribute to the tides. Spring tides occur immediately after the full moon and the new moon. Neap tides are periods of low tidal range which occur when the gravitational forces of the Moon and the Sun are perpendicular to one another (with respect to the Earth). Neap tides occur during the first and third quarters of the moon. As the tide generating forces are related to the periodic gravitational forces of the Moon and the Sun, there are specific periods which may be identified from the equilibrium tide (Pugh 1987). For example, these include the 12.42 h and 24 h periods of the main semi-diurnal and diurnal tides; the lunar month (29.5 days), the period between two successive full or new moons; and the annual cycle due to the changes in the earth's orbit around the Sun. In the longer term, changes in the orbits of the moon and the sun provide 4.45 year and 18.6 year variations in the tides and these are discussed in Sect. 7.9.

The dynamic theory of tides which governs the tidal characteristics of the ocean basins considers the configuration of the ocean basins (width, length, and depth),
frictional forces, Coriolis force, convergence and resonance, and many other variables (Boon 2004). As a result, tides are considered as a series of Amphidromic systems consisting of rotating (Kelvin) waves which rotate around a point where the amplitude of the tide is zero, defined as an Amphidromic point. Due to the influence of the Coriolis force, the Amphidromic systems rotate clockwise in the southern hemisphere and counterclockwise in the northern hemisphere. Close to an Amphidromic point the tidal range is zero and the range increases away from the point (Boon 2004). The tides at Fremantle, which generally are representative of the tides experienced along south-western Australia, are classified as diurnal (Ranasinghe and Pattiaratchi 2000). This is due to the location of a semi-diurnal amphidromic system close to the coast and the diurnal amphidromic system located off the coast of South Africa. The four largest tidal constituents (Pugh 1987) are associated with diurnal and semi-diurnal effects of the sun and moon (Fig. 7.5; Table 7.3). Along south-western Australia, the tide's diurnal component has a range of 0.6 m, and the semidiurnal tide has a range of only 0.2 m. The semidiurnal tidal range is related to the lunar cycle, with the maximum tidal range occurring close
Fig. 7.5 a Time series of water level record at Fremantle for 1996 and b Morlet wavelet analysis of Fremantle tide record showing the alignment of diurnal and semi-diurnal energy during the equinox and out of phase during the solstice

Table 7.3 Principal tidal constituents for Fremantle

Constituent   Amplitude   Period    Description
K1            0.165 m     23.93 h   Principal Lunar Diurnal
O1            0.118 m     25.82 h   Principal Solar Diurnal
M2            0.052 m     12.42 h   Principal Lunar Semi-diurnal
S2            0.047 m     12.00 h   Principal Solar Semi-diurnal
to the full and new moons, and minimum tidal ranges occurring close to the first and last quarters of the lunar cycle (the spring–neap cycle; see above). Diurnal tides are related to the declination angle of the moon's orbital plane. Therefore, the terminology of spring and neap tides is inaccurate in diurnal systems and these are defined as tropic and equatorial tides. For tropic tides (analogous to spring tides for semi-diurnal systems) the tidal range is a maximum when the declination of the moon is a maximum north or south of the equator. For equatorial tides (analogous to neap tides for semi-diurnal systems) the moon is directly above the equator, resulting in a low tidal range. The diurnal and semidiurnal tides are modulated with periods of 13.63 and 14.77 days, respectively. This phase difference of 1.14 days between the two tidal signals modulates the resultant tide over an annual cycle, causing the diurnal and semidiurnal tides to be in phase during the solstice (resulting in a maximum tropic tidal range) and out of phase at the equinox (resulting in a minimum tropic tidal range). This process is illustrated in Figs. 7.5 and 7.6. This means the highest tropic tidal range does not always correspond with the full/new moon cycle, with the daily tidal range varying biannually, with solstice tidal peaks (December–January and June–July) producing a tidal range that is about 20% higher than during equinoctial troughs (February–March and September–October).
Fig. 7.6 a Diurnal and semi-diurnal components of the tide at Fremantle from day 273 (October 1) to day 365 (December 31) in 2001; b water level from the summation of the diurnal and semidiurnal constituents. The moon phases are shown at the top of the figure. (From O'Callaghan et al. 2010)
During the solstice, when the diurnal and semidiurnal tides are in phase, the maximum tidal range corresponds with the full/new moon cycle; during the equinox, the maximum tidal range does not correspond with the full/new moon cycle. Mixed tides occur during equatorial tides closest to the equinox, with two high and low waters commonly observed over a tidal cycle. Hence, in a diurnal tidal system, such as along south-west Australia, definitions such as spring and neap tides do not always relate to phases of the moon, as is the case for semidiurnal tides. Another consequence of the diurnal tides is the seasonal change in the times of high/low water. During the summer, along the south-west Australian coast, low water generally occurs between 4 a.m. and 12 p.m., depending on the phase of the moon, with high water in the evening. As summer progresses, the low water occurs earlier; as winter starts, the low water occurs later at night, becoming progressively earlier in the evening (with high water occurring in the morning).
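The modulation of the daily tidal range described above can be reproduced by summing the four constituents in Table 7.3. In the sketch below the phases are set to zero purely for illustration, so the synthetic record shows the tropic-equatorial modulation of the daily range but not the actual timing of high water at Fremantle.

```python
import numpy as np

# Amplitudes (m) and periods (h) from Table 7.3; phases set to zero for
# illustration only.
CONSTITUENTS = {
    "K1": (0.165, 23.93),
    "O1": (0.118, 25.82),
    "M2": (0.052, 12.42),
    "S2": (0.047, 12.00),
}

def synthesise_tide(days=365.0, dt_hours=0.5):
    """Sum the four main constituents over a year of half-hourly samples."""
    t = np.arange(0.0, days * 24.0, dt_hours)
    eta = np.zeros_like(t)
    for amp, period in CONSTITUENTS.values():
        eta += amp * np.cos(2.0 * np.pi * t / period)
    return t, eta

t, eta = synthesise_tide()
# Daily tidal range: maximum minus minimum within each 24 h window.
n_per_day = int(24 / 0.5)
daily = eta[: (eta.size // n_per_day) * n_per_day].reshape(-1, n_per_day)
daily_range = daily.max(axis=1) - daily.min(axis=1)
print(round(daily_range.min(), 2), round(daily_range.max(), 2))
```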
7.6 Coastal-Trapped Waves

The power spectrum of sea level (Fig. 7.2) indicates a broad peak in energy in the 'weather' band (5–20 days) and these variations are generally due to atmospheric effects. Closer examination and comparison of the tidal residuals with local meteorological data revealed a number of significant tidal residuals that were not fully explained by local synoptic conditions but were a combination of locally generated and remotely generated signals, the former through local changes in atmospheric pressure and local wind. The remote signal is characteristic of a long period coastally trapped shelf wave, travelling anti-clockwise relative to the Australian coast. A coastally trapped wave is defined as a wave that travels parallel to the coast, with maximum amplitude at the coast and decreasing offshore. Examples of these waves include continental shelf waves (CSWs) and internal Kelvin waves (Le Blond and Mysak 1978), which are governed through vorticity conservation (Huyer 1990). Coastally trapped waves need a shallowing interface and may develop a range of modes according to the shelf structure (Tang and Grimshaw 1995). They travel with the coast to the left (right) in the southern (northern) hemisphere. Along the Australian coast, shelf waves propagate anti-clockwise relative to the landmass. The governing equations (neglecting advection and friction) are (Huyer 1990):
$\frac{\partial u}{\partial t} = -g\frac{\partial \eta}{\partial x} + fv$    (7.2)

$\frac{\partial v}{\partial t} = -g\frac{\partial \eta}{\partial y} - fu$    (7.3)
where u and v are the velocities in the x (east) and y (north) directions, η is the displacement of the sea surface and f is the Coriolis parameter. The solutions for
Eqs. 7.2 and 7.3 (together with the continuity equation and appropriate boundary conditions), along a boundary oriented east–west, are given by (Huyer 1990):
$\eta = \eta_0\, e^{-fy/\sqrt{gh}} \cos(kx - \omega t)$    (7.4)
where η₀ is the maximum amplitude at the shoreline, h is the water depth, and k and ω are the wave number and frequency, respectively. This is the equation of a Kelvin wave, propagating along the coastal boundary, with the wave signal reducing in amplitude exponentially with distance offshore. Continental shelf waves (CSWs) depend only on the cross-shelf bathymetry profile, whereas the vertical density profile controls the structure of an internal Kelvin wave (Huyer 1990). The alongshore component of wind stress usually generates CSWs, which are active along the Western Australian coast and were first reported by Hamon (1966). Provis and Radok (1979) demonstrated that these waves propagate anti-clockwise along the south coast of the Australian continent over a maximum distance of 4000 km at speeds of 5–7 m s−1 (see also Eliot and Pattiaratchi 2010). Along the west Australian coastline, the continental shelf waves are generated through the passage of mid-latitude low-pressure systems and tropical cyclones. The continental shelf waves can be identified from the sea level records by low-pass filtering (i.e. removal of the tidal component). An example is shown in Fig. 7.7 for tidal records from Geraldton, Fremantle and Albany (Fig. 7.1). Several CSWs with amplitudes ranging from 0.1 to 0.5 m can be identified. For example, between days 290 and 295, an increase of ~0.5 m in the sub-tidal water level was observed at Geraldton. The same variation in water level signal was seen at Fremantle and Albany,
Fig. 7.7 Low-frequency water levels at a Geraldton, b Fremantle, and c Albany for days 275–365 in 2001 showing the presence of continental shelf waves. (From O'Callaghan et al. 2007)
and could be attributed to the passage of a CSW. The correlation coefficients between sub-tidal water levels at these three locations were all greater than 0.8, despite the observations being several hundred kilometers apart. The propagation time of the CSW between Geraldton and Fremantle was 23 h, and between Fremantle and Albany it was 17 h, yielding a mean propagation speed of ~500 km day−1 (~6 m s−1). The periods of the continental shelf waves range between 3 and 10 days and correspond to the passage of synoptic systems from west to east across the west Australian coastline. Tropical cyclones are intense low pressure systems which form over warm ocean waters at low latitudes and are associated with strong winds, torrential rain and storm surges (in coastal areas). They may cause extensive damage as a result of strong winds and flooding (caused by either heavy rainfall and/or coastal storm surges). The impacts of tropical cyclones on the North-West region of Australia are well known, with several severe cyclones impacting this region over the past few years. The most noticeable impacts of these cyclones are normally restricted to the region of impact of the cyclone, and hence the direct effect of cyclones on south-western Australia is rare. Fandry et al. (1984) identified 1 to 2 m amplitude peaks in sea level propagating southwards with speeds ranging between 400 and 600 km day−1. These were associated with tropical cyclones travelling southward and were attributed to a resonance phenomenon occurring when the southward component of the cyclone's translation speed was close to the propagation speed of the southward travelling continental shelf wave. Sea level records at Fremantle indicate remote forcing due to tropical cyclones. Comparison of the low frequency component of sea level records along the west and south coasts of Western Australia with the occurrence of tropical cyclones in the North-West Shelf region has revealed that every tropical cyclone, irrespective of its severity and path, generated a southward propagating sea level signal, or continental shelf wave (Eliot and Pattiaratchi 2010). The wave can be identified in the coastal sea level records, initially as a decrease in water level 1–2 days after the passage of the cyclone, and has a period of about 10 days. As an example, the water level record at Fremantle for the period 1–19 December 1995 is shown in Fig. 7.8. Tropical cyclone Frank was declared a category 1 cyclone on 7 December and
Fig. 7.8 Sea level record at Fremantle (thin black line) during December 1995 showing the low-frequency water level variation (thick line) induced by Tropical Cyclone Frank
developed into a category 4 cyclone by 11 December, crossing the coastline near Carnarvon on 12 December. The continental shelf wave becomes evident on 8 December, when the water level starts to decrease, reaching a minimum level on 10 December and a maximum peak on 14 December. The wave height (trough to crest) was 0.55 m, higher than the tidal range during this time (Fig. 7.8).
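The sub-tidal (low-frequency) signal used to identify continental shelf waves can be extracted from a tide-gauge record with a simple low-pass filter. The sketch below is illustrative only: the hourly record is synthetic, and the fourth-order Butterworth filter with a 38-hour cutoff is one reasonable choice for removing the diurnal and semidiurnal tides, not the specific procedure used in the studies cited above.

```python
# Minimal sketch: isolating the sub-tidal water level from an hourly sea level record.
# The record below is synthetic (tide + a slow 5-day oscillation standing in for a
# continental shelf wave + noise); real applications would use observed data.
import numpy as np
from scipy.signal import butter, filtfilt

dt = 1.0                                   # sampling interval (hours)
t = np.arange(0, 90 * 24, dt)              # ~90 days of hourly values

tide = 0.3 * np.sin(2 * np.pi * t / 23.93) + 0.15 * np.sin(2 * np.pi * t / 12.42)
csw = 0.25 * np.sin(2 * np.pi * t / (5 * 24))
eta = tide + csw + 0.02 * np.random.default_rng(0).standard_normal(t.size)

# 4th-order low-pass Butterworth filter with a 38-hour cutoff period, applied
# forwards and backwards (filtfilt) so that no phase shift is introduced.
cutoff = 1.0 / 38.0                        # cutoff frequency (cycles per hour)
nyquist = 0.5 / dt
b, a = butter(4, cutoff / nyquist, btype="low")
eta_subtidal = filtfilt(b, a, eta)         # low-frequency (sub-tidal) water level
```

Cross-correlating the filtered records from two stations (e.g. Geraldton and Fremantle) would then give the lag, and hence an estimate of the propagation speed, of the shelf wave.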
7.7 Seasonal Changes

Mean sea level varies over an annual cycle with an average range of 0.22 m, with water levels reaching a maximum in May–June and a minimum in October–November (Fig. 7.9). This variation is attributed to changes in the strength of the major ocean current in the region, the Leeuwin Current (Thompson 1984; Pattiaratchi and Buchan 1991; Feng et al. 2004). The Leeuwin Current is a shallow (<300 m), narrow (<100 km wide) poleward boundary current flowing off the West Australian coast. It transports relatively warm, lower salinity water of tropical origin southward, generally along the 200 m depth contour (Pattiaratchi and Woo 2009). During October to March the Current is weaker as it flows against the maximum southerly winds, whereas between April and August the Current is stronger as the southerly winds are weaker (Godfrey and Ridgway 1985). The Leeuwin Current is driven by the large-scale density field in the eastern Indian Ocean and is in geostrophic balance (Woo and Pattiaratchi 2008); hence, along the Western Australian coast, the southward flow generates onshore motion. This onshore motion, which is dependent on the strength of the current, creates a set-up of the water level at the coast. This channels the flow along the shelf edge, with a sea surface gradient balancing the tendency for shoreward motion. Thus sea level is higher when the Leeuwin Current is stronger (April to August, due to lower southerly wind stress) and lower between October and January when the Current is weaker (higher southerly wind stress).
Fig. 7.9 Mean monthly sea levels at Fremantle for the period 1943–1988
Fig. 7.10 Time series of mean annual sea levels at Fremantle for the period 1960–1990
7.8 Inter-Annual Changes

Inter-annual changes in sea level, with amplitudes up to 20 cm (Fig. 7.10), are also linked to the strength of the Leeuwin Current (Sect. 7.7). During La Nina events the Leeuwin Current is stronger (higher sea level), whilst during El Nino events the Current is weaker (lower sea level). This also implies a strong correlation between mean sea level and the Southern Oscillation Index (SOI), an index reflecting El Nino/La Nina events (Pattiaratchi and Buchan 1991; Feng et al. 2004). Annual and inter-annual variability is mainly due to changes in the volume transport of oceanic current systems (the Leeuwin Current) and to the El Nino Southern Oscillation (ENSO). The relationship between the annual mean sea level and the SOI, a measure of ENSO, varies over time. From 1989 to 1998, the sea level and SOI signals were virtually identical in relative amplitude and phase, with a 1 unit change in SOI representing a 13 mm change in mean sea level. The relationship is less clear during the period 1920–1940, which exhibits a poor correlation between SOI and mean annual sea level. This period corresponds to a time when the SOI was almost invariant but which experienced the highest changes in mean water level over the past 100 years. This indicates that processes other than the SOI signal are contributing to the variability in mean sea level.
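The quoted relationship between annual mean sea level and the SOI can be estimated with a simple linear regression. The sketch below uses synthetic data purely to illustrate the calculation: the 13 mm per SOI unit used to generate it is taken from the text, but the record length, scatter and variable names are arbitrary assumptions.

```python
# Illustrative sketch: regression of annual-mean sea level on the annual-mean SOI.
# Both series here are synthetic; in practice they would come from observations.
import numpy as np

rng = np.random.default_rng(1)
soi = rng.normal(0.0, 8.0, 40)                       # synthetic annual-mean SOI
msl_mm = 13.0 * soi + rng.normal(0.0, 30.0, 40)      # synthetic sea level anomaly (mm)

slope, intercept = np.polyfit(soi, msl_mm, 1)        # mm of sea level per SOI unit
r = np.corrcoef(soi, msl_mm)[0, 1]
print(f"slope = {slope:.1f} mm per SOI unit, r = {r:.2f}")
```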
7.9 Decadal Variations Due to Tides

Tides are modulated by variations in the amplitude of the diurnal or semi-diurnal tide, associated with longer-period relative motions of the earth, moon and sun (Pugh 1987). The effects of long-term tidal modulation have been identified in different regions, with the two main signals being the 18.61-year lunar nodal cycle and the 8.85-year cycle of lunar perigee (Boon 2004; Shaw and Tsimplis 2010). Although there are fluctuations in gravitational potential associated with these motions, the direct tidal response to forcing at these time scales is theoretically small and is of the order of 4% for the semi-diurnal tide (Pugh 1987). Higher tidal modulation at the
Fig. 7.11 The 99% (grey) and 95% storm surge exceedance curves showing the 18.61-year nodal cycle. (Modified from Eliot (2010))
18.6-year cycle has been identified in diurnal regions, ranging between ±15% and ±20% of the tidal constituent (Pugh 1987). Thus at Fremantle, located in a diurnal tidal regime, the influence of the 18.6-year cycle could be an important component of the modulation of the tide over this time scale. The 18.61-year lunar nodal cycle arises from the variation in the moon's orbit. The moon, in making a revolution around the earth once each month, passes from a position of maximum angular distance north (23.5° ± 5°) of the equator to a position of maximum angular distance south (23.5° ± 5°) of the equator during each half month. This is termed a tropical month and has a period of 27.32 days (Pugh 1987). This angular distance is defined as the lunar declination, and twice a month the moon crosses the equator. The cycle of variation of the maximum declination from 18.5° (23.5° − 5°) to 28.5° (23.5° + 5°) is defined as the nodal cycle and has a period of 18.61 years. This cycle modulates the tide-generating forces and in particular influences the diurnal tides. Analysis of the tidal record from Fremantle indicates that the lunar nodal cycle has a range of ~15 cm in the region (Fig. 7.11), which is comparable to a number of the other processes discussed in this chapter (Table 7.1) and is ~25% of the mean tidal range. Thus it forms a significant component of the sea level variability on decadal time scales. The cycle most recently peaked in 2007 (Fig. 7.11), and thus the region will experience a decreasing effect from this process until 2016–2017. The enhanced magnitude of the nodal modulation in the region has been attributed to the dominance of diurnal tides in the region (Eliot 2010). These decadal changes in tidal modulations have a significant influence on coastal flooding and management.
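As a rough illustration of the effect described above, the nodal modulation of a diurnal tidal amplitude can be approximated as a sinusoid with an 18.61-year period. In the sketch below the ±18% modulation depth, the 2007 reference peak and the function name are illustrative assumptions consistent with the ranges quoted in the text, not the result of a harmonic analysis.

```python
# Rough sketch: approximate nodal modulation of a diurnal tidal amplitude as a
# sinusoid of period 18.61 years peaking in 2007. Illustrative only.
import numpy as np

NODAL_PERIOD = 18.61      # years
PEAK_YEAR = 2007.0        # most recent maximum, as noted in the text
MODULATION = 0.18         # assumed +/-18% modulation (within the quoted 15-20% range)

def nodal_factor(year):
    """Multiplicative factor applied to the mean diurnal amplitude."""
    return 1.0 + MODULATION * np.cos(2.0 * np.pi * (year - PEAK_YEAR) / NODAL_PERIOD)

for yr in (2007, 2012, 2016, 2025):
    print(yr, round(nodal_factor(yr), 3))
```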
7.10 Global Mean Sea Level Processes

Relevant global sea level processes can be considered on two time-scales: (1) those inferred from geological evidence, particularly over the last 20,000 years; and (2) the historic record, largely determined from coastal tide gauge measurements. Sea level rise in the west Australian region over recent geological time frames has been inferred from geological records (Wyrwoll et al. 1995). This behaviour
Fig. 7.12 Sea level event data for Western Australia: depth (m) of sea level indicators against age (years BP), compiled from Huon coral reef, Morley core, Suomi core, Swan River and Abrolhos outcrop data. (From Wyrwoll et al. 1995)
largely corresponds to global analyses of sea level records, with rapid sea level rise subsequent to the last Ice Age, reaching present levels approximately 6,000 years before present and subsequently remaining largely constant; mean sea level has increased by more than 120 m since the last glacial maximum (Fig. 7.12). As a result of global warming due to the enhanced greenhouse effect, mean sea level has been increasing over the past few decades. For example, the mean global sea level rise over the twentieth century is recorded to be 1.1–1.9 mm year−1, whilst the rate of increase since 1993 is of the order of 3 mm year−1 (Church et al. 2004). The majority of this increase has resulted from global warming, with a contribution from melting glaciers. Sea level has been recorded at Fremantle continuously since 1897, and this is the longest sea level record in the southern hemisphere. This record indicates that there has been a mean rate of sea level rise of 1.54 mm per annum (Fig. 7.13). This rate of increase is similar to that observed globally, which has been estimated to range between 1.1 and 1.9 mm per annum (Douglas 2001; Church et al. 2004).
Fig. 7.13 Time series of Fremantle sea level (one year running mean) with the linear trend of 1.54 mm per annum shown as a dashed line
Although there has been an increasing trend over the past 100 years, there have been periods, revealed when the linear trend is removed, where the rate of mean sea level change varied with time. These variations were dominated by the inter-annual variability of sea level linked to the ENSO phenomenon. From 1900 to 1952 there were cyclic periods of sea level increase and decrease ranging between 10 and 14 years. Between 1952 and 1991 there was a decreasing trend which, in combination with the mean sea level rise, resulted in almost constant mean sea level. A reversal of this trend occurred between 1991 and 2004, producing an apparent rapid mean sea level rise at a rate of 5 mm per annum—a rate more than 3 times the trend over the previous 100 years (Pattiaratchi and Eliot 2005). This resulted in Fremantle recording maximum sea levels in 2003 and 2004.

Acknowledgements The authors acknowledge the contributions from Mathew Eliot and Ivan Haigh to this chapter, and Tony Lamberto and Reena Lowry from the Department for Transport (WA) for the provision of water level data.
References

Allan J, Komar P, Priest G (2003) Shoreline variability on the high-energy Oregon coast and its usefulness in erosion-hazard assessments. J Coast Res 38:83–105
Bode L, Hardy TA (1997) Progress and recent developments in storm surge modelling. J Hydraul Eng, ASCE 123:315–331
Boon JD (2004) Secrets of the tide: tide and tidal current analysis and applications, storm surges and sea level trends. Horwood, Cambridge, p 212
Church JA, White NJ, Coleman R, Lambeck K, Mitrovica JX (2004) Estimates of the regional distribution of sea level rise over the 1950–2000 period. J Clim 17(13):2609–2625
Douglas BC (2001) Sea level change in the era of the recording tide gauge. In: Douglas BC, Kearney MS, Leatherman SP (eds) Sea level rise: history and consequences. International geophysics series, vol 75. Academic Press, San Diego, pp 37–64
Easton AK (1970) The tides of the continent of Australia. Horace Lamb Centre for Oceanographical Research (Flinders University of South Australia) Research Paper No. 37
Eliot M (2010) Influence of inter-annual tidal modulation on coastal flooding along the Western Australian coast. J Geophys Res Oceans 115(C11013):11. doi:10.1029/2010JC006306
Eliot I, Clarke D (1986) Minor storm impact on the beachface of a sheltered sandy beach. Mar Geol 79:1–22
Eliot MJ, Pattiaratchi CB (2010) Remote forcing of water levels by tropical cyclones in south-west Australia. Continental Shelf Res 30:1549–1561
Fandry CB, Leslie LM, Steedman RK (1984) Kelvin-type coastal surges generated by tropical cyclones. J Physical Oceanogr 14:582–593
Feng M, Li Y, Meyers G (2004) Multidecadal variations of Fremantle sea level: footprint of climate variability in the tropical Pacific. Geophys Res Lett 31:L16302. doi:10.1029/2004GL019947
Gentilli J (1972) Australian climate patterns. Thomas Nelson, Melbourne
Godfrey JS, Ridgway KR (1985) The large-scale environment of the poleward-flowing Leeuwin current, Western Australia: longshore steric height gradients, wind stresses and geostrophic flow. J Phys Oceanogr 15:481–495
Hamon BV (1966) Continental shelf waves and the effects of atmospheric pressure and wind stress on sea level. J Geophys Res 71:2883–2893
Huyer A (1990) Shelf circulation. In: Le Méhauté B, Hanes DM (eds) The sea: ocean engineering science, vol 9A. Wiley, New York, pp 423–466
Ilich K (2006) Origin of continental shelf seiches, Fremantle, Western Australia. Honours thesis, School of Environmental Systems Engineering, The University of Western Australia
Komar PD, Enfield DB (1987) Short-term sea-level changes and coastal erosion. In: Nummedal D, Pilkey OH, Howard JD (eds) Sea-level fluctuation and coastal evolution. Society of Economic Paleontologists and Mineralogists, Special Publication 41, pp 17–27
Le Blond PH, Mysak LA (1978) Waves in the ocean. Oceanography series, vol 20. Elsevier Science, New York
Lemm A, Hegge BJ, Masselink G (1999) Offshore wave climate, Perth, Western Australia. Mar Freshw Res 50(2):95–102
Masselink G, Pattiaratchi CB (2001) Characteristics of the sea breeze system in Perth, Western Australia, and its effects on the nearshore wave climate. J Coastal Res 17:173–187
Miles J (1974) Harbour seiching. Annu Rev Fluid Mech 6:17–33
O'Callaghan J, Pattiaratchi CB, Hamilton D (2007) The response of circulation and salinity in a micro-tidal estuary to sub-tidal oscillations in coastal sea surface elevation. Continental Shelf Res 27:1947–1965
O'Callaghan J, Pattiaratchi CB, Hamilton D (2010) The role of intratidal oscillations in sediment resuspension in a diurnal, partially mixed estuary. J Geophys Res Oceans 115:C07018. doi:10.1029/2009JC005760
Pariwono JI, Bye JAT, Lennon GW (1986) Long-period variations of sea-level in Australasia. Geophys J Int 87:43–54
Pattiaratchi CB, Buchan SJ (1991) Implications of long-term climate change for the Leeuwin current. J R S West Aust 74:133–140
Pattiaratchi CB, Hegge B, Gould J, Eliot I (1997) Impact of sea-breeze activity on nearshore and foreshore processes in southwestern Australia. Continental Shelf Res 17:1539–1560
Pattiaratchi CB, Eliot M (2005) How our regional sea level has changed. Climate note 9/05 (August). Indian Ocean Climate Initiative. http://www.ioci.org.au/publications/pdf/IOCIclimatenotes_9.pdf
Pattiaratchi CB, Wijeratne EMS (2009) Tide gauge observations of the 2004–2007 Indian Ocean tsunamis from Sri Lanka and western Australia. Pure Appl Geophys (in press)
Pattiaratchi CB, Woo M (2009) The mean state of the Leeuwin current system between North West Cape and Cape Leeuwin. J R S West Aust 92:221–241
Provis DG, Radok R (1979) Sea-level oscillations along the Australian coast. Aust J Mar Freshw Res 30:295–301
Pugh DT (1987) Tides, surges and mean sea-level. Wiley, UK
Pugh DT (2004) Changing sea levels: effects of tides, weather, and climate. Cambridge University Press, Cambridge
Ranasinghe R, Pattiaratchi CB (2000) Tidal inlet velocity asymmetry in diurnal regimes. Cont Shelf Res 20:2347–2366
Reid R (1990) Tides and storm surges. In: Herbich J (ed) Handbook of coastal and ocean engineering: wave phenomena and coastal structures. Gulf Publishing Company, USA, pp 533–590
Shaw AGP, Tsimplis MN (2010) The 18.6 yr nodal modulation in the tides of Southern European Coasts. Continental Shelf Res 30:138–151
Tang YM, Grimshaw R (1995) A modal analysis of coastally trapped waves generated by tropical cyclones. J Phys Oceanogr 25:1577–1598
Thompson RORY (1984) Observations of the Leeuwin current off Western Australia. J Phys Oceanogr 14:623–628
Woo M, Pattiaratchi CB (2008) Hydrography and water masses off the Western Australian coast. Deep-Sea Research Part I: Oceanographic Research Papers 55:1090–1104
Wyrwoll KH, Zhu ZR, Kendrick GA, Collins LB, Eisenhauser A (1995) Holocene sea-level events in Western Australia: revisiting old questions. In: Finkl CW (ed) Holocene cycles: climate, sea level, and coastal sedimentation. J Coastal Res, special issue no. 17. Coastal Education and Research Foundation, pp 321–326
Chapter 8
Surface Waves

Diana Greenslade and Hendrik Tolman
Abstract In this chapter, we first present the governing equations for linear wave theory. This provides a simple yet powerful description of the wind-driven waves on the ocean surface. A number of important concepts are derived, including the dispersion relation. From the dispersion relation we examine some differences between waves in deep water and waves in shallow water; in particular, we demonstrate that deep water waves are dispersive, while shallow water waves are non-dispersive. The wave spectrum is introduced as a convenient way to characterise the distribution of energy in the wave field, and Significant Wave Height is defined. In the last section, we provide an overview of the fundamentals behind modern "third-generation" wave models. The balance equation for the wave energy spectrum is presented and the source terms are discussed. Various issues associated with operational wave forecasting systems are discussed, and some results of an ongoing wave forecast intercomparison project are presented. Finally, some indications of future directions for wave model research and operational systems are identified.
8.1 Introduction

The dominant features that we see when we look at the ocean are the wind-driven surface waves. At first sight, on a windy day such as that shown in Fig. 8.1, the waves can look very complex, with a multitude of different scales in existence, wave breaking, sea-spray flying from the surface and so on. It seems that we have a challenge before us to be able to describe the surface analytically, so that we can model and forecast the waves with any accuracy.
Fig. 8.1 A typical ocean surface. (Photo courtesy Eric Schulz, Bureau of Meteorology)
However, with a number of simplifying assumptions, linear wave theory can provide a simple yet powerful description of the wind-driven waves on the ocean surface. Although many of the assumptions may seem overly simplified at first, it will be seen that, in general, linear wave theory can be used to describe many of the dominant features that we see in the wind waves. Further to this, moving forward to a statistical description of the ocean surface, we will see that modern wave models can provide very good forecasts of the sea-state, assuming that they are driven with reasonable estimates of the surface winds. In this chapter, we first present the governing equations for linear wave theory and derive a few important concepts, the most important of these being the dispersion relation. From the dispersion relation we will be able to derive several interesting features of surface waves. Next, after the presentation of some basic definitions, we will provide an overview of the fundamentals behind modern "third-generation" wave models, which are currently implemented in a number of operational forecasting centres around the world.
8.2 Governing Equations

The treatment presented here is fairly standard and can be found with varying degrees of detail in books such as Young (1999), Holthuijsen (2007), and Kundu (1990).
Fig. 8.2 Framework for linear wave theory, showing the free surface η(x, t) and the bottom at z = −H
In order to apply linear wave theory to the problem of surface gravity waves, we need to make a number of assumptions. These are: (1) the amplitude of the waves is small compared to the wavelength and the depth of the water, (2) the depth of the water does not vary, (3) the waves are high frequency compared to the Coriolis frequency—this means we can ignore the rotation of the earth, (4) we also neglect surface tension—this means we are considering waves that are longer than about 5 cm, (5) the water is incompressible, (6) the water is of constant density, and (7) the motion is irrotational (and thus, viscosity can be ignored). The description of the motion as "irrotational" can cause some confusion. As can be shown, the resulting velocities in this case are, in a sense, rotational, in that the water particles move in a circular pattern as the wave propagates. However, the particles do not rotate on their own axes and the resulting motion does not include shearing of the fluid, and so in a mathematical sense the motion is irrotational. Another way of saying this is that there is no vorticity. We do not consider here the impact of the wind on the ocean surface, or the interaction of the waves with the bottom through shear stresses. Further, we will simplify the problem by only considering waves propagating in one direction (the x-direction). The free surface is described by η(x, t) and the depth of the water is H (see Fig. 8.2). To begin with, since the motion is irrotational, we can define a velocity potential φ(x, z, t) such that

$u(x, z, t) = \frac{\partial \phi}{\partial x} \quad \text{and} \quad w(x, z, t) = \frac{\partial \phi}{\partial z}$    (8.1)

The governing equations are the conservation of mass and the conservation of momentum. The conservation of mass is governed by the continuity equation:

$\frac{\partial u}{\partial x} + \frac{\partial w}{\partial z} = 0$    (8.2)

which becomes

$\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial z^2} = 0$    (8.3)

upon substitution of the velocity potential. This is known as the Laplace equation.
Given that the flow is irrotational and inviscid, the conservation of momentum is governed by the Bernoulli equation for unsteady flow:
$\frac{\partial \phi}{\partial t} + \frac{1}{2}\left(u^2 + w^2\right) + \frac{p}{\rho} + gz = 0$    (8.4)

For small amplitude waves, the velocity terms u and w will also be small, so we may neglect the squares of these terms. Thus:

$\frac{\partial \phi}{\partial t} + \frac{p}{\rho} + gz = 0$    (8.5)
So we have two governing equations—the Laplace equation and the Bernoulli equation and we need to solve these under the constraints of specific boundary conditions. There are three relevant boundary conditions here. Firstly, we have the kinematic boundary condition of no normal flow at the bottom, i.e.
w=
∂φ = 0 at z = −H ∂z
(8.6)
Secondly, we have the kinematic boundary condition at the surface. This dictates that fluid may not leave the surface, or in other words, the vertical velocity of the fluid is the same as the total velocity of the surface. Examination of the ocean surface depicted in Fig. 8.1 suggests that this is often not satisfied in reality. It is often possible to see sea spray departing the surface and being thrown into the air. However, we are dealing here with a simplified situation with no wind and no wave breaking. The boundary condition is:
w=
∂φ Dη ∂η ∂η = = +u at z = η ∂z Dt ∂t ∂x
(8.7)
Again, we note that u(∂η/∂x) is the product of two small terms so it may be neglected. Further, it can be shown (e.g. Kundu 1990) through a Taylor expansion of ∂φ/∂z that the boundary condition can be evaluated at z = 0 instead of z = η. Thus the kinematic boundary condition at the free surface becomes:
$\frac{\partial \phi}{\partial z} = \frac{\partial \eta}{\partial t} \quad \text{at } z = 0$    (8.8)
Our third boundary condition is the dynamic boundary condition at the free surface. This says that the pressure just below the surface is equal to the ambient, or atmospheric pressure. We may set this equal to anything we like, so we will set it to be equal to zero, thus:
p = 0 at z = η
(8.9)
The Bernoulli equation therefore becomes
$\frac{\partial \phi}{\partial t} + g\eta = 0 \quad \text{at } z = \eta$    (8.10)

As before, a Taylor expansion allows us to evaluate ∂φ/∂t at z = 0 instead of at z = η, so we have

$\frac{\partial \phi}{\partial t} = -g\eta \quad \text{at } z = 0$    (8.11)
Thus the problem can be stated as follows: we need to solve

$\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial z^2} = 0$    (8.12)

subject to

$\frac{\partial \phi}{\partial z} = \frac{\partial \eta}{\partial t} \ \text{at } z = 0, \qquad \frac{\partial \phi}{\partial z} = 0 \ \text{at } z = -H, \qquad \frac{\partial \phi}{\partial t} = -g\eta \ \text{at } z = 0$    (8.13)
This is fairly straightforward to solve and a detailed solution can be found in Kundu (1990). It involves assuming a sinusoidal form for η(x, t) and using separation of variables. The solution is found to be
$\phi(x, z, t) = \frac{a\omega}{k}\,\frac{\cosh k(z + H)}{\sinh kH}\,\sin(kx - \omega t)$    (8.14)
with
η(x, t) = a cos (kx − ωt)
(8.15)
where a is a constant (the wave amplitude), ω is the wave frequency (ω = 2πf = 2π/T, where T is the wave period) and k is the wave number (k = 2π/λ, where λ is the wavelength). From this, the velocity components u and w can readily be found using Eq. (8.1).
8.3â•…Dispersion Relation Substitution of the solutions for φ(x, z, t) and η(x, t) into the Bernoulli equation (Eq. 8.11) will give the following relationship between wave frequency and wavenumber:
$\omega^2 = gk \tanh kH$    (8.16)

This is the dispersion relation (so-called for reasons which will become apparent later). A number of useful properties of the motion can now be derived. We will firstly examine the differences between waves in deep water and waves in shallow water. Waves in deep water are defined to be those for which the depth of the water is large compared to the wavelength of the wave. We may consider the wavelength to be the inverse of the wave number (dropping the factor of 2π since we are dealing with orders of magnitude), thus deep water is defined as

$kH \gg 1$    (8.17)

We now consider what happens to the dispersion relation in this case. Looking at Fig. 8.3, we see that for large x, tanh x asymptotes towards 1, so this means that for large kH, tanh kH approaches 1 and the dispersion relation reduces to

$\omega^2 = gk$    (8.18)

On the other hand, shallow water waves are those for which the wavelength is long compared to the water depth, i.e. λ ≫ H and thus

$kH \ll 1$    (8.19)

Again, we consider the behaviour of tanh x, this time for small x, and see that it approaches the line y = x. So for small kH, tanh kH approaches kH and the dispersion relation reduces to

$\omega^2 = gHk^2$    (8.20)
Fig. 8.3 The graph of y = tanh x
One interesting question to ask is “How deep is deep water?” or “How shallow is shallow water?”. Consider the approximation that we have made for deep water, i.e.
tanh kH ≈ 1
(8.21)
The point at which deep water becomes "deep" is the point at which we claim that this approximation is true, so it really depends on how far along the asymptote you want to go. A glance at the plot of y = tanh x shows that tanh is already quite close to 1 at a value of x = 2, and in fact tanh(2.0) = 0.96…. This factor of 4% could well be "small enough", so if we take this to be the point beyond which the approximation holds, then deep water can be defined as that for which kH > 2. This means that
H>
λ π
(8.22)
or the water depth needs to be greater than about a third of the wavelength for the deep water approximation to apply. Typical swell waves in the ocean with periods of about 8 sec, have wavelengths of about 100 m, so these will be considered deep water waves right up until a depth of about 30 m, i.e. the waves will only start to feel the bottom when they are in water of less than 30 m depth. With the typical coarse spatial resolutions of global wave forecasting models (see Sect. 8.5) there are very few grid points that are in depths of 30 m or less, so it is often a reasonable approach to run global systems with deep water physics only. Now considering the shallow water approximation, we see that y = tanh x is very close to the y = x line for values of x less than about 0.5, in fact tanh(0.45) ≈ 0.422. Again, if we think that this is a tolerable approximation, then we can say that our shallow water approximation holds when kH < 0.45, or
H < 0.07λ
(8.23)
In other words, for the waves to behave as purely shallow water waves, the water depth needs to be less than 7% of the wavelength. Our 100 m long swell waves will thus only become purely shallow water waves when the water depth is less than 7 m. On top of this, wavelengths become shorter in shallow water, moving the shallow water limit for swell with deep water wavelengths of 100 m to even shallower water. An important point to note here is that the definitions for "deep water" and "shallow water" are actually defined as relationships between the wave and the water depth, rather than as an absolute value of the water depth, so there is no specific depth at which the water can be called either "deep" or "shallow". For example, the wavelength of a tsunami is related to the width of the rupture of the earthquake that generated it. This is typically of order 100 km wide. Therefore, tsunamis will act as shallow water waves when the water depth is less than 7% of 100 km, which is 7000 m. Almost all of the global ocean is shallower than this, so this is why tsunamis are considered to be shallow water waves.
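The deep- and shallow-water classifications above follow directly from the dispersion relation (Eq. 8.16). The sketch below, a minimal illustration rather than production code, solves Eq. 8.16 for the wavenumber by Newton-Raphson iteration and reports the wavelength and kH of an 8-s swell at several depths; the function name and iteration count are arbitrary choices.

```python
# Sketch: solve the linear dispersion relation omega^2 = g k tanh(kH) (Eq. 8.16)
# for the wavenumber k at a given depth, using Newton-Raphson iteration.
import numpy as np

g = 9.81  # gravitational acceleration (m/s^2)

def wavenumber(period, depth, n_iter=50):
    """Wavenumber k (rad/m) for a wave of the given period (s) in water of the given depth (m)."""
    omega = 2.0 * np.pi / period
    k = omega**2 / g                                   # deep-water first guess
    for _ in range(n_iter):
        f = g * k * np.tanh(k * depth) - omega**2
        dfdk = g * np.tanh(k * depth) + g * k * depth / np.cosh(k * depth)**2
        k -= f / dfdk
    return k

for depth in (5.0, 30.0, 4000.0):                      # shallow, intermediate, deep
    k = wavenumber(8.0, depth)                         # the 8-s swell used in the text
    print(f"H = {depth:7.1f} m: wavelength = {2*np.pi/k:6.1f} m, kH = {k*depth:6.2f}")
```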
8.3.1 Phase Velocity and Group Velocity

Some interesting features of wave propagation can be easily derived from the deep and shallow water approximations to the dispersion relation. The phase speed of a wave is simply the speed of propagation of the wave crest. The definition of the period (T) of a wave is the time taken for successive crests of a wave to pass a fixed point, thus a wave will move a distance λ in time T and so the phase speed (cp) is
$c_p = \frac{\lambda}{T} = \frac{\omega}{k}$    (8.24)
For a disturbance represented by a number of different sinusoidal waves, the group velocity describes the velocity at which the energy of the group of waves is propagating. This can be shown to be (e.g. Holthuijsen 2007; Young 1999)
$c_g = \frac{d\omega}{dk}$    (8.25)
So we see that in deep water, from Eq. (8.18):

$c_p = \sqrt{\frac{g}{k}} \quad \text{and} \quad c_g = \frac{1}{2}\sqrt{\frac{g}{k}}$    (8.26)
while in shallow water (Eq. 8.20):

$c_p = \sqrt{gH} \quad \text{and} \quad c_g = \sqrt{gH}$    (8.27)
Equation (8.26) says that in deep water, the individual waves are propagating at twice the speed of the energy that they are carrying. This is an intriguing concept and it can be seen quite easily in nature. If you throw a small stone in a puddle, providing the puddle is deep enough, you will see a group of ripples propagating outwards obeying the deep water dispersion relation. As the ripples propagate away from the disturbance, you will see that individual waves appear at the back of the group, move forwards through the group and then disappear as they get to the front of the group. Equation (8.26) also shows that the speed of propagation of the waves is related to the wavenumber, so waves of different wavelengths will propagate at different speeds. For a disturbance composed of waves of a number of different frequencies (or wavelengths), as they propagate away from the area of disturbance the longer waves will travel faster than the shorter waves and thus the wave energy will disperse. This is where the term dispersion relation comes from. Equation (8.27) says that in shallow water, the individual waves propagate at the same speed as the wave energy and this speed is dependent only on the water depth. Thus waves of all wavelengths will travel at the same speed and shallow water waves are therefore non-dispersive.
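A few lines of arithmetic make the contrast concrete. The sketch below evaluates the limiting expressions (Eqs. 8.26 and 8.27) for the 8-s swell discussed earlier and for a long wave in 4000 m of water, a depth at which, by the 7% rule above, a tsunami still behaves as a shallow-water wave; the numbers are illustrative.

```python
# Sketch of the deep- and shallow-water limits for phase and group speed
# (Eqs. 8.26 and 8.27).
import numpy as np

g = 9.81

# Deep water: k from omega^2 = g k, then cp = sqrt(g/k) and cg = cp/2.
T = 8.0                                    # wave period (s)
omega = 2.0 * np.pi / T
k = omega**2 / g
cp_deep = np.sqrt(g / k)
print(f"8-s swell, deep water: cp = {cp_deep:.1f} m/s, cg = {0.5*cp_deep:.1f} m/s")

# Shallow water: cp = cg = sqrt(gH), independent of wavelength (non-dispersive).
H = 4000.0                                 # water depth (m)
print(f"Long wave in {H:.0f} m of water: cp = cg = {np.sqrt(g*H):.0f} m/s")
```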
In addition to these interesting features of wave propagation, further useful properties of the motion can be derived from Eq. (8.14). For example, it can be shown that the trajectories of the fluid particles (defined by u and w) describe circles in deep water and ellipses in shallow water. These are often referred to as the orbital velocities of the waves. The derivation is not shown here, but details can be found in Young (1999), Holthuijsen (2007) or Kundu (1990).
8.4 Basic Definitions

The analysis above is mainly concerned with the very simple situation where we consider just one sinusoidal wave component. We have seen that it is possible to derive some readily observed characteristics of the ocean surface with the various assumptions; however, it is clear that this is not a valid description of the actual ocean surface. A more appropriate description is that the sea-surface is characterised as the superposition of a large number of sinusoidal components, with each of these sinusoidal components behaving as described in the previous section. Figure 8.4 shows an example with five sinusoidal components. Each of these components has a different frequency and a different amplitude, and they sum together to produce the more complex sea-surface elevation depicted at the bottom. This is again just in one dimension, but it can easily be extended to two dimensions by considering a range of different wave directions as well. Thus, the sea-surface elevation in general can be described by
$\eta(t) = \sum_{i=1}^{N} a_i \sin(\omega_i t + \phi_i)$    (8.28)
where ai , ωi and φi represent the amplitude, frequency and phase of the ith wave component, respectively.
Fig. 8.4 Representation of a 1-D ocean surface as a sum of 5 sinusoidal components
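A sea surface of the kind sketched in Fig. 8.4 is easy to reproduce numerically from Eq. (8.28). In the sketch below the five amplitudes, frequencies and phases are arbitrary illustrative values, not those used to draw the figure.

```python
# Sketch of Eq. (8.28): a 1-D sea-surface elevation built from five sinusoidal
# components with arbitrary amplitudes, frequencies and phases.
import numpy as np

t = np.linspace(0.0, 120.0, 2401)                    # two minutes of record (s)
amps = np.array([0.5, 0.35, 0.25, 0.15, 0.10])       # a_i (m)
freqs = np.array([0.08, 0.10, 0.12, 0.14, 0.16])     # f_i (Hz)
phases = 2.0 * np.pi * np.random.default_rng(2).random(5)   # phi_i (rad)

eta = sum(a * np.sin(2.0 * np.pi * f * t + p)
          for a, f, p in zip(amps, freqs, phases))   # the combined surface elevation
```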
8.4.1 The Wave Spectrum

Consider the variance of the sea-surface elevation. This is, by definition, the mean of the square of the surface elevation, and so, assuming the mean of η is zero:
$\text{variance} = \sigma^2 = \frac{1}{2}\sum_{i=1}^{N} a_i^2$    (8.29)
We can also consider how this variance is distributed over the different frequencies present in the wave field, i.e. over frequency intervals Δf_i. This gives us the variance density spectrum:

$F(f_i) = \frac{a_i^2}{2\Delta f_i}$    (8.30)

which becomes, in the limit,

$F(f) = \lim_{\Delta f \to 0} \frac{a_i^2}{2\Delta f}$    (8.31)
or

$\sigma^2 = \int_0^\infty F(f)\, df$    (8.32)
This is the frequency spectrum. It can be generalised to the directional case as
$\sigma^2 = \int_0^{2\pi}\!\!\int_0^\infty F(f, \theta)\, df\, d\theta$    (8.33)
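The chain from component amplitudes to spectrum and back to the variance (Eqs. 8.29–8.32) can be checked numerically. The sketch below reuses the five illustrative components from the earlier sketch and assumes a uniform 0.02 Hz bin width; both are arbitrary choices made for the example.

```python
# Sketch linking Eqs. (8.29)-(8.32): build a discrete variance density spectrum from
# the component amplitudes and confirm that its integral recovers the variance.
import numpy as np

amps = np.array([0.5, 0.35, 0.25, 0.15, 0.10])       # a_i (m), as in the earlier sketch
df = 0.02                                            # assumed frequency bin width (Hz)

F = amps**2 / (2.0 * df)                             # variance density (m^2/Hz), Eq. (8.30)
variance = np.sum(F * df)                            # discrete form of Eq. (8.32)
print(variance, np.sum(amps**2) / 2.0)               # both equal sum(a_i^2)/2
```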
So to summarise, the directional frequency spectrum F(f, θ) can be used to describe the variability of the sea-surface elevation. Note that there is no phase information in this description, so the actual surface elevation as depicted in Fig. 8.4 could not be reconstructed from the spectrum; instead, it describes the distribution of the energy in the wave field according to wave frequency and direction. The wave spectrum is a very useful construct and is the prognostic variable for current state-of-the-art wave models. A couple of examples of directional wave spectra are shown in Fig. 8.5. The top panel of this figure shows both a full directional wave spectrum and its directionally integrated one-dimensional equivalent. This depicts a relatively simple sea-state in which most of the wave energy is propagating towards the west, with a fairly large spread around this direction. The peak energy occurs at a frequency of around 0.15 Hz, i.e. most of the energy is being carried by waves with a period of about 6.7 sec (this is the peak period, Tp). For the spectrum shown in the bottom
Fig. 8.5 Examples of directional wave spectra. (The upper panel also shows the directionally integrated spectral density (m² sec) as a function of frequency (Hz).)
panel, there are a number of different components to the sea state, with wave energy clearly propagating in a number of different directions. You can imagine that the sea-state described by this wave spectrum would look quite complicated and very different to the wave field represented by the spectrum in the top panel.
8.4.2 Significant Wave Height

Significant Wave Height (Hs) is another very important concept that is used frequently to describe the sea state. The idea of wave height for a simple sinusoidal wave is trivial—the wave height is defined to be twice the amplitude, so for each of the 5 wave components depicted in Fig. 8.4, it is straightforward to determine the wave height. But what is the wave height of the resulting wave field? Hs has come to be used to describe a number of different "wave heights" that can be derived from a wave field. These are all typically very close in value, but given their different methods of derivation, there are some subtle differences of which it is important to be aware. The original definition is that based on visual observations. Someone out on a boat in the open ocean can observe the waves and estimate what the "average" wave height is. Clearly this will be a subjective estimate and different observers may well produce different wave height estimates. This is called the Significant Wave Height. A second definition is that obtained through direct observations of the sea-surface elevation. In this case, the Significant Wave Height is defined to be the average of the one-third highest waves in a sample, where a "wave" is defined through the upward or downward crossing definition (see, for example, Holthuijsen (2007) for definitions of these). In this case, the resulting wave height should more accurately be referred to as H1/3, but Significant Wave Height is more often used. It has been shown that the visually observed wave height is closely correlated to this definition of wave height (Jardine 1979). It implies that an observer only sees the higher waves, and automatically ignores smaller waves riding on the dominant waves. Hs can also be derived from the wave spectrum. Using the definition that it is the mean value of the highest one-third of the waves in a given record, and assuming that the wave heights (or more specifically the crest heights) are Rayleigh-distributed, then H1/3 can be shown to be equal to (Holthuijsen 2007):

$H_{1/3} = 4.004\ldots\sqrt{m_0}$    (8.34)

where m0 is the zeroth-order moment of the wave spectrum given by
$m_0 = \int_0^{2\pi}\!\!\int_0^\infty F(f, \theta)\, df\, d\theta$    (8.35)
This is equivalent to the volume enclosed by the two-dimensional spectrum (the one-dimensional version would be the area under the curve of the one-dimensional
spectrum). The value of 4.004… is typically rounded to 4 and so the spectrally-derived definition of H1/3, which more formally should be referred to as Hm0, can be written as

$H_{m0} = 4\sqrt{\int_0^{2\pi}\!\!\int_0^\infty F(f, \theta)\, df\, d\theta}$    (8.36)
Again, this is almost always referred to as Hs. In order to determine this from a modelled wave spectrum, the integral needs to be expressed as a sum over the discrete frequency and directional range of the modelled spectrum. Given that the model has a limited range of frequencies that it can resolve, a high-frequency tail is usually included, with a slope of f^−n, where n is usually 4 or 5, so it is straightforward to determine the area under this part of the spectrum and add it to the Hs. (See the one-dimensional spectrum in Fig. 8.5—the spectral values stop abruptly at the highest frequency that the model is able to resolve.) The Significant Wave Height is a statistical measure for the wave height. Clearly, individual waves can be both lower and higher. It can be shown that in a simple spectrum describing a single coherent wave system, the probability distribution of the height of individual waves closely follows the Rayleigh distribution (e.g., Holthuijsen 2007). This distribution implies that 1 in 100 waves is expected to be as large as 1.51Hm0, and 1 in 1000 waves is expected to be as large as 1.86Hm0. Higher waves rapidly become less likely, which is why waves higher than approximately 2.0Hm0 are typically called "freak" or "rogue" waves. We have seen here that there are a number of different ways of describing the "wave height" of a particular wave field and these are typically all referred to as Significant Wave Height, or Hs. Clearly, this one value used for describing the sea-state is a gross simplification. It would be reasonable to use this to describe a simple sea-state in which there is only one dominant component to the wave field, but consider the two sea-states in Fig. 8.5. The Hs is similar in each panel (Hs = 1.36 m in the top panel compared to Hs = 1.03 m in the bottom panel) even though the sea-states depicted by the spectra are very different. Simply using Hs to describe a sea-state means that you lose a lot of information about the structure of the wave field. This is similar to giving a weather forecast with a simple maximum temperature value. It doesn't tell you whether you need to take your umbrella or not!
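A discretised version of Eq. (8.36), including the high-frequency tail and the Rayleigh exceedance levels quoted above, is sketched below. The spectrum is a one-dimensional Pierson-Moskowitz-type shape used purely as a convenient analytic example (operational models would integrate the directional spectrum), and the f^−5 tail beyond the last resolved frequency is integrated analytically.

```python
# Sketch: Hm0 from a discretised 1-D frequency spectrum (Eq. 8.36), with an f^-5
# high-frequency tail added beyond the last resolved frequency, plus the Rayleigh
# exceedance probabilities for individual wave heights.
import numpy as np

g = 9.81
f = np.arange(0.04, 0.405, 0.005)                    # resolved frequencies (Hz)
fp, alpha = 0.10, 0.0081                             # peak frequency and Phillips constant
F = alpha * g**2 * (2.0 * np.pi)**-4 * f**-5 * np.exp(-1.25 * (fp / f)**4)

m0 = np.trapz(F, f)                                  # zeroth moment over the resolved range
m0 += F[-1] * f[-1] / 4.0                            # analytic integral of the f^-5 tail
Hm0 = 4.0 * np.sqrt(m0)
print(f"Hm0 = {Hm0:.2f} m")

# Rayleigh-distributed wave heights: P(H > x * Hm0) = exp(-2 x^2)
for x in (1.51, 1.86, 2.0):
    print(f"P(H > {x:.2f} Hm0) is about 1 in {1.0 / np.exp(-2.0 * x**2):.0f}")
```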
8.5 Operational Wave Modelling

8.5.1 Background and Basics

This section focuses on operational wave modelling in the context of wave forecasting. As mentioned previously, most current state-of-the-art wave forecast models
are phase-averaged third-generation models, which have the wave spectrum as their prognostic variable. The most common models in use at international forecasting centres are WAM (WAMDIG 1988; Komen et al. 1994) and WAVEWATCH III® (Tolman et al. 2002, 2009). These are computationally efficient models that can be used for large scale global forecasting. The SWAN model (Booij et al. 1999; Ris et al. 1999) is also extensively used, but more for near-shore engineering applications. A review of the state of the art of operational (and research) wave modelling can be found in Cavaleri et al. (2007). The basis of virtually all wind wave models used in operational forecasting is some form of the balance equation for the wave energy spectrum F(f, θ) as discussed in Sect. 8.4.1. In its most simple form, it is given as
$\frac{\partial F}{\partial t} + \nabla\cdot(c_g F) = S_{in} + S_{nl} + S_{ds} + S_{bot}$    (8.37)
where the left hand side represents the effects of linear propagation, and the right hand side represents sources and sinks for spectral wave energy. Propagation, in its simplest form, only considers wave components in the spectrum to propagate along great circles, until the wave energy gets absorbed at the coast (either as part of the propagation algorithm, or due to the dissipation source terms). More advanced versions of this equation, as used in prevalent models, also consider refraction (changing of wave direction due to interaction with the bottom in shallow water) and shoaling (changing of wave height and length due to changing water depths), and some consider similar effects due to the presence of mean currents. So far, all operational wave models consider linear propagation only. Many operational models now address the effects of unresolved islands and reefs as sub-grid obstructions. Traditionally, three source terms have been considered: Sin, describing the input of wave energy due to the action of the wind; Snl, describing the effects of nonlinear interactions between waves; and Sds, describing the loss of wave energy due to wave breaking or "whitecapping". Many early models for shallow water applications added a wave-bottom interaction source term, Sbot, which was typically concerned with wave energy loss due to friction in the bottom boundary layer. Of these source terms the nonlinear interactions have a special relevance. Effects of nonlinear interactions occur as source terms in this equation because the propagation description in the equation is strictly linear. Furthermore, the interactions are essential for wave growth, and not for propagation. They represent the lowest order process known to effectively lengthen waves during growth, and they have been shown to stabilize the spectral shape at frequencies higher than the spectral peak (e.g., Komen et al. 1994). Nonlinear interactions consider resonant exchanges of energy, action and momentum between four interacting wave components, governed by a six-dimensional integration over spectral space. The SWAMP study in the 1980s (SWAMP Group 1985) identified the explicit computation of these interactions as essential for practical wave models. The development of the Discrete Interaction Approximation (DIA) (Hasselmann et al. 1985) made this economically feasible. Models that explicitly compute nonlinear four-wave interactions are identified as third-generation wave models.
Present operational wave models address source terms in a much more detailed fashion. Wind input is turning into wind-wave interactions, and can include feedback of energy and momentum to the atmosphere (“negative input”). Further to this, wave breaking is seen as impacting on atmospheric turbulence, and hence influencing atmospheric stresses and wave growth. Nonlinear interactions now regularly include both four-wave interactions in deep water and three-wave (triad) interactions in shallow water. Wave dissipation now regularly addresses traditional whitecapping in the deep ocean, and separate mechanisms for depth-induced (“surf”) breaking, and much slower dissipation mechanisms that influence swell travelling across basins with decay time scales of days to weeks. Many additional wave-bottom interactions are also considered in shallow water. Most prevalent are bottom friction source terms, but other processes such as wave-sediment interactions associated with bottom friction, percolation and scattering of waves due to bottom irregularities have been proposed and are available in some wave models. Of special recent interest is the interaction of waves with muddy bottoms, which both adds a source term and may modify the dispersion relation and hence wave propagation. Source terms for other processes such as wave-ice interactions and effects of rain on waves have been proposed, but are presently not used in any practical wave models.
8.5.2 Operational Centres

Many operational weather forecast centres run operational wind wave models. This is not done by accident. During the 1974 Safety of Life at Sea (SOLAS) conference, international agreement was reached to consider wind waves as part of the weather, explicitly giving weather forecast centres the responsibility to do wave forecasting for the public. The first numerical wave predictions, however, far precede this date, and in the U.S.A. can be traced back to 1956 (see historical overview in Tolman et al. 2002). Many of the larger weather forecast centres, such as the European Centre for Medium Range Weather Forecasts (ECMWF, Europe; web site at http://www.ecmwf.int), the National Centers for Environmental Prediction (NCEP, USA; wave data at http://polar.ncep.noaa.gov/waves) and the Bureau of Meteorology (Bureau, Australia; wave data at http://www.bom.gov.au/marine/waves.shtml), produce wave forecasts for up to 10 days ahead, on 6–12 h forecast cycles. Most of these centres use a global wave model, with one or more higher-resolution nested regional models for areas of special interest. For example, the configuration of WAM at the Bureau (as at end of 2009) is shown in Fig. 8.6. The highest resolution model (blue boundary) is run at 0.125° resolution in latitude and longitude, and is nested inside a model at 0.5° spatial resolution (red boundary) which is in turn nested inside the global model at 1°. Typically, the higher
Fig. 8.6 Examples of configurations of some operational wave model systems. Top panel shows the Bureau and bottom panel shows NCEP. (The legend of the lower panel indicates grid resolutions ranging from 30 × 30 down to 4 × 4 arc minutes.)
resolution models obtain data from the lower resolution models without feeding any information back, but full two-way nesting of such models is now used at NCEP (Tolman 2008). The configuration of the NCEP system (as at end of 2009) is also shown in Fig. 8.6. This incorporates a range of different spatial resolutions, ranging from global at 0.5° down to the highest resolution models at 4 arc minutes (1/15th of a degree) around the coastlines. The spatial resolutions of the wave models are typically dictated by the resolutions of the atmospheric models from which the wave models obtain their wind forcing and, additionally, by the availability of computing resources. In an operational forecasting environment, a major consideration is the
time taken for the model to complete a forecast and the speed with which the results can be disseminated. Some centres also run specialised wave models for specific conditions; for example, NCEP run wave models specifically for hurricanes, with specialized forcing from hurricane weather models. Finally, several centres run wind wave ensembles, to provide probabilistic information on the expected reliability of the forecast. While such ensembles have been generated for up to a decade, they have not been scrutinized as much as corresponding atmospheric ensembles, and may not have reached the same level of maturity. Further details of operational wave forecast systems can generally be found at the websites for the forecast centres given above. In addition to differences in the spatial resolutions of the models, there is considerable variety in other aspects of the operational implementations of wave forecast systems at each forecasting centre. For example, the wind forcing used to drive the wave model will typically be provided by the centre's Numerical Weather Prediction (NWP) model, and these can vary considerably in detail. Whether the wave model incorporates data assimilation or not can also contribute to differences in the forecasts. The most widely used data source that is assimilated in wave models is Hs from satellite altimeters. This can significantly improve the skill of wave forecasts (Greenslade and Young 2005), particularly in cases where the surface winds are known to have deficiencies. One limitation of the assimilation of Hs data is that it cannot provide any direct information on the observed wave spectrum, so a number of assumptions need to be made in adjusting the modelled spectrum (Greenslade 2001). This issue can be somewhat overcome by incorporating the assimilation of wave spectra from Synthetic Aperture Radar (SAR), such as is performed at the ECMWF (ECMWF 2008). In situ wave buoys could also provide wave spectra for assimilation. However, the limitation of these is that, compared to satellite data, they are very sparsely distributed and they tend to be located near the coast, for logistical reasons. The fact that they are typically not used in wave data assimilation schemes means that they can be used as a valuable independent data source for model verification. Many of the operational forecast centres share their model results through a wave model intercomparison study supported by the Joint Commission for Oceanography and Marine Meteorology (JCOMM) (Bidlot et al. 2007). Model forecasts are also compared to observations from in situ buoys around the globe. This project provides a mechanism for benchmarking and the quality assurance of wave forecast products. The results are available each month to all participants and published on the web (http://www.jcomm.info). An example of the intercomparison at one location is shown in Fig. 8.7. This shows 24-hour forecasts of Hs and Tp at buoy 44005 (located 78 nautical miles off the coast of New Hampshire, in the northwest Atlantic) for the month of November 2009. In the top panel, it can be seen that all wave models are able to forecast the Hs reasonably well, with the synoptic scale variability being captured very well. There is some spread around the observed Hs, and for this example, most of the models have overpredicted the peak Hs occurring around the 15th of November. The Tp is also quite well captured this month, particularly the dominance of long waves (high
Fig. 8.7 An example of results from the wave intercomparison activity
wave period) during the middle of the month and the trend towards shorter period waves at the end of the month. The high variability in Tp seen in both the observations and the models from the 3rd to the 13th suggests that there were a number of different wave systems present during this period. There are also a number of summary results from this intercomparison activity produced each month. An example is shown in Fig. 8.8. This shows the
Fig. 8.8 Summary statistics for one month from the wave model intercomparison project. Each coloured line represents forecasts from a different operational centre. Top panel: Hs; middle panel: u10; bottom panel: Tp. The x-axis in each panel represents forecast period, in days
root-mean-square (rms) error amongst the forecast models, averaged over all buoy data available, for the three parameters Hs, Tp and u10 (wind speed at 10 m above the surface). The error is defined as the difference between the modelled and observed parameters. The rms error can be seen as a measure of the skill of a model. Figure 8.8 shows that the rms error for a 24-hour forecast (1 day) is approximately 0.5 m, although it varies from about 0.4 m to about 0.7 m. Errors of wave models normalized with mean conditions for the better models are of the order of 15% for hindcasts and short term forecasts (results not shown). Another feature obvious from this figure is the growth in error with forecast period. It can also be seen that there is a strong correlation between the rms error in the surface winds and the rms error of the wave forecasts, i.e. those centres that have accurate surface winds also have high skill for the wave forecasts. With continuously improving weather models at all centres, differences in wave models and in the selection of numerical and physical options in these models are becoming more and more apparent and important. After a decade of relatively small changes to wind wave modelling approaches, this has recently led to an increased interest in improved physics approaches in the corresponding wave models.
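The verification statistics behind figures like Fig. 8.8 are straightforward to compute once modelled and observed values have been collocated. The sketch below uses synthetic model-buoy pairs purely to show the bias, rms error and a normalised (scatter-index-style) error of the kind quoted above; the numbers and variable names are illustrative.

```python
# Sketch: verification statistics for forecast Hs against collocated buoy observations.
# The collocated pairs below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
hs_buoy = rng.gamma(shape=4.0, scale=0.5, size=500)             # observed Hs (m)
hs_model = hs_buoy + rng.normal(0.1, 0.4, hs_buoy.size)         # modelled Hs (m)

error = hs_model - hs_buoy
bias = error.mean()
rmse = np.sqrt(np.mean(error**2))
normalised = rmse / hs_buoy.mean()                              # cf. the ~15% quoted above

print(f"bias = {bias:.2f} m, rmse = {rmse:.2f} m, normalised error = {100*normalised:.0f}%")
```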
8.5.3 Outlook

As mentioned above, a renewed interest in wave model development has surfaced in the last few years. This is particularly clear in the recently started National Oceanographic Partnership Program (NOPP) project, which aims to provide the next generation of source term formulations for operational wind wave models. All source terms in the wave models will be addressed in this study, with a focus on deep water and continental shelf physics. A greater focus of operational centres on coastal wave modelling is also emerging, partly due to increased requirements from users of the service and partly due to the increasing ability of wave models to address this problem, given advances in computing power. With this, alternative modelling approaches such as curvilinear and unstructured grids are becoming more prevalent and more important. Furthermore, the mode of operation of many forecast centres is slowly changing. Traditionally, operational centres have focused on isolated topical forecast problems such as weather and waves. Increasingly, such centres are moving toward an integrated earth-system modelling approach, in which the links between models are seen as essential to improving the quality of the individual models. Wind waves are literally the interface between the atmosphere and the ocean. In a systems design approach, a wind wave model could become an advanced boundary layer module for an integrated atmosphere-ocean modelling system. At ECMWF, a first step in this direction was made more than a decade ago, when their wind wave model started providing real-time surface roughness information (including wave-induced roughness) to the weather model. At NCEP, coupled atmosphere-ocean models are used for climate and hurricane prediction.
Experimental versions of the hurricane model now include a three-way coupled system, consisting of a full weather model (HWRF), a full ocean model (HYCOM) and a full wave model (WAVEWATCH III). A similar system is under development at the Bureau. In such a model the wind waves play a key role: they modify the surface roughness and therefore the stresses; they may temporarily store momentum extracted from the atmosphere and release it to the ocean in a geographically distant location; and spray generated by waves influences (and links) the momentum, heat and mass fluxes between the ocean and the atmosphere. Indeed, the most complete estimates of spray production are directly related to the wave spectrum, and hence require a full wave model. Another forecast problem in which wind waves become important is coastal inundation, where many inundation events are driven directly by the momentum of incoming swell rather than by wind pushing up water as in a traditional storm surge situation. Several decades of experience with wave-driven coastal circulation and inundation problems can be found in the civil engineering literature, but this experience has not yet been incorporated into operational forecasting procedures.
References

Bidlot J-R, Li JG, Wittmann P, Fauchon M, Chen H, Lefevre J-M, Bruns T, Greenslade DJM, Ardhuin F, Kohno N, Park S, Gomez M (2007) Inter-comparison of operational wave forecasting systems. Proceedings of the 10th international workshop on wave hindcasting and forecasting, Oahu, Hawaii, USA, Nov 2007
Booij N, Ris RC, Holthuijsen LH (1999) A third-generation wave model for coastal regions 1. Model description and validation. J Geophys Res 104:7649–7666
Cavaleri L, Alves JHGM, Ardhuin F, Babanin AV, Banner ML, Belibassakis K, Benoit M, Donelan MA, Groeneweg J, Herbers THC, Hwang P, Janssen PAEM, Janssen T, Lavrenov IV, Magne R, Monbaliu J, Onorato M, Polnikov V, Resio DT, Rogers WE, Sheremet A, McKee Smith J, Tolman HL, Van Vledder G, Wolf J, Young IR (2007) Wave modeling—the state of the art. Prog Oceanogr 75:603–674
ECMWF (2008) IFS Documentation—CY33r1, Part VII: ECMWF wave model. http://www.ecmwf.int/research/ifsdocs/CY33r1/WAVES/IFSPart7.pdf
Greenslade DJM (2001) The assimilation of ERS-2 significant wave height data in the Australian region. J Mar Syst 28:141–160
Greenslade DJM, Young IR (2005) The impact of inhomogenous background errors on a global wave data assimilation system. J Atmos Ocean Sci 10(2). doi:10.1080/17417530500089666
Hasselmann S, Hasselmann K, Allender JH, Barnett TP (1985) Computation and parameterization of the nonlinear energy transfer in a gravity wave spectrum. Part II: parameterizations of the nonlinear energy transfer for application in wave models. J Phys Oceanogr 15:1378–1391
Holthuijsen LH (2007) Waves in oceanic and coastal waters. Cambridge University Press, Cambridge
Jardine TP (1979) The reliability of visually observed wave heights. Coast Eng 3:33–38
Komen GJ, Cavaleri L, Donelan M, Hasselmann K, Hasselmann S, Janssen PAEM (1994) Dynamics and modelling of ocean waves. Cambridge University Press, Cambridge, p 532
Kundu PK (1990) Fluid mechanics. Academic Press Inc., San Diego
Ris RC, Holthuijsen LH, Booij N (1999) A third-generation wave model for coastal regions 2. Verification. J Geophys Res 104:7667–7681
SWAMP Group (1985) Ocean wave modeling. Plenum Press, London, p 256
Tolman HL (2008) A mosaic approach to wind wave modeling. Ocean Model 25:35–47
Tolman HL (2009) User manual and system documentation of WAVEWATCH III version 3.14. NOAA/NWS/NCEP/MMAB Technical Note 276. http://polar.ncep.noaa.gov/mmab/papers/tn276/MMAB_276.pdf
Tolman HL, Balasubramaniyan B, Burroughs LD, Chalikov DV, Chao YY, Chen HS, Gerald VM (2002) Development and implementation of wind generated ocean surface wave models at NCEP. Weather Forecast 17:311–333
WAMDIG (1988) The WAM model—a third generation ocean wave prediction model. J Phys Oceanogr 18:1775–1810
Young IR (1999) Wind generated ocean waves. Elsevier Science Ltd, Amsterdam
Chapter 9
Tides and Internal Waves on the Continental Shelf
Gregory N. Ivey
Abstract We review recent laboratory experiments, field observations and numerical modeling of internal waves produced by tidal motions, with specific focus on the Australian North West Shelf. Distinct regimes are observed depending upon the characteristics of the ambient density stratification, the topography, and the intensity of the tidal forcing. The character of the near-boundary flow in the region where the waves are generated is very important in determining the internal wave response. When cyclones are present, the intense mixing over the water column can suppress the formation of tidally generated internal wave motions for many days.
9.1 Introduction

Internal waves are ubiquitous in the ocean and can be generated by turbulent stirring (e.g. Munroe and Sutherland 2008) or by mean motions, such as tidal flow over topography (e.g. Baines and Fang 1985). The action of the tide sweeping stratified water over oceanic topography leads to the generation of internal waves of tidal origin (internal tides) which can, in turn, play an important role in deep ocean mixing and the large-scale ocean circulation (e.g. Munk and Wunsch 1998; Wunsch and Ferrari 2004); these internal tides are the focus of the present paper. Freely propagating internal waves with frequency ω propagate energy in the direction of the group velocity vector at an angle θ to the horizontal given by the dispersion relation
ω² = N² sin²θ + f² cos²θ ≈ N² sin²θ    (9.1)
where the simplification is valid provided the Coriolis parameter f is small compared to the buoyancy frequency N. An important parameter in the tidal generation
of internal waves is thus the topographic steepness parameter γ = S/α, where S = hs/ls is the average slope of the topography (hs and ls are characteristic vertical and horizontal lengthscales) and the wave slope is defined as α = tan θ. Note that by this definition γ is an overall parameter, and the usual definition of a critical point (e.g. Gostiaux and Dauxois 2007; Zhang et al. 2008) is where the local bottom slope matches the wave ray slope. In addition to the forcing frequency ω, as the tide is characterized by a tidal velocity U0, other parameters of importance in internal tide generation are (e.g. Garrett and Kunze 2007) the topographic Froude number Fr = U0/(N hs) and the tidal excursion parameter U0/(ω ls). For subcritical topography (γ < 1), and in the limits U0/(ω ls) ≪ 1 and hs/H ≪ 1, linear internal tides are generated (Balmforth et al. 2002; Bell 1975; Legg and Huijts 2006). As the topography approaches criticality (γ = 1), the internal tide manifests itself as a beam-like structure emanating from the critical point on the topography (Gostiaux and Dauxois 2007; Griffiths and Grimshaw 2007), while for U0/(ω ls) > 1 the response is dominated by higher harmonic frequencies (e.g. Bell 1975). Internal wave motions are commonly observed near continental slopes (Holloway et al. 2001; Lien and Gregg 2001), seamounts (Lueck and Mudge 1997; Toole et al. 1997), mid-ocean ridges (Ray and Mitchum 1997) and near continental shelf regions such as the Australian North West Shelf (NWS). The NWS has strong tidal forcing and is home to many vigorous internal wave motions which can play an important role in the energy budget, and hence the turbulent stirring, of the Shelf waters (e.g. Holloway et al. 2001; Van Gastel et al. 2009). The NWS region lies in the parameter space γ < 2, U0/(ω ls) ≪ 1, and Fr ≪ 1. Field measurements are sparse and only provide information at point locations, so it is often difficult to identify the physical generation mechanism of the internal tide and the subsequent internal wave propagation and dissipation in a given region. This paper therefore reviews recent laboratory experiments, field observations and numerical modeling of internal tide generation, with specific focus on the NWS. As the NWS is also prone to cyclones during the summer season, we conclude with an examination of cyclone influences on the generation of internal tides.
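To make these non-dimensional parameters concrete, the short Python sketch below evaluates the wave slope from the dispersion relation (9.1) and then γ, Fr and the tidal excursion parameter for an assumed set of tide and slope values. The numbers are illustrative only and are not quoted from the chapter.

```python
import numpy as np

# Illustrative (assumed) values for a semidiurnal tide over a continental slope
omega = 1.41e-4          # M2 tidal frequency (rad/s)
f = 5.0e-5               # Coriolis parameter at ~20 deg latitude (rad/s)
N = 5.0e-3               # buoyancy frequency (rad/s)
U0 = 0.1                 # barotropic tidal velocity (m/s)
h_s, l_s = 1000.0, 20000.0   # characteristic topographic height and width (m)

# Wave ray slope from the dispersion relation (9.1): omega^2 = N^2 sin^2(theta) + f^2 cos^2(theta)
sin2 = (omega**2 - f**2) / (N**2 - f**2)
theta = np.arcsin(np.sqrt(sin2))
alpha = np.tan(theta)              # wave characteristic slope
S = h_s / l_s                      # mean topographic slope
gamma = S / alpha                  # topographic steepness parameter
Fr = U0 / (N * h_s)                # topographic Froude number
excursion = U0 / (omega * l_s)     # tidal excursion parameter

print(f"theta = {np.degrees(theta):.1f} deg, gamma = {gamma:.2f}, "
      f"Fr = {Fr:.3f}, U0/(omega*ls) = {excursion:.3f}")
```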
9.2 Laboratory Models

Most laboratory studies have focused on the process of internal tide generation in a continuously stratified fluid where, near critical points (γ = 1), the internal tide manifests itself as a beam-like structure emanating locally parallel to the bottom from the critical point on the topography (e.g. Gostiaux and Dauxois 2007; Peacock et al. 2008; Zhang et al. 2008). Two recent studies (Lim et al. 2008, 2010) have examined the generation process by varying both γ and the intensity of the forcing relative to turbulent stirring. Characterizing the turbulence in the near-bottom boundary layer with an eddy viscosity K, the effect of forcing can be characterized by a local Reynolds number defined as Re = U0²/(NK) (Legg and Klymak 2008). The upper range of barotropic forcing, and hence Re, examined in these studies is considerably larger than in previous experimental studies.
Fig. 9.1 Configuration of the laboratory experiments of Lim et al. (2008, 2010). The vertically oscillating plunger at the left end generates an oscillating barotropic flow over the slope/shelf topography at the other end of the tank. Experiments were done with (a) a two-layer density stratification and (b) a continuous density stratification
The experimental configuration is shown in Fig. 9.1, where both idealized two-layer and continuously stratified versions of the stratification were used. In the two-layer experiments, Lim et al. (2008) documented differing responses delineated by two parameters: a Froude number Fr = U0/√(g hE) and the layer depth ratio on the shelf β = h1/(h1 + h2S) (see Fig. 9.1). Their classification scheme is shown in Fig. 9.2. If the upper layer depth on the shelf was thin (i.e. β < 0.5), linear internal waves of depression moved onto the shelf. Only if the lower layer depth on the shelf was thin (i.e. β > 0.5) were strongly non-linear waves observed, with both surges and distinct bores present. In this latter category the Froude number also became important, and with strong tidal forcing, when Fr → 1, there was no internal wave response observed at all on the shelf. In the continuous stratification case, three types of basic flow response were observed by Lim et al. (2010) over the parameter range of the experiments (0.3 < γ < 2.2, 1 < Re < 480): beams, bolus structures, and finally no waves. The presence of both a critical point in the domain and a stable boundary layer with flow parallel to the boundary were found to be fundamental criteria for beam generation. With the oscillating mean flow locally parallel to the boundary, for locally critical conditions, the movement of fluid in the bottom boundary layer was along the direction
Fig. 9.2 Two-layer regime classification scheme (from Lim et al. 2008). The Froude number Fr = U0/√(g hE), where the equivalent depth hE = h1 h2S/(h1 + h2S), β = h1/(h1 + h2S), and the depth scales are defined in Fig. 9.1a
of the wave characteristic slope (see Eq. (9.1)), tangent to the local bottom slope. If there was no critical slope present in the domain, then there was no internal beam generation. The region of beam generation was shown to occur within a finite length of the slope (0.75 < s/scrit < 1.3), and this length was approximately twice the local near-bottom fluid excursion. The velocities along the wave characteristic were shown to be elevated, consistent with previous field (e.g. Holloway et al. 2001; Lien and Gregg 2001) and laboratory (e.g. Peacock et al. 2008; Zhang et al. 2008) observations. Increasingly energetic conditions led to the generation of a bolus (e.g. Venayagamoorthy and Fringer 2007), causing much overturning and stirring as it propagated upslope and dissipated. Lim et al. (2010) found that the overall flow behaviour varies with both Re and γ, but the two non-dimensional parameters can be combined to define a single generation parameter G:

G = Re/γ = (U0²/(N K S)) [ω²/(N² − ω²)]^(1/2) ≈ U0² ω/(N² K S)    (9.2)

where the simplification is valid since ω² ≪ N² in the laboratory.
Fig. 9.3 Regime diagram for the continuous stratification case (see also Lim et al. 2010). The parameters are γ = S/α and Re = U0²/(NK), and G is defined in Eq. (9.2)
The general trend is that as the forcing increases, the response changes from linear beams to nonlinear bolus features and finally to no waves at all. A summary of the observed behaviour is shown in Fig. 9.3. Beams required a critical slope to be present and were observed in the regime G < 80. The absence of beams beyond this range was due to the disruption of the flow field caused by the appearance of an upslope-propagating bolus, even though a local critical point was present. There is some overlap in regimes, but the bolus feature was formed over a wider range of forcing (5 < G < 600). Finally, for large forcing frequencies and amplitudes, no waves were formed as the rapidly oscillating barotropic forcing completely dominated the flow. In summary, distinct regimes are observed in both the two-layer and continuous stratification cases, and the character of the near-boundary flow in the generation region is important in determining the baroclinic response.
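A minimal sketch of how the generation parameter of Eq. (9.2) and the regime thresholds quoted above might be applied is given below. The input values are invented laboratory-scale numbers, not those of Lim et al. (2010), and the function names are hypothetical.

```python
def generation_parameter(U0, omega, N, K, S):
    """Generation parameter G of Eq. (9.2), using the laboratory limit omega^2 << N^2."""
    return (U0**2 * omega) / (N**2 * K * S)

def classify_response(G, critical_slope_present=True):
    """Qualitative regime following the thresholds quoted in the text (Lim et al. 2010)."""
    regimes = []
    if critical_slope_present and G < 80:
        regimes.append("internal beams")
    if 5 < G < 600:
        regimes.append("internal bolus")
    if not regimes:
        regimes.append("no waves (barotropic forcing dominates)")
    return regimes

# Illustrative (assumed) laboratory-scale values
U0, omega, N = 0.01, 0.5, 1.0      # m/s, rad/s, rad/s
K, S = 1.0e-6, 0.3                 # eddy viscosity (m^2/s), topographic slope

G = generation_parameter(U0, omega, N, K, S)
print(f"G = {G:.0f} ->", ", ".join(classify_response(G)))
```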
9.3 Field Scale Observations

In recent years there has been much focus on the process of internal wave generation and propagation on the NWS, from both an observational (e.g. Van Gastel et al. 2009) and a numerical modelling perspective (e.g. Meuleners et al. 2011). The actual density stratification on the NWS is a combination of the two idealised limits considered in the laboratory experiments above, and thus the observed response is
a combination of both the direct generation of large-scale internal waves in the thermocline offshore of the shelf break and beam-like internal waves propagating upwards from critical points on the slope, typically at 500 m depth, and being trapped in the ever-shallowing shelf waters. This is most clearly seen in the numerical experiments using ROMS described by Meuleners et al. (2011). ROMS was run in a domain of 800 km by 500 km along the coast with a nominal 2.2 km grid resolution. The model minimum depth was set to 20 m and there were 70 sigma layers in the vertical, staggered to increase resolution near the bottom. The domain had three open boundaries on the northern, southern and western extents; at these boundaries tidal forcing was applied using the TPXO.7.1 tide model, while daily averaged wind and heat fluxes were applied at the surface from the NCDC and NCEP/NCAR reanalysis data sets, respectively. Initial density fields and boundary conditions were supplied from BRAN version 2.1 (Oke et al. 2008). Runs were made in conditions representative of summer 2004 in order to compare with a field experiment conducted at the time (Van Gastel et al. 2009). Figure 9.4 shows a typical snapshot from the simulations along a transect extending offshore through the North Rankin A (NRA) oil and gas platform, the site of the field experiment in 124 m of water. The transect shows internal wave generation occurring near critical points at depths of 400–600 m, approximately 70 km seaward of NRA. Internal wave beams originating from this slope then forward-reflect, although note that the actual aspect ratio of the beam has vertical to horizontal scale ratios of approximately 1:50. Additionally, small-amplitude but large-horizontal-wavelength depressions of the thermocline occur offshore of the shelf break during the ebb tide.
Fig. 9.4 Snapshot of the total velocity field (colour scale on right; red onshore and blue offshore) overlain on isotherms (thin black lines) along a vertical transect at a bearing of 315° through the NRA platform. The time is 12 a.m. on 24 March 2004 from the ROMS solution. (See Van Gastel et al. 2009 for details)
This depression then self-steepens as the wave moves onshore. Model simulations over many tidal cycles show that it takes approximately 36 h (three tidal cycles) for an internal wave to travel from the offshore generation region to NRA, which is consistent with the steepening timescale estimate of Horn et al. (2001). The two forms of internal wave eventually form the highly non-linear, large-amplitude internal waves in the thermocline in shallow inshore waters seen at NRA, for example (Fig. 9.5). Waves of the type shown in Fig. 9.5 are significantly non-hydrostatic and cannot be described by hydrostatic codes like ROMS. Van Gastel et al. (2009) described local measurements in the 124 m water depth at NRA, and found peak wave amplitudes as large as 80 m and associated phase speeds of the packets of up to 1 m/s. These waves are stronger in summer than in winter and, as the modelling showed, due to the curved nature of the offshore bathymetry the process of wave generation by the tide is three dimensional along an arc of approximately 120 km in length, with a focussing of wave energy towards NRA.
Fig. 9.5 Observed large amplitude internal waves (LAIW) at NRA on 6 March 2004. Total water depth is 124 m. The upper panel shows density contours over 14 h, showing the arrival of LAIW of depression at about 0700 h. The lower panel shows the near-bottom, high-pass filtered (at 3 h) currents at 5 m ASB. (See also Van Gastel et al. 2009)
9.4 Internal Waves and Cyclones

In addition to tides, the NWS is also forced by tropical cyclones (TCs) during the summer season. Davidson and Holloway (2003) were the first to study the influence of TCs on internal tide generation on the NWS. The model study of Cyclone Bobby by Condie et al. (2009) examined the biological productivity and found that the phytoplankton response is limited by cyclone-induced sediment resuspension and hence the nutrient input to the water column. Recently, Zed (2007) used the Regional Ocean Modelling System (ROMS) to investigate the induced currents and the vertical mixing under the influence of TC Monty in 2004.
Fig. 9.6 ROMS modelling of the passage of cyclone Monty in February 2004. All panels show snapshots of the total velocity field (colour scale on right; red onshore and blue offshore) overlain on isotherms (thin black lines) along a vertical transect at a bearing of 315° through the NRA platform. Dates and times are shown on each panel: a 26 February at 2100 h; b 27 February at 0200 h; c 28 February at 2300 h; d 29 February at 0500 h; e 29 February at 1900 h. The closest approach of the cyclone to NRA was on 29 February at 0500 h, shown in (d)
Wind and pressure forcing for Monty was supplied as high spatial and temporal resolution data from the double vortex model CycWind (McConochie et al. 2004), with the wind stress on the water surface estimated using the wind-speed-dependent surface drag coefficient formulation used by Davidson and Holloway (2003). Simulations were conducted for two weeks (the period of the cyclone). The results (Fig. 9.6) showed that the wind forcing was so strong that large portions of the NWS were well mixed, essentially up to 150 m water depth and up to 150 km offshore (Fig. 9.6d). These conditions can persist, along with strong residual currents, for timescales of up to 10 days after the cyclone has crossed onto the coast and is no longer providing direct wind forcing. Thus, for these timescales of order 10 days, while near-inertial internal waves are directly excited by the wind, the absence of density stratification in the shallow shelf waters completely suppresses the formation of the normally dominant tidally generated internal waves. Indeed, until the stratification is able to reform, the tides are only able to induce a barotropic oscillation and no significant internal waves of tidal origin are seen over a large domain.
References

Baines PG, Fang XH (1985) Internal tide generation at a continental shelf/slope junction: a comparison between theory and a laboratory experiment. Dyn Atmos Oceans 9:297–314
Balmforth NJ, Ierley GR, Young WR (2002) Tidal conversion by subcritical topography. J Phys Oceanogr 32:2900–2914
Bell TH (1975) Topographically generated internal waves in the open ocean. J Geophys Res 80:320–327
Condie SA, Herzfeld M, Margvelashvili N, Andrewartha JR (2009) Modelling the physical and biogeochemical response of a marine shelf system to a tropical cyclone. Geophys Res Lett 36, L22603, p 6
Davidson FJM, Holloway PE (2003) A study of tropical cyclone influence on the generation of internal tides. J Geophys Res 108:3082
Garrett C, Kunze E (2007) Internal tide generation in the deep ocean. Annu Rev Fluid Mech 39:57–87
Gostiaux L, Dauxois T (2007) Laboratory experiments on the generation of internal tidal beams over steep slopes. Phys Fluids 19, 028102, pp 1–4
Griffiths SD, Grimshaw RHJ (2007) Internal tide generation at the continental shelf modeled using a modal decomposition: two-dimensional results. J Phys Oceanogr 37:428–451
Holloway PE, Chatwin PG, Craig P (2001) Internal tide observations from the Australian North West Shelf in summer 1995. J Phys Oceanogr 31:1182–1199
Horn DA, Imberger J, Ivey GN (2001) The degeneration of large-scale interfacial gravity waves in lakes. J Fluid Mech 434:181–207
Legg S, Huijts KMH (2006) Preliminary simulations of internal waves and mixing generated by finite amplitude tidal flow over isolated topography. Deep Sea Res II Top Stud Oceanogr 53:140–156
Legg S, Klymak J (2008) Internal hydraulic jumps and overturning generated by tidal flow over a tall steep ridge. J Phys Oceanogr 38:1949–1964
Lien RC, Gregg MC (2001) Observations of turbulence in a tidal beam and across a coastal ridge. J Geophys Res 106:4575–4591
Lim K, Ivey GN, Nokes RI (2008) The generation of internal waves by tidal flow over continental shelf/slope topography. Environ Fluid Mech 8:511–526
Lim K, Ivey GN, Nokes RI (2010) Experiments on the generation of internal waves over continental shelf topography. J Fluid Mech 663:385–400
Lueck RG, Mudge TD (1997) Topographically induced mixing around a shallow seamount. Science 276:1831–1833
McConochie JD, Hardy TA, Mason LB (2004) Modelling tropical cyclone over-water wind and pressure fields. Ocean Eng 31:1757–1782
Meuleners M, Ivey GN, Fringer O, Van Gastel P (2011) Tidally generated internal waves on the Australian North West Shelf. J Cont Shelf Res (submitted)
Munk W, Wunsch C (1998) Abyssal recipes II: energetics of tidal and wind mixing. Deep Sea Res Part I Oceanogr Res Pap 45:1977–2010
Munroe JR, Sutherland BR (2008) Generation of internal waves by sheared turbulence: experiments. Environ Fluid Mech 8:527–534
Oke PR, Brassington G, Griffin DA, Schiller A (2008) The Bluelink ocean data assimilation system. Ocean Model 21:46–70
Peacock T, Echeverri P, Balmforth NJ (2008) An experimental investigation of internal tide generation by two-dimensional topography. J Phys Oceanogr 38:235–242
Ray RD, Mitchum GT (1997) Surface manifestation of internal tides in the deep ocean: observations from altimetry and island gauges. Prog Oceanogr 40:135–162
Toole JM, Schmitt RW, Polzin KL, Kunze E (1997) Near-boundary mixing above the flanks of a mid-latitude seamount. J Geophys Res Oceans 102:947–959
Van Gastel P, Ivey GN, Meuleners M, Antenucci JP, Fringer OB (2009) Seasonal variability of the nonlinear internal wave climatology on the Australian North West Shelf. Cont Shelf Res 29:1373–1383
Venayagamoorthy SK, Fringer OB (2007) On the formation and propagation of nonlinear internal boluses across a shelf break. J Fluid Mech 577:137–159
Wunsch C, Ferrari R (2004) Vertical mixing, energy, and the general circulation of the oceans. Annu Rev Fluid Mech 36:281–314
Zed M (2007) Modelling of tropical cyclones on the north west shelf. Honours Thesis, School of Environmental Systems Engineering, University of Western Australia, p 128
Zhang HP, King B, Swinney HL (2008) Experimental study of internal gravity waves generated by supercritical topography. Phys Rev Lett 100, 244504, pp 1–4
Part IV
Modelling
Chapter 10
Eddying vs. Laminar Ocean Circulation Models and Their Applications
Bernard Barnier, Thierry Penduff and Clothilde Langlais
Abstract Mesoscale eddies are ubiquitous and very energetic features of the ocean circulation. They are represented in the high-resolution models used for ocean forecasting, but not yet in today's laminar, coarse-resolution ocean components of models of the climate system. However, advances in high performance computing are likely to change this in the near future, as the next decade should see the use of eddying models become more and more frequent in the broader context of Earth system modelling. This lecture discusses mesoscale eddies in models of different resolution. The course is organised as follows. Section 10.1 introduces the notion of mesoscale eddies with an illustration of the ubiquity of oceanic eddies from satellite observations. It then provides a definition of ocean mesoscale eddies by analogy with atmospheric synoptic eddies, and recalls the main impacts of ocean mesoscale eddies on the general circulation. Section 10.2 discusses some ocean model fundamentals that link the primitive equations with resolution, parameterisation and numerics. The separation between resolved and unresolved scales that results from the choice of grid resolution is discussed, a definition of eddying and laminar resolution models is provided, and the notion of subgridscale parameterisation is illustrated with the example of mesoscale eddies. Section 10.3 illustrates the tight link that exists between resolution and numerics. Examples are shown where the use of advanced numerical schemes improves model solutions in a more drastic way than increases in resolution. Section 10.4 uses the DRAKKAR hierarchy of global ocean circulation models (whose resolutions vary between 2° and 1/12°) to illustrate how changes in resolution impact the realism of model simulations, in terms of mean state and variability. The conclusion summarizes the major items discussed during the class.
10.1 Introduction

This course deals with resolution issues in numerical models of the ocean circulation, and more specifically with the representation by these models of mesoscale eddies. Two classes of models are therefore distinguished in this lecture. One class of models uses "coarse-resolution" grids, and the solutions produced are characterised by "laminar" dynamical regimes where mesoscale eddies cannot emerge. The other class of models uses "high-resolution" grids, and the solutions produced are characterised by "eddying" regimes where mesoscale eddies can emerge and develop. In this paper, the words "laminar" and "eddying" are used to qualify the dynamical regimes of the flows produced by the models, whereas the words "coarse" and "high" are used to label the resolution of the numerical grids.
10.1.1 Ubiquity of Eddies in the Ocean

The ubiquity of eddies in the ocean has been demonstrated by a large number of studies using satellite (e.g. altimetry, see Chelton et al. 2007) and in-situ instruments. This is illustrated in Fig. 10.1a, which presents the Sea Level Anomalies (SLA) observed by satellite altimetry on 19/05/2004. The ocean appears full of features with scales of a few hundred kilometres. These observed features typically live from a few days to a few months, with some features persisting for more than two years (Chelton et al. 2007). While atmospheric and topographic influences also play a role, these eddies are thought to be mostly generated by the instabilities of major currents; this explains why a greater concentration of strong mesoscale features is found in the vicinity of boundary currents and their offshore extensions (e.g. the Gulf Stream and Kuroshio), and in the Antarctic Circumpolar Current. These features are also ubiquitous in the centres of the subtropical gyres and in the eastern parts of mid-latitude ocean basins (also prone to dynamical instabilities). Mesoscale eddies are also generated along the equator (where they are significantly larger and often more anisotropic) and in the interior of the ocean (near topographic obstacles, or through shear instabilities, etc.). In short, modern observations reveal a "sea of mesoscale eddies" whose general characteristics relate to larger-scale circulation patterns, and which are separated by smaller (so-called submesoscale) dynamical structures with enhanced anisotropy.
10.1.2 Ocean Mesoscale Eddies: A Definition

Mesoscale variability in the ocean appears in a variety of transient features such as eddies, meanders, rings, waves and fronts, with space scales of a few tens to hundreds of kilometres and time scales of 10–100 days. Ocean eddies spontaneously arise from the hydrodynamic instability of the major large-scale current systems, as do
Fig. 10.1 Sea level anomaly (in metres) on 19/05/2004, (a) as observed by satellite altimetry (Aviso product), and as simulated by (b) a 1/4° eddy-permitting global model (Drakkar model ORCA025, run series G70) and (c) a 2° coarse-resolution global model (Drakkar model ORCA2, run series G70). Both models use the same numerical code (NEMO) and the same atmospheric forcing (DFS3, see text)
the atmospheric synoptic features from instability of the large-scale wind systems. Ocean mesoscale eddies are often described as being the "weather system" of the global ocean, by a dynamical analogy with the synoptic features of the atmosphere (McWilliams 2008). Let us consider a mean oceanic or atmospheric flow in a vertically stratified fluid. The vertical stratification is characterised by the Brunt–Väisälä frequency N:
N² = (g Δρ)/(ρ0 H)    (10.1)
where H is the characteristic scale of the vertical shear of the mean flow (e.g. the thickness of the ocean main thermocline or the height of the atmospheric troposphere), ρ0 a reference density, Δρ the bulk density difference over H, and g the gravitational acceleration. Let us define U as the characteristic eddy velocity scale, L as the characteristic eddy horizontal scale, and f the Coriolis parameter. Dynamically, ocean mesoscale (and atmospheric synoptic) eddies can be defined as features that:
R0 = U/(f L) ≪ 1    (10.2)
• have a characteristic velocity that is small compared to the celerity of (internal) gravity waves (i.e. a small Froude number):

Fr = U/√(g′H) = U/(NH) ≪ 1,  with g′ = g Δρ/ρ0    (10.3)
• are generated by instabilities of the large-scale flow and, as such, are equally influenced by stratification and rotation (i.e. an order-one Burger number):

Bu = R0²/Fr² = (NH/(f L))² = O(1)    (10.4)
The characteristic eddy horizontal scale can be readily estimated from the condition Bu = 1 as:

L = NH/f    (10.5)
Typical atmospheric values of N = 10⁻² s⁻¹ and H = 10⁴ m yield a mid-latitude (f = 10⁻⁴ s⁻¹) synoptic eddy length scale of Latm = 1,000 km. Typical ocean values of N = 5 × 10⁻³ s⁻¹ and H = 10³ m yield a mesoscale eddy length scale of Loce = 50 km. The typical eddy length scale is therefore 20 times smaller in the ocean.
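A few lines of Python reproduce these scale estimates from Eq. (10.5), using the values quoted in the text.

```python
def eddy_scale(N, H, f=1e-4):
    """Characteristic eddy length scale L = N*H/f of Eq. (10.5), in metres."""
    return N * H / f

L_atm = eddy_scale(N=1e-2, H=1e4)   # atmosphere: ~1,000 km
L_oce = eddy_scale(N=5e-3, H=1e3)   # ocean:      ~50 km
print(f"L_atm = {L_atm/1e3:.0f} km, L_oce = {L_oce/1e3:.0f} km, ratio = {L_atm/L_oce:.0f}")
```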
The dynamical impact of these eddies on the global circulation is likely to be very different, at least quantitatively, in the two fluids, and the analogy between ocean and atmospheric eddies might not hold beyond this scale analysis. For example, it is clear that atmospheric eddies, because of their much greater size, are much more efficient at transporting heat from subtropical to subpolar latitudes. Indeed, atmospheric synoptic eddies are responsible for almost the whole poleward heat transport between 30° and 60° latitude, and the necessity to resolve these features in atmospheric general circulation models was never questioned. The smaller size of ocean eddies suggests a weaker effectiveness in transporting heat (and salt) poleward, especially since a large part of the meridional heat transport is accomplished (at least in part of the northern hemisphere ocean) by mean currents flowing poleward along continents (continents allow the maintenance of strong, localized zonal pressure gradients in the ocean that are geostrophically balanced with poleward boundary currents). Although the importance of the eddy transport is recognized, the necessity of resolving eddies in ocean general circulation models (OGCMs) is still under debate, and the models used for climate prediction are still laminar, i.e. they use coarse-resolution grids and rely on parameterisations to account for the effects of mesoscale eddies on the larger scales.
10.1.3 Importance of Mesoscale Eddies

Eddies are ubiquitous features in the ocean and contain a large part of the ocean's kinetic energy. Similarly to atmospheric weather systems, their transport of energy is crucial to the dynamical balance of the circulation at the global scale. They are important because they feed back on the large-scale circulation and make a significant contribution to the total ocean heat fluxes. Eddy processes also have an important impact on the generation and maintenance of strong currents and fronts and on the physical and biogeochemical properties of water masses, as they are major actors in air-sea exchanges, isopycnal dispersion and mixing, density re-stratification, ventilation and subduction, the energy cascade and dissipation, topographic form stress, etc. (McWilliams 2008). Eddies are also a source of intrinsic climate variability. They have great impacts on marine ecosystems, and are very important for operational ocean applications such as marine safety, pollution dispersion, the offshore industry, fisheries, etc.
10.2 Some Resolution Issues in Ocean Models

10.2.1 Resolved and Unresolved Scales of Motion

Ocean model fundamentals have recently been thoroughly revisited in a series of papers (Griffies 2004; Griffies et al. 2005; Treguier 2006; Griffies and Adcroft 2008). The present course focuses on the resolution issue, and the reader should refer to the above papers for more detail on the equations and numerical algorithms used in numerical models of the ocean general circulation.
Ocean general circulation models usually solve the so-called primitive equations (e.g. Madec 2008), which are an approximation of the Navier-Stokes equations, together with a nonlinear equation of state which couples two active tracers (temperature T and salinity S) with density ρ. The most important assumptions, based on scale considerations, are (i) the thin-shell (or shallow water) approximation (the ocean depth is much smaller than the earth's radius), (ii) the Boussinesq approximation (density variations are neglected except in their contribution to the buoyancy force), (iii) the hydrostatic hypothesis (the vertical momentum equation is reduced to a balance between the buoyancy force and the vertical pressure gradient), and (iv) the incompressibility hypothesis (the three-dimensional velocity vector is non-divergent). For the purpose of this course, the Primitive Equations (PEs) are written as in Treguier (2006):

∂Y/∂t + u·∇Y + F(Y) = 0    (10.6)
u = (u, v, w) is the 3D velocity vector, Y = (u, T, S) is the prognostic continuous state vector of the ocean, and F(Y) includes all other terms of the PEs, including the Coriolis force, the pressure gradient force, the external forcing, etc. Because this lecture focuses on mesoscale eddies, we shall consider that the F(Y) term in (10.6) also includes the parameterisation of diapycnal mixing induced by small-scale 3D turbulence (see Large et al. 1994; Large 1998 for reviews of small-scale turbulence closure models). A more standard form of the primitive equations is given in the Appendix. Equation (10.6) is solved numerically, which means that the PEs are discretised on a grid using finite difference schemes (or other numerical methods). Solving the PEs numerically means applying a "discretisation" operator ( )R to the state vector Y and its equation of evolution (10.6), which yields:
∂YR/∂t + (u·∇Y)R + (F(Y))R = 0    (10.7)
where YR = (Y)R is the model solution (i.e. a discrete representation of the state of the ocean). Following Treguier (2006), (10.7) can be rewritten as:

∂YR/∂t + uR·∇YR + FR(YR) = −[(u·∇Y)R − uR·∇YR] − [(F(Y))R − FR(YR)]    (10.8)

The left-hand side of (10.8) is the evolution equation of the resolved state of the ocean; the right-hand side represents the effects of the unresolved scales on the resolved state of the ocean.
The numerical model integrates (10.8) in time, providing successive values of YR, the discrete state of the ocean, at spatial scales larger than the grid step and at discrete times.
Note that the evolution equation for the discrete state (10.8) has the same left-hand side as its continuous counterpart (10.6), but with additional contributions on the right-hand side (RHS) that represent the effects of unresolved scales on the resolved model solution. The definition of the resolved and unresolved scales involves averaging operators (Griffies 2004). The RHS of (10.8) is generally unknown. The solution used in models to calculate this term often consists of an empirical relationship or a physically based model (i.e. a parameterisation, or subgridscale model). Such models express the contribution of unresolved processes to the resolved state following an approach similar to the "turbulent closure hypothesis" that yields higher-order moments as a function of lower-order moments (Lesieur 2008).
10.2.2 Eddying Versus Laminar Ocean Models

Choosing the grid resolution of an ocean general circulation model (OGCM) is formally equivalent to the choice of an appropriate averaging operator (low-pass filtering at the grid step) and an appropriate approach to estimate the contribution of smaller scales (i.e. the RHS of (10.8)). If the operator ( )R has the properties of a Reynolds operator, i.e. if the unresolved (or subgridscale) part of the ocean state vector Y, defined as Y′ = Y − YR, verifies (Y′)R = 0 (and if the flow verifies properties of stationarity and ergodicity, see Lesieur 2008 for details), then the part of the unresolved effects that corresponds to the nonlinear advection (i.e. the first term in the RHS of (10.8)) can be expressed in the form of a divergence of eddy fluxes:

(u·∇Y)R − uR·∇YR = ∇·(u′Y′)R    (10.9)

We do not discuss in this course the treatment of the second term of the RHS of (10.8), which includes the unresolved but nonetheless important effects of the forcing. In the following we assume that it is included in the term FR(YR). The discretised model Eq. (10.8) then becomes:
∂YR/∂t + uR·∇YR + FR(YR) = −∇·(u′Y′)R    (10.10)
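To make the unresolved flux on the RHS of (10.10) concrete, the sketch below diagnoses a subgrid heat flux from a synthetic high-resolution field by simple box averaging. The box filter is only a crude stand-in for the discretisation operator ( )R, and the random fields and coarse-graining factor are illustrative assumptions, not the formalism of Griffies (2004).

```python
import numpy as np

def box_average(field, n):
    """Coarse-grain a 2D field by averaging over n x n boxes (a crude stand-in for ( )_R)."""
    ny, nx = field.shape
    f = field[:ny - ny % n, :nx - nx % n]
    return f.reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))

# Synthetic high-resolution zonal velocity (m/s) and temperature (deg C); purely illustrative
rng = np.random.default_rng(1)
u = 0.1 + 0.05 * rng.standard_normal((128, 128))
T = 20.0 + 4.0 * (u - 0.1) + 0.1 * rng.standard_normal((128, 128))  # T correlated with u

n = 16                                  # coarse-graining factor
uT_R = box_average(u * T, n)            # (uT)_R: coarse average of the full product
u_R, T_R = box_average(u, n), box_average(T, n)
eddy_flux = uT_R - u_R * T_R            # (u'T')_R: the part the coarse grid cannot resolve

print("mean resolved flux u_R*T_R :", float((u_R * T_R).mean()))
print("mean subgrid flux (u'T')_R :", float(eddy_flux.mean()))
```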
The RHS of (10.10) (namely the "Reynolds stresses" if Y is momentum, and the turbulent heat fluxes if Y is potential temperature) is a matter for eddy parameterisation. When choosing the resolution of a model, one has to answer the question of which "eddy effects" should be explicitly simulated to address the given scientific question. For example, one expects different answers when forecasting ocean currents and fronts in the short term (in which case eddies should be resolved explicitly, requiring fine, computationally expensive grids), or when simulating multi-decadal changes of the ocean meridional heat transport (in which case eddy effects may be parameterised on coarse, computationally efficient grids).
Fig. 10.2 Variation of the horizontal grid resolution (in km) with latitude in the Drakkar hierarchy of global models (resolutions from 2° to 1/4°). The dashed green line shows the variation of the first radius of deformation (approximately the eddy length scale). The full (dashed) black lines show the size of the computational mesh in the meridional (zonal) direction. The coarse-resolution models (2°, 1° and 1/2°) have a meridional mesh that is finer than the eddy scale in the equatorial band only (local meridional refinement of the 2° and 1° grids). The 1/4° (eddying) model has a grid mesh finer than the first radius of deformation between 40°N and 40°S. An "eddy resolving" model at all latitudes should aim at a resolution of the order of 10 km or better at the equator. This is almost obtained by the 1/12° Drakkar model configuration under development (adapted from Penduff et al. 2010)
A model will be "eddying" if it uses:
• a horizontal grid mesh whose resolution is fine enough to let mesoscale dynamics emerge, i.e. baroclinic and barotropic instability processes are explicitly (albeit partly) resolved;
• an appropriate representation of the effects of the unresolved (smaller) scales on the resolved (mesoscale) features.
In practice, this means having a grid mesh finer than the eddy length scale (see Fig. 10.2; typically a grid mesh of the order of 10 km or finer for an eddy scale (10.5) of 50 km). The effects of the unresolved scales on the mesoscale dynamics are often parameterised by a hyper-viscosity (e.g. biharmonic), an approach that ensures numerical stability but that is not fully satisfactory physically. Research is underway to develop more consistent alternatives (e.g. Frisch et al. 2008).
A model will be "laminar" if it uses:
• a coarse horizontal grid mesh whose resolution does not let mesoscale features emerge;
• an appropriate representation of the effects of the mesoscale features on the resolved (e.g. basin-scale) features.
In practice, this means having a mesh coarser than the eddy length scale (see Fig. 10.2; typically a grid mesh of the order of 50–100 km for an eddy scale (10.5) of 50 km). In coarse-resolution modelling, the effects of the mesoscale on the large-scale dynamics rely on parameterisations that should account for both the diffusive and the advective effects of mesoscale eddies. A simple practical check of whether a given grid mesh is eddy-permitting at a given latitude is sketched below.
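As a rough quantitative version of this criterion, the sketch below compares the zonal mesh size of a regular latitude-longitude grid with an assumed local first-baroclinic Rossby radius. The 30 km radius used at 45° is an assumption for illustration, not a value read off Fig. 10.2.

```python
import numpy as np

def zonal_grid_spacing_km(nominal_deg, lat_deg):
    """Zonal mesh size of a regular lon-lat grid of 'nominal_deg' resolution at a given latitude."""
    earth_radius_km = 6371.0
    return np.radians(nominal_deg) * earth_radius_km * np.cos(np.radians(lat_deg))

def is_eddy_permitting(nominal_deg, lat_deg, rossby_radius_km):
    """Crude test: the mesh should be finer than the local eddy scale (~first Rossby radius)."""
    return zonal_grid_spacing_km(nominal_deg, lat_deg) < rossby_radius_km

# Illustrative mid-latitude check, assuming a Rossby radius of ~30 km at 45 degrees
for res in (2.0, 1.0, 0.5, 0.25, 1.0 / 12.0):
    dx = zonal_grid_spacing_km(res, 45.0)
    print(f"{res:>6.3f} deg grid: dx = {dx:6.1f} km, "
          f"eddy-permitting at 45N: {is_eddy_permitting(res, 45.0, 30.0)}")
```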
Note that the same Eq. (10.10) is integrated in time in both "eddying" and "laminar" models, but the unresolved features have different spatial scales in each case: their impact on the resolved scales must therefore be represented by distinct parameterisations in the two classes of models (opening two separate lines of model development). The next section presents a parameterisation of mesoscale eddies that is widely used in laminar models.
10.2.3 Parameterisation of Mesoscale Eddies in Laminar Models

The representation of the effects of mesoscale eddies on the large-scale ocean circulation is a key issue for the ocean models used to model the climate system. As mentioned in the introduction, these effects are numerous. However, there is no parameterisation that accounts for all of them, and the parameterisation of mesoscale eddies is an active area of fundamental research (Eden and Greatbatch 2008; Zhao and Vallis 2008). The ability of mesoscale eddies to mix tracers along isopycnal surfaces is, among all eddy properties, the one that has the greatest impact on the density and tracer distributions at large scales, and it must be parameterised in coarse-resolution ocean climate models. In this course, we discuss a now classical approach to mimic this effect. Writing (10.10) for potential temperature T yields:
∂TR/∂t + uR·∇TR = −∇·(u′T′)R + DT + FT    (10.11)
Consistent with the notations used in this course, TR and uR = (uR, vR, wR) are the potential temperature and current velocity vector calculated by the model on a coarse discrete mesh, so (u′T′)R represents the unresolved (mesoscale) eddy fluxes of heat, whose divergence must be estimated to close the equation. The terms DT and FT denote the diapycnal fluxes and the forcing (the FR(YR) term of (10.10)). Parameterising the mesoscale eddy fluxes means formulating their effects on the model solution TR with a physically based theoretical model. Such a model commonly consists of a relationship linking the eddy fluxes (u′T′)R with the gradients of the resolved scales ∇TR. This relationship can be formally written in tensor form as:

−(u′T′)R = −((u′T′)R, (v′T′)R, (w′T′)R) = τij ∂TR/∂xj    (10.12)

where τij is the 3 × 3 mixing tensor (with components τxx, τxy, τxz, τyx, τyy, τyz, τzx, τzy, τzz), ∂TR/∂xj = (∂TR/∂x, ∂TR/∂y, ∂TR/∂z), and x, y, z indicate the principal mixing directions (here, for simplicity, the principal axes of the coordinate system). Following Müller (2006),
the mixing tensor is split into a symmetric part, Kij, and an anti-symmetric part, Sij, such that (10.12) can be written as:
τij ∂TR/∂xj = Kij ∂TR/∂xj + Sij ∂TR/∂xj    (10.13)
Still following Müller (2006), the contribution of the symmetric tensor Kij to the flux divergence can be expressed in the form of a Laplacian diffusion of heat (as KT∆TR), and that of the anti-symmetric (or skew) tensor Sij can be expressed as a simple advection of heat by a skew (or bolus) velocity vector V* (as −V*⋅∇TR). The divergence of the eddy fluxes (10.12) may thus be written as:
−∇·(u′T′)R = KT ΔTR − V*·∇TR    (10.14)
The challenge of the parameterisation of the mesoscale eddy fluxes is then reduced to the determination of KT (a turbulent diffusion coefficient) and of V*, the bolus velocity. In practice in ocean models, the Laplacian diffusion of heat (the first term on the RHS of (10.14)) acts only along isopycnal surfaces (Redi 1982), to account for the interior isopycnal mixing by mesoscale eddies. Its contribution in the diapycnal direction is neglected compared to the vertical mixing induced by small-scale 3D turbulence. The value of the diffusion coefficient KT is generally user- and application-dependent, and may be constrained by numerical stability considerations. An expression for the second term on the RHS of (10.14) is provided by the GM90 parameterisation (Gent and McWilliams 1990), which mimics the effects of the (unresolved) eddy advection of heat on the resolved (large-scale) buoyancy field. This parameterisation uses the local isopycnal slopes to define a 3D, non-divergent bolus velocity V* = (u*, v*, w*) of the following form:

(u*, v*) = K* ∂/∂z [∇HρR/(∂ρR/∂z)],   w* = −∇H·[K* ∇HρR/(∂ρR/∂z)],   ∇·V* = 0    (10.15)
where ρR is the resolved density field. The effect of V* is to release potential energy from the large-scale flow in a way that is physically consistent with the way mesoscale eddies generated by baroclinic instability flatten isopycnals, i.e. extract potential energy from the mean flow. In other terms, eddies induce a downgradient diffusion of the thickness of isopycnal layers along isopycnal surfaces, with a diffusion coefficient K*. In summary, the temperature equation that is solved with the GM90 eddy parameterisation has the following form (the same holds for salinity):
∂TR/∂t + (uR + V*)·∇TR = KT ΔρTR + FT    (10.16)
Δρ indicates the two-dimensional Laplacian operator acting along the local isopycnal surfaces defined by Redi (1982). All terms in (10.16) are now expressed
as a function of the resolved variables TR and uR; the eddy effects are accounted for by the Laplacian diffusion term and the bolus velocity term. Both of these terms have been derived under the physical assumption that eddies mix properties along isopycnal surfaces, and have been expressed using the mathematical formalism of the mixing tensor. The parameterisation of the eddy fluxes acting on the resolved density field is then reduced to the determination of the temperature and thickness diffusion coefficients, KT and K*. The evaluation of these coefficients is still partly empirical. The GM90 parameterisation has been shown to significantly improve the large-scale density field and circulation in coarse-resolution simulations, and is the most widely used parameterisation of mesoscale eddies in ocean climate models.
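As an illustration of the skew (bolus) transport idea behind (10.15), the sketch below evaluates an eddy-induced velocity on a synthetic two-dimensional latitude-depth density section using the common streamfunction form ψ = K* S, with S the isopycnal slope. The sign convention, the slope clipping and all numerical values are assumptions made for the example; this is not the NEMO or Drakkar implementation.

```python
import numpy as np

# Synthetic 2D (y, z) density section: a front with sloping isopycnals (illustrative only)
ny, nz = 80, 40
y = np.linspace(0.0, 800e3, ny)           # m
z = np.linspace(-2000.0, 0.0, nz)         # m, z positive upwards
Y, Z = np.meshgrid(y, z, indexing="ij")
rho = 1027.0 - 2.0e-3 * Z - 0.5 * np.tanh((Y - (400e3 + 150.0 * Z)) / 100e3)  # kg/m^3

kappa = 1000.0                            # thickness diffusivity K* (m^2/s), a typical order of magnitude

# Isopycnal slope S = -(drho/dy)/(drho/dz) and a GM-style streamfunction psi = kappa * S
drho_dy = np.gradient(rho, y, axis=0)
drho_dz = np.gradient(rho, z, axis=1)
slope = -drho_dy / drho_dz
slope = np.clip(slope, -1e-2, 1e-2)       # crude slope limiting near weak stratification
psi = kappa * slope

# Bolus (eddy-induced) velocities from the streamfunction: v* = -dpsi/dz, w* = dpsi/dy
v_star = -np.gradient(psi, z, axis=1)
w_star = np.gradient(psi, y, axis=0)

print("max |v*| = %.2e m/s, max |w*| = %.2e m/s" % (np.abs(v_star).max(), np.abs(w_star).max()))
```

By construction the pair (v*, w*) derived from a streamfunction is non-divergent, mirroring the ∇·V* = 0 constraint of (10.15).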
10.3â•…Advanced Numerical Schemes and Resolution Although the GM90 parameterisation and its variants evolutions are a recognized improvement for coarse resolution models, many eddy effects are not yet accounted for in these models. In addition, more and more ocean model applications require an explicit resolution of mesoscale eddies, especially in ocean forecasting. Consequently, the use of eddying models is rapidly growing. But increasing resolution is not as simple as increasing the grid mesh. As finer space and time scales appear in model solutions when resolution increases, the subgridscale parameterisations need to evolve (since they will have to account for the effects of different unresolved physical processes, e.g. submesoscale effects for eddying models). Numerical algorithms used to solve the equations may also need to be adapted to the new physical processes arising, although the formulation of the problem does not change. This latter aspect that links resolution and numerics is illustrated in this part of the course with the momentum advection scheme. The momentum equation of the PEs, written in a standard form (see appendix for notations), is: Zonal momentum
1 ∂P ∂u + (u · ∇)u − f v = − + Du + Fu ∂t ρ0 ∂x
(10.17a)
Meridional momentum
1 ∂P ∂v + (u · ∇)v + f u = − + Dv + Fv ∂t ρ0 ∂y
(10.17b)
The (u·∇)u and (u·∇)v terms in (10.17) are nonlinear terms that represent the advection of momentum by the flow. There is a variety of numerical schemes for calculating these terms in finite differences, and we illustrate here the impact they can have on the solution of a numerical model. We compare the effects of three different second-order advection schemes on the solution of an eddy-permitting (1/4°) model that uses the NEMO OGCM (Madec 2008).
The schemes (presented in detail in Le Sommer et al. 2009) have different mathematical formulations, which are briefly described hereafter:
• The EFX scheme: written in the form of a divergence of a flux, this scheme is energy conserving.
• The ENS scheme: written in the form of a gradient of kinetic energy and a vorticity term, this scheme is enstrophy conserving.
• The EEN scheme: also written in the form of a gradient of kinetic energy and a vorticity term, it uses a larger stencil than ENS. This scheme is both energy and enstrophy conserving.
Sensitivity experiments with these advection schemes have been performed with the Drakkar model configurations (based on the NEMO OGCM (Madec 2008), see Sect. 10.4). The coarse-resolution (laminar) configurations (2° or 1°) showed a weak sensitivity to the choice of scheme, as expected from the minor contribution of the (u·∇)u terms at large scales (very small Rossby number). However, the eddy-permitting configuration (1/4°) proved to be very sensitive to this choice, as illustrated in Fig. 10.3. The simulated mean circulation is significantly modified in pattern and amplitude in regions of strong currents. The Gulf Stream, for example, is significantly shifted southward with the EEN scheme compared to the ENS scheme (Fig. 10.3). As demonstrated by Le Sommer et al. (2009), compared to the two other schemes the EEN scheme was found to reduce the noise in the vertical velocity field near the bottom cells. Enhanced continuity of the mean currents and enhanced topographic rectification effects were also diagnosed with the EEN scheme. This might have contributed to the improved western boundary currents and to the significant reduction of the inertial eddy at Cape Hatteras (see also Barnier et al. 2006; Penduff et al. 2007). The momentum advection scheme was also found to impact the trajectories of Agulhas Rings in the Benguela Basin (Barnier et al. 2006). These trajectories tend to be spuriously straight, deterministic and invariable in several eddying models, including the POP model (1/10°) and the 1/4° Drakkar model with ENS. The use of the EEN scheme in the 1/4° Drakkar model substantially reduced this widely found inconsistency, yielding much more realistic (i.e. more chaotic and irregular) Ring shedding events and trajectories in the South Atlantic, as shown by the patterns of eddy kinetic energy (Fig. 10.4). Other examples (Barnier et al. 2006; Penduff et al. 2007) confirm that the use of advanced numerical schemes (such as a partial-step representation of topography in z-coordinate models) may improve model solutions in a more drastic way than an increase in resolution.
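The conservation properties that distinguish the EFX, ENS and EEN schemes are usually monitored through the domain-integrated kinetic energy and enstrophy of the resolved flow. The sketch below computes these two diagnostics for a doubly periodic synthetic eddy field; it is a generic diagnostic under assumed grid and field values, not the NEMO discretisation itself.

```python
import numpy as np

def ke_and_enstrophy(u, v, dx, dy):
    """Domain-integrated kinetic energy and enstrophy of a doubly periodic 2D flow."""
    # relative vorticity zeta = dv/dx - du/dy (centred differences with periodic wrap)
    dvdx = (np.roll(v, -1, axis=1) - np.roll(v, 1, axis=1)) / (2.0 * dx)
    dudy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / (2.0 * dy)
    zeta = dvdx - dudy
    area = dx * dy
    ke = 0.5 * np.sum(u**2 + v**2) * area      # kinetic energy per unit density
    ens = 0.5 * np.sum(zeta**2) * area         # enstrophy
    return ke, ens

# Illustrative eddy-like field on a 100 km x 100 km doubly periodic domain
n, L = 64, 100e3
dx = dy = L / n
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x, indexing="xy")
psi = 1.0e4 * np.sin(2 * np.pi * X / L) * np.sin(2 * np.pi * Y / L)      # streamfunction (m^2/s)
u = -(np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2.0 * dy)   # u = -dpsi/dy
v = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2.0 * dx)    # v = +dpsi/dx

print("KE = %.3e m^4 s^-2, enstrophy = %.3e m^2 s^-2" % ke_and_enstrophy(u, v, dx, dy))
```

Tracking how these two integrals drift in time during a simulation is one simple way to see, in practice, which invariant a given advection scheme preserves.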
10.4 Impact of Resolution on Model Solution

In this part of the course, we use the Drakkar hierarchy of global ocean circulation models, which spans resolutions from 2° to 1/12°, to illustrate how changes in resolution impact the realism of model simulations.
Fig. 10.3 Difference in the mean barotropic streamfunction between two simulations performed with the ORCA025 (1/4°) Drakkar model. One simulation uses the EEN momentum advection scheme and its twin uses the ENS momentum advection scheme (the difference EEN minus ENS is shown). Large differences (i.e. greater than ±10 Sv) are only found in regions of strong nonlinear currents
10.4.1 DRAKKAR Hierarchy of Model Configurations

Drakkar is a cooperation that gathers the resources and expertise of several research and operational oceanography groups in Europe, with the objective of developing, sharing and improving a hierarchy of global ocean/sea-ice model configurations that can be used for research and operational applications (Drakkar Group 2007). Drakkar uses the NEMO modelling system (Madec 2008)¹ and the AGRIF grid refinement software (Debreu et al. 2008).²
1 NEMO includes an ocean model, a sea-ice model, and a module simulating the evolution of geochemical passive tracers (i.e. 14C, CFC11, SF6).
Fig. 10.4 Mean surface eddy kinetic energy (eke, in cm² s⁻²) around South Africa, (a) as observed by satellite altimetry (Ducet et al. 2000), and as simulated by (b) the global ORCA025 Drakkar model with the EEN scheme, (c) the global POP 1/10° model, and (d) the global ORCA025 Drakkar model with the ENS scheme. All model results show velocity variances computed over 3 years
Fig. 10.5 Tripolar grid of the DRAKKAR ORCA025 configuration (1/4° resolution at the equator) with 1 point plotted every 12 points (in total 1,442 × 1,021 grid points). This eddy-permitting configuration is used by MERCATOR-Ocean for operational forecasts
Drakkar also contributes to the continuous development of NEMO. Drakkar has implemented a hierarchy of global and regional NEMO configurations using the tripolar ORCA grid (Madec and Imbard 1996) (Fig. 10.5). Global simulations have been performed at 2, 1, 1/2, 1/4, and 1/12° horizontal resolution. Every configuration uses domain decomposition (up to ~1,000 processors) to run on massively parallel computers. The main characteristics of the model hierarchy are summarized in Table 10.1. A detailed description of the 1/4° ORCA025 configuration and of the model hierarchy may be found in Barnier et al. (2006) and Penduff et al. (2010), respectively. The bulk formulae used in the forcing function are those proposed by Large and Yeager (2004). The atmospheric forcing fields used in the Drakkar simulations come from the CORE data set (Large and Yeager 2008) and from the Drakkar Forcing Sets DFS3 or DFS4 (Brodeau et al. 2010). The DFS forcing uses 6-hourly ERA40 surface atmospheric variables to calculate the turbulent fluxes (wind stress, latent and sensible heat fluxes, evaporation), daily satellite radiation fluxes (downward short wave and long wave), and monthly precipitation from satellite estimates (a schematic bulk-formula example is sketched below). The various corrections applied to these data are described in Brodeau et al. (2010). Note that recent developments introduced the diurnal cycle of solar radiation and the contribution of ocean biology to the depth-dependent absorption of light. Most Drakkar simulations cover the period 1958–2004 (Drakkar Group 2007).

² See Biastoch et al. (2008) for an example of the application of AGRIF in the Agulhas retroflection region, Chanut et al. (2008) for an application in the Labrador Sea, or Jouanno et al. (2008) for an application in the Caribbean Sea.
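To make the bulk-formula step concrete, the fragment below computes the wind stress from 10-m winds and surface currents. It is a schematic sketch only: a constant drag coefficient is assumed for simplicity, whereas the DFS/CORE forcing uses the stability- and wind-speed-dependent transfer coefficients of Large and Yeager (2004).

```python
import numpy as np

def wind_stress(u10, v10, uo, vo, rho_air=1.22, cd=1.3e-3):
    """Schematic bulk formula for the wind stress [N/m2].

    u10, v10 : 10-m wind components [m/s]
    uo, vo   : ocean surface current components [m/s]
    cd       : constant drag coefficient (illustrative value only)
    """
    du, dv = u10 - uo, v10 - vo       # wind relative to the moving ocean surface
    wspd = np.hypot(du, dv)
    taux = rho_air * cd * wspd * du
    tauy = rho_air * cd * wspd * dv
    return taux, tauy
```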
Table 10.1 Main settings of the ocean component of the Drakkar hierarchy of global model configurations. The setting of the LIM2 sea ice model (Fichefet et al. 1997) is not described here
Most simulations used in this course are from the G70 series, in which the 2, 1, 1/2 and 1/4° Drakkar models have been driven with the DFS3 forcing over the 50-year period 1958–2007. These simulations are compared to two observational references: the in-situ ENACT-ENSEMBLES hydrographic database (EN3-v2a, Ingleby and Huddleston 2007) and the AVISO altimetric sea-level anomaly (SLA) database. For that purpose, a collocation algorithm based on a quadrilinear interpolation scheme in space and time subsamples the model outputs in exactly the same way as the observations; dedicated metrics are then used to compare the observed and simulated collocated databases (see Fig. 10.6).
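A minimal sketch of such a collocation step is given below, using SciPy's RegularGridInterpolator for the quadrilinear (time, depth, latitude, longitude) interpolation. For simplicity it assumes a regular rectilinear grid; the actual ORCA grids are curvilinear and require a more general horizontal search, so this is an illustration of the principle rather than the Drakkar tool itself.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def collocate(model_field, time, depth, lat, lon, obs_t, obs_z, obs_lat, obs_lon):
    """Quadrilinear interpolation of a (time, depth, lat, lon) model field to
    observation points, mimicking the model-observation collocation step.

    time, depth, lat, lon : 1-D coordinate arrays of the (regular) model grid
    obs_* : 1-D arrays of observation coordinates (same units as the grid axes)
    """
    interp = RegularGridInterpolator((time, depth, lat, lon), model_field,
                                     bounds_error=False, fill_value=np.nan)
    points = np.column_stack([obs_t, obs_z, obs_lat, obs_lon])
    return interp(points)    # model values subsampled exactly at the observations
```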
Fig. 10.6 Vertical structure of model temperature (top) and salinity (bottom) biases (relative to the period 2000–2004) in the ORCA025 Drakkar simulation driven with the DFS3 forcing (run series G70). The reference is the EN3-v2a hydrographic data set. In colour are the PDFs (in log scale) of temperature and salinity biases (x-axis) as a function of depth (y-axis). The median (green line) and the mode (white line) of these depth-dependent PDFs are superimposed (M. Juza et al. 2011, personal communication)
10.4.2 Some Impacts of Resolution Increase

The Drakkar hierarchy of models has proved very useful to assess the impact of grid resolution on the representation of climate-relevant ocean circulation features. Figure 10.1a shows an altimeter observation of the global sea-level anomaly (SLA) averaged over a week in May 2004, with a strong signature of mesoscale eddies. These mesoscale features are obviously absent at 2° (Fig. 10.1c), but are clearly visible and exhibit realistic patterns at 1/4° (Fig. 10.1b): from this mesoscale perspective, laminar and eddying ocean models do not simulate the "same" ocean. It is more interesting and relevant to compare these two classes of models with a focus on relatively large-scale features that are captured by both, such as basin-scale integrated climate indices, the spatially-smoothed mean horizontal circulation, and large-scale interannual variability (LSIV) patterns.
Fig. 10.7 Mean meridional overturning streamfunction in the Atlantic obtained in 4 Drakkar simulations at increasing resolutions (2, 1, 1/2 and 1/4°), driven with the same DFS3 forcing (run series G70). Contour interval is 2 Sv (Lecointre 2009)
The zonally-averaged meridional overturning circulation (MOC) and meridional heat transport (MHT) exhibit some sensitivity to resolution changes. While the meridional structure of the Atlantic MOC is barely changed from 2 to 1/4° (Fig. 10.7), the mean amplitudes of the MOC and MHT increase by about 25% (Table 10.2). However, the low-frequency variability of the MOC (Fig. 10.8) is remarkably similar among all simulations: although other climatic indices may differ significantly, this result shows that eddying models might not yield major changes in the simulation of the slow evolution of the MOC and MHT.
Table 10.2 Atlantic MOC and MHT estimated at 26°N by the various models of the Drakkar hierarchy, all driven by the same DFS3 forcing

Resolution    Atlantic MOC (26°N)    Atlantic MHT (26°N)
2° model      13 Sv                  0.68 PW
1° model      16 Sv                  0.73 PW
1/2° model    17 Sv                  0.80 PW
1/4° model    17 Sv                  0.88 PW
Fig. 10.8 Monthly mean variations of the Atlantic MOC (in Sv) at 26°N between 1958 and 2004 in 4 Drakkar simulations of increasing resolution (2, 1, 1/2 and 1/4°). All simulations are driven by the same DFS3 forcing (run series G70). The value of the AMOC is the value at 1,000 m of the overturning streamfunction shown in Fig. 10.7. The MERA curve comes from a regional model of the North Atlantic at 1/3° resolution (Lecointre 2009)
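A minimal sketch of how such an AMOC index can be diagnosed from model output follows. It assumes the meridional velocity is available as a plain (time, depth, lat, lon) array on a regular grid with known cell widths and layer thicknesses; sign and integration conventions differ between models, so this is illustrative only.

```python
import numpy as np

def amoc_timeseries_sv(v, dx, dz, lat, depth, lat0=26.0, z0=1000.0):
    """Atlantic MOC index: overturning streamfunction at (lat0, z0), in Sv.

    v     : (time, depth, lat, lon) meridional velocity [m/s], land = NaN
    dx    : (lat, lon) zonal width of grid cells [m]
    dz    : (depth,) layer thicknesses [m]
    lat, depth : 1-D coordinate arrays of the grid
    """
    vdx = np.nansum(v * dx, axis=-1)                 # zonal integral -> (time, depth, lat)
    layer_transport = vdx * dz[None, :, None]        # per-layer transport [m3/s]
    psi = np.cumsum(layer_transport, axis=1) * 1e-6  # integrate downward from the surface -> Sv
    jy = int(np.abs(lat - lat0).argmin())
    kz = int(np.abs(depth - z0).argmin())
    return psi[:, kz, jy]                            # monthly AMOC time series [Sv]
```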
Figure 10.9 compares the time-averaged, vertically-integrated (barotropic) streamfunction simulated by a 2° model and a 1/12° model. The solution of the 1/12° model has been smoothed and plotted onto the 2° grid. Both models roughly agree on the location of the large-scale gyres and the mean currents, but many differences can be seen in the horizontal circulation, e.g. the structure and extent of the subpolar gyre of the North Atlantic, the Confluence in the western South Atlantic, or the frontal structure of the Antarctic Circumpolar Current. Clearly, resolution significantly improves the realism of the simulated western boundary currents, the location of permanent fronts, and the amplitude of current velocities and transports. Since many of these improvements appear in regions of atmospheric cyclogenesis, one may expect significant (and potentially beneficial) changes between ocean/atmosphere coupled simulations using coarse and eddying ocean models.
Fig. 10.9 Mean barotropic transport streamfunction simulated by (a) the 2° resolution model ORCA2, and (b) the 1/12° resolution model ORCA12 (contour interval of 20 Sv). For this comparison the 1/12° solution has been smoothed with 100 passes of a Hanning filter and plotted on the 2° grid
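The smoothing applied to the 1/12° solution in Fig. 10.9 can be sketched as repeated passes of a 3 × 3 Hanning (Hann) filter, as below. This toy version ignores land masking and grid curvature, which a faithful reproduction of the figure would have to handle.

```python
import numpy as np
from scipy.ndimage import convolve

# 2-D Hanning kernel: outer product of the 1-D [1/4, 1/2, 1/4] weights
w1 = np.array([0.25, 0.5, 0.25])
kernel = np.outer(w1, w1)

def hanning_smooth(field, npasses=100):
    """Apply `npasses` passes of the 3x3 Hanning filter to a 2-D field."""
    out = np.asarray(field, dtype=float).copy()
    for _ in range(npasses):
        out = convolve(out, kernel, mode="nearest")
    return out
```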
Penduff et al. (2010) low-pass filtered observed and Drakkar-simulated time-dependent SLA fields over 1993–2004 to compare model skill in terms of large-scale interannual variability (LSIV), i.e. at scales larger than about 6° and timescales longer than 1.5 years. Successive increases in model resolution from 2 to 1/4° were shown to yield systematic improvements in LSIV features, in particular stronger interannual variability and systematic improvements in its geographical patterns. Again, this suggests that the (partial) resolution of mesoscale features yields more accurate eddy fluxes than mesoscale parameterizations, not only regarding the mean state but also its low-frequency variability. While basin-integrated quantities are moderately sensitive to resolution changes in this setup, their spatial and temporal patterns (hence their underlying dynamical origin) are strongly improved when the grid size decreases.
10.5 Conclusion

In this course, we have used satellite observations to illustrate the ubiquity of mesoscale variability in the ocean, in every basin and at all latitudes. We have provided a dynamical definition of the ocean mesoscale based on scaling arguments, which links the mesoscale to the general circulation and provides a characteristic scale of motion. We have listed the mesoscale processes that may have important consequences for the ocean general circulation and climate. On the modelling side, two ways of dealing with the eddy problem were presented. One is to resolve eddies (partially to fully) with computationally expensive fine grids; the other is to parameterise eddies on coarse but computationally efficient grids. This brought up the concept of resolved and unresolved scales: an ocean model provides a solution for the scales it resolves, and that solution depends on the representation of the unresolved scales. We have explained that the representation of the unresolved scales requires a "closure hypothesis" based on physical grounds, which is very different in laminar and eddying models. Both types of model solve the same equations, and similar numerical schemes may be used for both. However, the examples illustrate different sensitivities of model solutions to numerical schemes depending on resolution: as expected, high-resolution models show greater sensitivity to the numerical schemes used to solve the non-linear terms (e.g. the momentum advection scheme). Therefore, model development should closely associate increases in resolution with improvements in numerical schemes and in the parameterisation of subgrid scales. Although they solve the same equations, eddying and laminar models do not simulate the "same" ocean, for physical and numerical reasons. An example was shown where the coarse-resolution solution and the spatially-smoothed high-resolution solution are not equivalent (i.e. parameterisations still do not fully represent the unresolved scales). Increased resolution lets mesoscale turbulence develop and hence improves the consistency of the resolved physics and the realism of model solutions, especially the path of strong currents, their link with topography, the amplitude of current velocities, and the main features of interannual variability. However, some climate-relevant integrated quantities, such as the AMOC and the MHT, seem relatively less sensitive to resolution (e.g. the AMOC mean pattern and its interannual variability). Nevertheless, since air-sea interactions are localized, these results suggest that eddying ocean models should contribute to improving the physical consistency of future climate prediction systems. To conclude this course, one should emphasise that (mesoscale) eddy-resolving modelling is still in its infancy. Today's "eddy-resolving" global model resolutions reach 1/12° at the equator, but very few such models are used routinely today (most have coarser grids, i.e. up to about 1/4–1/10°). More generally, the sensitivity of eddying global models to forcing, parameters or numerical schemes remains largely unknown, and various research groups are still working on the parameterisation of unresolved (sub-mesoscale) features. Practically, the computational cost and storage requirements of eddying global models are large (even for present super-computers): a challenge for the next 10 years might be to carry out the transition
from O(1/4–1/10°) to O(1/12–1/16°) routine climate-oriented large-scale simulations. Because of computer limitations, climate models used in 10,000-year paleo-climatological hindcasts use (laminar) ocean components at coarser resolutions than those used for 100-year IPCC-like predictions. In turn, the resolution of these coupled ocean models cannot be as fine as those presently developed in Drakkar-like ocean-only eddying setups, or in operational models that are presently being used at much higher resolution (e.g. 1/32° or more) on regional domains. Coarse-resolution models, on the one hand, continuously benefit from parameterisations developed from high-resolution models. On the other hand, these models are efficient tools to improve certain eddying ocean model components (e.g. atmospheric forcing), and coupled ocean/atmosphere models provide eddying ocean modellers with essential feedbacks on air-sea interactions. In conclusion, "laminar", "eddy-permitting", and "eddy-resolving" ocean models require coordinated development efforts, since ocean modellers need a large range of tools depending on the application.

Acknowledgments We acknowledge the continuous support provided to the MEOM ocean modelling group in Grenoble by CNRS and CNES, and the very important support in supercomputing provided by GENCI (IDRIS and CINES). Support for this course has been provided by GMMC. Bernard Barnier is very grateful to Clothilde Langlais, who prepared and taught the practical work on the GM90 parameterisation associated with this course, and to Tim Pugh, who set up the computer facilities for the practical work so efficiently. We would like to thank Jean Marc Molines, Mélanie Juza and Albanne Lecointre, who provided important material for the course, and Anne Marie Treguier and Gurvan Madec, our partners in the Drakkar coordination.
Appendix

Formulation of the Primitive Equations in the usual Cartesian coordinate system (x, y, z) used in the course.
Definitions

T  potential temperature
S  salinity
ρ  density
u = (u, v, w)  velocity vector
P  pressure
f  Coriolis parameter
g  gravitational acceleration
D_T, D_S, D_u, D_v  diffusion/dissipation terms
F_T, F_S, F_u, F_v  forcing terms
(x, y, z)  coordinate system (eastward, northward, upward)
∇ = (∂/∂x, ∂/∂y, ∂/∂z)  gradient vector operator
Equations

Zonal momentum
$$\frac{\partial u}{\partial t} + (\mathbf{u}\cdot\nabla)u - fv = -\frac{1}{\rho_0}\frac{\partial P}{\partial x} + D_u + F_u$$

Meridional momentum
$$\frac{\partial v}{\partial t} + (\mathbf{u}\cdot\nabla)v + fu = -\frac{1}{\rho_0}\frac{\partial P}{\partial y} + D_v + F_v$$

Temperature
$$\frac{\partial T}{\partial t} + \mathbf{u}\cdot\nabla T = D_T + F_T$$

Salinity
$$\frac{\partial S}{\partial t} + \mathbf{u}\cdot\nabla S = D_S + F_S$$

Hydrostatic approximation
$$\frac{\partial P}{\partial z} = -\rho g$$

Non-divergence of the velocity vector u = (u, v, w)
$$\nabla\cdot\mathbf{u} = 0$$

Nonlinear equation of state
$$\rho = \rho(T, S, P)$$
References

Barnier B, Madec G, Penduff T, Molines J-M, Treguier AM, Le Sommer J, Beckmann A, Biastoch A, Böning C, Dengg J, Derval C, Durand E, Gulev S, Remy E, Talandier C, Theetten S, Maltrud M, McClean J, De Cuevas B (2006) Impact of partial steps and momentum advection schemes in a global ocean circulation model at eddy permitting resolution. Ocean Dyn. doi:10.1007/s10236-006-0082-1
Biastoch A, Böning CW, Lutjerharms JRE (2008) Agulhas leakage dynamics affects decadal variability in Atlantic overturning circulation. Nature. doi:10.1038/nature07426
Brodeau L, Barnier B, Penduff T, Treguier AM, Gulev S (2010) An ERA40 based atmospheric forcing for global ocean circulation models. Ocean Model 31:88–104
Chanut J, Barnier B, Large W, Debreu L, Penduff T, Molines JM, Mathiot P (2008) Mesoscale eddies in the Labrador Sea and their contribution to convection and restratification. J Phys Oceanogr 38:1617–1643
Chelton DB, Schlax MG, Samelson RM, de Szoeke R (2007) Global observations of large oceanic eddies. Geophys Res Lett. doi:10.1029/2007GL030812
Debreu L, Vouland C, Blayo E (2008) AGRIF: Adaptive Grid Refinement in Fortran. Comput Geosci 34(1):8–13
Ducet N, Le Traon PY, Reverdin G (2000) Global high resolution mapping of ocean circulation from Topex/Poseidon and ERS-1 and -2. J Geophys Res-Ocean 105(C8):19477–19498
DRAKKAR Group (2007) Eddy-permitting ocean circulation hindcasts of past decades. CLIVAR Exch No 42 12(3):8–10
Eden C, Greatbatch RJ (2008) Towards a mesoscale eddy closure. Ocean Model 20:223–239
Fichefet T, Morales Maqueda MA (1997) Sensitivity of a global sea ice model to the treatment of ice thermodynamics and dynamics. J Geophys Res 102:12609–12646
Frisch U, Kurien S, Pandit R, Pauls W, Ray S, Wirth A, Zhu J-Z (2008) Hyperviscosity, Galerkin truncation, and bottlenecks in turbulence. Phys Rev Lett 101:144501
Gent PR, McWilliams JC (1990) Isopycnal mixing in ocean circulation models. J Phys Oceanogr 20:150–155
Griffies SM (2004) Fundamentals of ocean climate models. Princeton University Press, Princeton (518+xxxiv pages)
Griffies SM, Adcroft AJ (2008) Formulating the equations of an ocean model. In: Hecht MW, Hasumi H (eds) Ocean modeling in an eddying regime, Geophysical monograph 177. American Geophysical Union, Washington, pp 281–318
Griffies SM, Gnanadesikan A, Dixon KW, Dunne JP, Gerdes R, Harrison MJ, Rosati A, Russell JL, Samuels BL, Spelman MJ, Winton M, Zhang R (2005) Formulation of an ocean model for global climate simulations. Ocean Sci 1:45–79
Ingleby B, Huddleston M (2007) Quality control of ocean temperature and salinity profiles from historical and real-time data. J Mar Syst. doi:10.1016/j.jmarsys.2005.11.019
Jouanno J, Sheinbaum J, Barnier B, Molines JM, Debreu L, Lemarié F (2008) The mesoscale variability in the Caribbean Sea. Part I: simulations with an embedded model and characteristics. Ocean Model 23:82–101
Juza M, Penduff T, Barnier B, Brankart M (2011) Estimating the distortion of mixed layer property distributions induced by the ARGO sampling. J Oper Oceanogr (submitted)
Large WG (1998) Modeling and parameterizing the ocean planetary boundary layer. In: Chassignet EP, Verron J (eds) Ocean modeling and parameterization. Kluwer, Netherlands, pp 81–120
Large WG, Yeager SG (2004) Diurnal to decadal global forcing for ocean and sea-ice models: the data sets and flux climatologies. NCAR technical note NCAR/TN-460+STR. CGD division of the National Center for Atmospheric Research
Large WG, Yeager SG (2008) The global climatology of an interannually varying air-sea flux data set. Clim Dyn. doi:10.1007/s00382-008-0441-3
Large WG, McWilliams JC, Doney SC (1994) Oceanic vertical mixing: a review and a model with a non local boundary layer parameterization. Rev Geophys 32:363–403
Lecointre A (2009) Variabilité océanique interannuelle dans l'océan Atlantique Nord: simulation et observabilité. Thèse de l'Université Joseph Fourier, Grenoble. http://tel.archives-ouvertes.fr/tel-00470520/
Lesieur M (2008) Turbulence in fluids, 4th edn. FMIA 84, R. Moreau Series Ed. Springer, Dordrecht. ISBN 978-1-4020-6434-0
Le Sommer J, Penduff T, Theetten S, Madec G, Barnier B (2009) How momentum advection schemes influence current-topography interactions at eddy permitting resolution. Ocean Model 29:1–14
Madec G (2008) NEMO, the ocean engine. Notes de l'IPSL, Université P. et M. Curie, B102 T15-E5, 4 place Jussieu, Paris cedex 5
Madec G, Imbard M (1996) A global ocean mesh to overcome the North Pole singularity. Clim Dyn 12:381–388
McWilliams JC (2008) The nature and consequences of oceanic eddies. In: Hecht MW, Hasumi H (eds) Ocean modeling in an eddying regime, Geophysical monograph 177. American Geophysical Union, Washington, pp 5–15
Müller P (2006) The equations of oceanic motions. Cambridge University Press, Cambridge, 291 pp
Penduff T, Le Sommer J, Barnier B, Treguier A-M, Molines J-M, Madec G (2007) Influence of numerical schemes on current-topography interactions in 1/4° global ocean simulations. Ocean Sci 3(4):451–535
Penduff T, Juza M, Brodeau L, Smith GC, Barnier B, Molines J-M, Treguier A-M, Madec G (2010) Impact of model resolution on sea-level variability with emphasis on interannual time scales. Ocean Sci 6:269–284
Redi MH (1982) Oceanic isopycnal mixing by coordinate rotation. J Phys Oceanogr 12:1154–1158
Treguier AM (2006) Models of the ocean: which ocean? In: Chassignet EP, Verron J (eds) Ocean weather forecasting. Springer, Dordrecht, pp 75–108
Zhao R, Vallis G (2008) Parameterizing mesoscale eddies with residual and Eulerian schemes, and a comparison with eddy-permitting models. Ocean Model 23:1–12
Chapter 11
Isopycnic and Hybrid Ocean Modeling in the Context of GODAE

Eric P. Chassignet
Abstract  An ocean forecasting system has three essential components: observations, data assimilation, and a numerical model. Observational data, via data assimilation, form the basis of an accurate model forecast, but the quality of the ocean forecast will depend primarily on the ability of the ocean numerical model to faithfully represent the ocean physics and dynamics. Even an infinite amount of data to constrain the initial conditions will not necessarily enable a poorly performing ocean numerical model to improve on a persistence forecast. In this chapter, some of the challenges associated with global ocean modeling are introduced, and the current state of numerical models formulated in isopycnic and hybrid vertical coordinates is reviewed within the context of operational global ocean prediction systems.
11.1 Introduction

The main purpose of this chapter is to review the current state of numerical models formulated in isopycnic and hybrid vertical coordinates and to discuss their applications within the context of operational global ocean prediction systems and GODAE (Global Ocean Data Assimilation Experiment). In addition to this author's work, this review chapter relies heavily on articles, notes, and review papers by R. Bleck, S. Griffies, A. Adcroft, and R. Hallberg. Appropriate references will be made, but it is inevitable that some similarities in content and style to these publications will be present throughout this chapter. As stated in Bleck and Chassignet (1994), numerical modeling of geophysical fluids started half a century ago with numerical weather prediction. Ocean model development lagged behind that of atmospheric models, primarily because of
the societal needs for meteorological forecasts, but also because of the inherently greater complexity of circulation systems in closed basins and the nonlinear equation of state for seawater. Furthermore, the computing power required to resolve the relevant physical processes (such as baroclinic instabilities) is far greater for the ocean than for the atmosphere, since these processes occur on much smaller scales in the ocean. Historically, ocean models have been used primarily to numerically simulate the dominant space-time scales that characterize the ocean system. Simulations with physical integrity require an ability both to accurately represent the various phenomena that are resolved and to parameterize those scales of variability that are not resolved (Chassignet and Verron 1998). For example, the representation of transport falls into the class of problems addressed by numerical advection schemes, whereas the parameterization of subgrid-scale transport is linked to turbulence closure considerations. Although there are often areas of overlap between representation and parameterization, the distinction is useful to make and generally lies at the heart of various model development issues. Before the Navier-Stokes differential equations can be solved numerically, they must be converted into an algebraic system, a conversion process that entails numerous approximations. Numerical modelers strive to achieve numerical accuracy. Otherwise, the discretization or "truncation" error introduced when approximating differentials by finite differences or Galerkin methods becomes detrimental to the numerical realization. Sources of truncation error are plentiful, and many of these errors depend strongly on model resolution. Examples include the horizontal coordinates (spherical and/or generalized orthogonal), the vertical and horizontal grids, time-stepping schemes, the representation of the surface and bottom boundary layers, the representation of bottom topography, the equation of state, tracer and momentum transport, subgrid-scale processes, viscosity, and diffusivity. Numerical models have improved over the years not only because of better physical understanding, but also because modern computers permit a more faithful representation of the differential equations by their algebraic analogs. A key characteristic of rotating and stratified fluids, such as the ocean, is the dominance of lateral over vertical transport. Hence, it is traditional in ocean modeling to orient the two horizontal coordinates orthogonal to the local vertical direction as determined by gravity. The more difficult choice is how to specify the vertical coordinate. Indeed, as noted by various ocean modeling studies such as DYNAMO (Meincke et al. 2001; Willebrand et al. 2001) and DAMEE-NAB (Chassignet and Malanotte-Rizzoli 2000), the choice of a vertical coordinate system is the single most important aspect of an ocean model's design. The practical issues of representation and parameterization are often directly linked to the vertical coordinate choice. Currently, there are three main vertical coordinates in use, none of which provides universal utility. Hence, many developers have been motivated to pursue research into hybrid approaches. As outlined by Griffies et al. (2000a), there are three regimes of the ocean that need to be considered when choosing an appropriate vertical coordinate. First, there is the surface mixed layer. This region is generally turbulent and dominated by transfers of momentum, heat, freshwater, and tracers.
It is typically very well mixed
in the vertical through three-dimensional convective/turbulent processes. These processes involve non-hydrostatic physics, which requires very high horizontal and vertical resolution to be explicitly represented (i.e., a vertical-to-horizontal grid aspect ratio near unity). A parameterization of these processes is therefore necessary in primitive equation ocean models. In contrast, tracer transport processes in the ocean interior predominantly occur along constant density directions (more precisely, along neutral directions). Therefore, water mass properties in the interior tend to be preserved over large space and time scales (e.g., basin and decadal scales). Finally, there are several regions where density-driven currents (overflows) and turbulent bottom boundary layer processes act as a strong determinant of water mass characteristics. Many such processes are crucial for the formation of deep water properties in the world ocean. The simplest choice of vertical coordinate (Fig. 11.1) is z, which represents the vertical distance from a resting ocean surface. Another choice of vertical coordinate is the potential density referenced to a given pressure. In a stably stratified adiabatic ocean, potential density is materially conserved and defines a monotonic layering of the ocean fluid. A third choice is the terrain-following σ coordinate. The depth or z coordinate provides the simplest and most established framework for ocean modeling. It is especially well suited for situations with strong vertical/diapycnal mixing and/or low stratification, but has difficulty in accurately representing the ocean interior and bottom. The density coordinate, on the other hand, is well suited to modeling the observed tendency for tracer transport to be along density (neutral) directions, but is inappropriate in unstratified regions. The σ coordinate provides a suitable framework in situations where capturing the dynamical and/or boundary layer effects associated with topography is important. Terrain-following σ coordinates are particularly well suited for modeling flows over the continental shelf, but remain unproven in a global modeling context. They have been used extensively for coastal engineering applications and prediction (see Greatbatch and Mellor (1999) for a review), as well as for regional and basin-wide studies.
Fig. 11.1 Schematic of an ocean basin illustrating the three regimes of the ocean germane to the considerations of an appropriate vertical coordinate. The surface mixed layer is naturally represented using fixed-depth z (or pressure p) coordinates, the interior is naturally represented using isopycnic ρpot (potential-density-tracking) coordinates, and the bottom boundary is naturally represented using terrain-following σ coordinates. (Adapted from Griffies et al. 2000a)
Ideally, an ocean model should retain its water mass characteristics for centuries of integration (a characteristic of density coordinates), have high vertical resolution in the surface mixed layer for proper representation of thermodynamical and biochemical processes (a characteristic of z coordinates), maintain sufficient vertical resolution in unstratified or weakly stratified regions of the ocean, and have high vertical resolution in coastal regions (a characteristic of terrain-following σ coordinates). This has led to the recent development of several hybrid vertical coordinate numerical models that combine the advantages of the different types of vertical coordinates in optimally simulating coastal and open-ocean circulation features. Within the GODAE context, the global ocean models presently used or tested for ocean forecasting systems can be divided into two categories: fixed coordinates (MOM, NEMO, MITgcm, NCOM, POP, OCCAM, …) or primarily Lagrangian coordinates (NLOM, MICOM, HYCOM, POSEIDON, GOLD, …). The reader is referred to the Appendix for a definition of the acronyms and references.
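As a toy illustration of these three coordinate choices, the sketch below discretizes a single idealized water column with z-level, terrain-following σ, and isopycnic interfaces. The density profile and the interface choices are invented for illustration and are not taken from any of the models discussed here.

```python
import numpy as np

H = 3000.0                                    # local water depth [m]
zfine = np.linspace(0.0, H, 300)              # fine reference axis, positive downward [m]
rho = 1026.0 + 1.5 * (1.0 - np.exp(-zfine / 700.0))   # idealized, monotonic density [kg/m3]

# (a) z coordinates: fixed-depth interfaces, truncated at the local bottom
z_interfaces = np.array([0, 10, 25, 50, 100, 250, 500, 1000, 2000, 3000, 4000], float)
z_interfaces = z_interfaces[z_interfaces <= H]

# (b) terrain-following sigma coordinates: interfaces at fixed fractions of the depth
sigma = np.linspace(0.0, 1.0, 11)
sigma_interfaces = sigma * H

# (c) isopycnic coordinates: interfaces where the profile crosses target densities
targets = np.linspace(1026.2, 1027.4, 9)      # prescribed density classes
iso_interfaces = np.interp(targets, rho, zfine)
```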
11.2 Ocean Model Requirements for GODAE

The specific objectives of GODAE are to:

a) Apply state-of-the-art ocean models and assimilation methods to produce short-range open-ocean forecasts, boundary conditions to extend the predictability of coastal and regional subsystems, and initial conditions for climate forecast models.
b) Provide global ocean analyses for developing improved understanding of the oceans and improved assessments of the predictability of ocean variability, and for serving as a basis for improving the design and effectiveness of a global ocean observing system.

The requirements for the ocean model differ among these objectives. High-resolution operational oceanography requires an accurate depiction of mesoscale features, such as eddies and meandering fronts, and of the upper-ocean structure. Coastal applications require accurate sea level forced by wind, tidal forces, and surface pressure. Seasonal-to-interannual forecasts require a good representation of the upper-ocean mass field and coupling to an atmosphere. This diversity of applications implies that no single model configuration will be sufficiently flexible to satisfy all the objectives. For high-resolution operational oceanography (see Hurlburt et al. (2008, 2009) for a review), the models have to be global and eddy-resolving, with high vertical resolution and advanced upper-ocean physics, and must use high-performance numerical code and algorithms. To obtain a good representation of the mesoscale variability, the horizontal grid spacing must be fine enough to resolve baroclinic instability
processes. Most numerical simulations to date suggest that a minimum grid spacing on the order of 1/10° (see the AGU monograph by Hecht and Hasumi 2008 for a review) is needed for a good representation of western boundary currents (including their separation from the coast) and of the eddy kinetic energy. The computational requirements for global ocean modeling at this resolution are extreme and demand the latest in high-performance computing. For that reason, only a few eddy-resolving global ocean models are currently being integrated with or without data assimilation: NLOM 1/32° (Shriver et al. 2007), POP 1/10° (Maltrud and McClean 2005), HYCOM 1/12° (Chassignet et al. 2009), and MERCATOR/NEMO 1/12° (Bourdallé-Badie and Drillet, personal communication).
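A quick way to see why a grid spacing of about 1/10° starts to resolve baroclinic instability is to compare it with the first baroclinic Rossby radius of deformation. The sketch below uses the standard rule of thumb R1 ≈ NH/(π|f|) for a uniformly stratified column; the numerical values and the "half the Rossby radius" criterion are illustrative assumptions, not statements about any particular model.

```python
import numpy as np

def rossby_radius_km(N, H, lat):
    """First baroclinic Rossby radius R1 ~ N*H/(pi*|f|) [km] for a uniformly
    stratified column of depth H [m] with buoyancy frequency N [1/s]."""
    f = 2.0 * 7.292e-5 * np.sin(np.radians(lat))   # Coriolis parameter [1/s]
    return N * H / (np.pi * abs(f)) / 1e3

# Mid-latitude example: N ~ 2e-3 1/s, H ~ 4000 m, 40 degrees latitude
R1 = rossby_radius_km(2e-3, 4000.0, 40.0)            # roughly 27 km
dx = 111.0 * np.cos(np.radians(40.0)) / 10.0          # 1/10 deg zonal spacing, ~ 8.5 km
resolves_eddies = dx < 0.5 * R1                       # crude resolution criterion
```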
11.3 Challenges

As the mesh is refined, ocean models face several challenges. This section summarizes the challenges that this author thinks are most relevant to GODAE's goal of high-resolution operational oceanography.

Model-Related Data Assimilation Issues  In data assimilation, there is a much larger burden on ocean models than on atmospheric models because (1) synoptic oceanic data are overwhelmingly at the surface, (2) ocean models must use simulation skill in converting atmospheric forcing into an oceanic response, and (3) ocean model forecast skill is needed in the dynamical interpolation of satellite altimeter data (since the average age of the most recent altimeter data on the repeat tracks is half the repeat cycle plus the delay in receiving the real-time data, typically 1–3 days at present). Specifically, the model must be able to accurately represent ocean features and fields that are inadequately observed or constrained by ocean data. This is an issue for re-analyses, for real-time mesoscale-resolving nowcasts and short-range forecasts (up to ~1 month), and for seasonal-to-interannual forecasts, including the geographical distribution of anomalies. Ocean simulation skill is especially important for mean currents and their transports (including flow through straits), the surface mixed layer depth, Ekman surface currents, the coastal ocean circulation, the Arctic circulation, and the deep circulation (including the components driven by eddies, the thermohaline circulation, and the wind). In order to assimilate the SSH (sea surface height) anomalies determined from satellite altimeter data into the numerical model, it is necessary to know the oceanic mean SSH over the time period of the altimeter observations. Unfortunately, the Earth's geoid is not presently known with sufficient accuracy to provide an accurate mean SSH on scales important for the mesoscale. Several satellite missions are underway or planned to help determine a more accurate geoid, but not on a fine enough scale to entirely meet the needs of mesoscale prediction. Thus, it is of the utmost importance to have a model mean that is reasonably accurate, since most oceanic fronts and mean ocean current pathways cannot be sharply defined from hydrographic climatologies alone.
A number of additional issues, theoretical or technical, are raised when the numerical ocean model is used in conjunction with data assimilation techniques. In all data assimilation methods, nonlinearities are a major source of sub-optimality. Variational methods often require development of the adjoint model, which is a demanding task. Depending on the vertical coordinates, difficulties arise in dealing with non-Gaussian statistics in isopycnic coordinate models with vanishing layers, or with convective instability processes throughout the vertical columns in z coordinate models. Finally, defining prior guess errors, model errors, and, to a lesser degree, observation errors, is difficult.

Forcing  The ocean model will respond to the prescribed atmospheric forcing fields. The present models' inability to reproduce the present-day ocean circulation when run in free mode is a consequence of inaccuracies in both the forcing and the numerical models themselves, as well as of the intrinsic nonlinearity of the Navier-Stokes equations. Accurate atmospheric forcing, computed using bulk formulas that combine the model SST and the atmospheric data, has been shown to be essential for a successful forecast of the sea surface temperature, sea surface salinity, and mixed layer depths. It is important to mention here that the prescription of the surface forcing fields, as currently done in many ocean forecasting systems, does not allow for atmospheric feedback. This may have a limited impact on a 15-day forecast, but coupling to an atmospheric model is essential in seasonal-to-interannual forecasting of events such as ENSO (Philander 1990; Clarke 2008).

Topography  With high-resolution modeling comes the need for high-resolution topography. The most commonly used global bathymetric database is the Smith and Sandwell (1997, 2004) database, which is derived from a combination of satellite altimeter data and shipboard soundings. The latest version (http://topex.ucsd.edu/WWW_html/srtm30_plus.html) is at 1/2-min resolution and covers the entire globe, with patches from the IBCAO topography (Jakobsson et al. 2000) in the Arctic and from various high-resolution sounding data where such data are available. Most, but not all, of the other available global bathymetric data sets, for instance the latest GEBCO bathymetry, ETOPO2, DBDB2, and so on, utilize the Smith and Sandwell database in the deep ocean. Differences can often be found among the various bathymetry products in shallow water, where satellite altimetry is much less useful and where local high-quality datasets are often used. While modern acoustic sounding data can achieve lateral resolutions of about 100 m, such data cover only a small fraction of the open ocean. In areas not covered by such data, the true feature resolution of the Smith and Sandwell datasets is approximately given by π times the water column depth, i.e., about 10–20 km. Goff and Arbic (2010) have recently created a synthetic data set in which the topographic anomalies depend on local geophysical conditions such as seafloor spreading rate. The synthetic topography can be overlaid on the Smith and Sandwell datasets to create global bathymetries that have the right statistical texture (roughness), even if the "bumps" are not deterministically correct.
Meridional Overturning Circulation  A good representation of the overturning circulation is essential for a proper representation of the oceanic surface fields. This is especially true in the North Atlantic, where the contribution of the thermohaline meridional overturning circulation accounts for a significant portion of the Gulf Stream transport. Many factors, such as mixed layer physics, ice formation, overflow representation, and interior diapycnal mixing, affect the strength and pathways of the meridional overturning circulation.

Ice Models  A global ocean model needs to be coupled to an ice model to have the proper forcing at high latitudes and hence the correct dense water mass formation and circulation. A good representation of the ice cycle is challenging, especially when the atmospheric fields are prescribed. Another related issue is the mixed layer parameterization below the ice.

Overflows  Sill overflows typically involve passages through the ridge and are under the control of hydraulic effects, each of which is highly dependent on topographic details. The downslope flow of dense water, typically in thin turbulent layers near the bottom, may strongly entrain ambient waters and is modulated by mesoscale eddies generated near the sill. The simulation of downslope flows of dense water differs strongly among ocean models based on different vertical coordinate schemes. In z coordinate models, difficulties arise from the stepwise discretization of topography, which tends to produce gravitationally unstable water parcels that rapidly mix with the ambient fluid as they flow down the slope. The result is a strong numerically induced mixing of the outflow water downstream of the sill. This numerically induced mixing will in principle decrease as the horizontal and vertical grid spacing is refined. It is, however, still an issue at the above-mentioned resolution of 1/10° (see the review article by Legg et al. 2009).

Diapycnal Mixing  This observational field is the least well known and the most difficult to model correctly, especially in fixed coordinate models (Griffies et al. 2000b; Lee et al. 2002), due to the typically small levels of mixing in the ocean interior away from boundaries (Ledwell et al. 1993). Excessive numerically induced diapycnal mixing will lead to incorrect water mass pathways and a poor representation of the thermohaline circulation.

Internal Gravity Waves/Tides  Improperly resolved internal gravity waves generate numerically induced diapycnal mixing in fixed-coordinate models. Several numerical techniques can be used to slow the gravity waves, but ultimately it would be desirable to have a diapycnal mixing parameterization based on the model representation of internal gravity waves. The inclusion of astronomical tidal forcing in ocean models generates barotropic tides, which in turn generate internal tides in areas of rough topography. Until recently, global modeling of the oceanic general circulation and of tides have been separate endeavors. A first attempt to model the global general circulation and tides simultaneously and at high horizontal resolution is described in Arbic et al. (2010). In contrast to earlier models of the global internal tides, which included only tidal forcing and which utilized a horizontally uniform
stratification, the stratification can vary horizontally in a model that also includes wind- and buoyancy-forcing. Arbic et al. (2010) show that the horizontally varying stratification affects tides to first order, especially in polar regions. Inclusion of tides in general circulation models is also more likely to properly account for the effects of the quadratic bottom boundary layer drag term. Many ocean general circulation models insert an assumed tidal background flow, typically taken to be about 5 cm/s, into the quadratic drag formulation (e.g., Willebrand et al. 2001). However, in the actual ocean, tidal velocities vary from about 1–2 cm/s in the abyss to about 0.5–1 m/s in areas of large coastal tides. Thus an assumed tidal background flow of 5 cm/s is too strong in the abyss and too weak in coastal areas. By actually resolving the (spatially inhomogeneous) tidal flows in a general circulation model, this problem can be corrected.

Barotropic Motions  The use of high-frequency (e.g., 3-hourly) forcing generates strong non-steric barotropic motions that are not temporally resolved by satellite altimeters (Stammer et al. 2000). In addition, Shriver and Hurlburt (2000) report that between 5 and 10 cm rms of non-steric SSH variability is generated in major current systems throughout the world ocean.

Viscosity Closure  Despite the smaller mesh size, the viscosity parameterization remains important for the modeled large-scale ocean circulation (Chassignet and Garraffo 2001; Chassignet and Marshall 2008; Hecht et al. 2008). When the grid spacing reaches a certain threshold, the energy cascade from small to large scales should be properly represented by the model physics. Dissipation should then be prescribed for numerical reasons only, to remove the inevitable accumulation of enstrophy at the grid scale. This is the reason higher-order operators such as the biharmonic form of friction have traditionally been favored in eddy-resolving or eddy-permitting numerical simulations. Higher-order operators remove numerical noise at the grid scale and leave the larger scales mostly untouched, allowing dynamics at the resolved scales of motion to dominate the subgrid-scale parameterization (Griffies and Hallberg 2000). In addition to providing numerical closure, the viscosity operator can also act as a parameterization of smaller scales. One of the most difficult tasks in defining the parameterization is the specification of the Reynolds stresses in terms of only the resolved scales' velocities. The common practice has been to assume that the turbulent motion acts on the large-scale flow in a manner similar to molecular viscosity. However, the resulting Laplacian form of dissipation removes both kinetic energy and enstrophy over a broad range of spatial scales, and its use in numerical models in general implies less energetic flow fields than in cases with more highly scale-selective dissipation operators. Some Laplacian dissipation is still needed to define viscous boundary layers and to remove eddies on space scales too large to be removed by biharmonic dissipation and too small to be numerically accurate at the model grid resolution.
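The scale selectivity of the biharmonic operator relative to the Laplacian, discussed above, can be illustrated by comparing their spectral damping rates, A2·k² versus A4·k⁴. In the sketch below the coefficients are purely illustrative and are matched so that both operators damp the 2Δx wave at the same rate; the point is simply that the biharmonic damping falls off as (k/k_grid)² relative to the Laplacian at larger scales.

```python
import numpy as np

dx = 10e3                                         # grid spacing [m]
wavelengths = np.linspace(2 * dx, 1000e3, 200)    # from the 2*dx wave to 1000 km
k = 2 * np.pi / wavelengths                       # wavenumbers [1/m]
k_grid = np.pi / dx                               # wavenumber of the 2*dx wave

A2 = 1.0e2                                        # Laplacian viscosity [m2/s] (illustrative)
A4 = A2 / k_grid**2                               # biharmonic viscosity [m4/s], matched at 2*dx

damp_laplacian = A2 * k**2                        # spectral damping rate [1/s]
damp_biharmonic = A4 * k**4                       # spectral damping rate [1/s]
selectivity = damp_biharmonic / damp_laplacian    # equals (k / k_grid)**2, << 1 at large scales
```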
Coastal Transition Zones  A strong demand for ocean forecasts will come from the offshore industry, which has extended its activities from the shallow shelf seas to exploration and production on the continental slope, where oceanographic conditions play a much more critical role in safe and environmentally acceptable operations. Exploration and production are now taking place in water depths in excess of 2,000 m in a number of oil and gas basins around the world. The proper modeling of the transition area between the deep ocean and the shallow continental shelves imposes strong requirements on the ocean model. It should be capable of modeling the typical shallow waters on the shelf, with their characteristic well-mixed water masses and strong tidal and wind-driven currents. Furthermore, it must also properly represent and distinguish between water masses of vastly different characteristics in the deep ocean and near the surface during very long time integrations. The interaction with the continental shelf/slope is also an intriguing problem because of its impact on internal tides and on the wave modes developing and propagating along the continental shelf/slope. This includes remotely generated wave modes, such as equatorially generated Kelvin waves, which play a large role in El Niño events and which can strongly impact distant coastal regions.

11.4 On the Use of Potential Density as a Vertical Coordinate

As stated in the introduction, the choice of a vertical coordinate system is the single most important aspect of an ocean model's design and, because of the practical issues of representation and parameterization, many of the challenges listed in the previous sections are directly linked to the vertical coordinate choice. There is no "best" choice of vertical coordinate, since all solutions of the discretized equations with any vertical coordinate should converge toward the solutions of the corresponding differential equations as the mesh size goes to zero. Each coordinate system is afflicted with its own set of truncation errors, the implications of which must be understood and prioritized. Isopycnic (potential density) coordinate modeling seeks to eliminate truncation errors by reversing the traditional role of depth as an independent variable and potential density as a dependent variable. More specifically, mixing in turbulent stratified fluids where buoyancy effects play a role takes place predominantly along isopycnic, or constant potential density, surfaces (Iselin 1939; Montgomery 1940; McDougall and Church 1986). If the conservation equations for salt and temperature are discretized in (x,y,z) space, that is, if the three-dimensional vectorial transport of these quantities is numerically evaluated as the sum of scalar transports in the (x,y,z) directions, experience shows that it is virtually impossible to avoid diffusion of the transported variables in those three directions (Veronis 1975; Redi 1982; Cox 1987; Gent and McWilliams 1990, 1995; Griffies et al. 2000b). Thus, regardless of how the actual mixing term in the conservation equations is formulated, numerically induced mixing is likely to have a cross-isopycnal ("diapycnal") component that may well overshadow the common diapycnal processes that occur in nature (Griffies et al. 2000b). This type of truncation error can be mostly eliminated in isopycnic coordinate modeling by transforming the dynamic equations from (x,y,z)
to (x,y,ρ) coordinates (Bleck 1978, 1998; Bleck et al. 1992; Bleck and Chassignet 1994). First, the prognostic primitive equations are rewritten in (x,y,s) coordinates, where s is an unspecified generalized vertical coordinate (Bleck 2002):

$$\frac{\partial \mathbf{v}}{\partial t_s} + \nabla_s \frac{\mathbf{v}^2}{2} + (\zeta + f)\,\mathbf{k} \times \mathbf{v} + \left(\dot{s}\,\frac{\partial p}{\partial s}\right)\frac{\partial \mathbf{v}}{\partial p} = p\,\nabla_s \alpha - \nabla_\alpha M - g\,\frac{\partial \boldsymbol{\tau}}{\partial p} + \left(\frac{\partial p}{\partial s}\right)^{-1} \nabla_s \cdot \left(\nu\,\frac{\partial p}{\partial s}\,\nabla_s \mathbf{v}\right) \quad (11.1)$$

$$\frac{\partial}{\partial t_s}\left(\frac{\partial p}{\partial s}\right) + \nabla_s \cdot \left(\mathbf{v}\,\frac{\partial p}{\partial s}\right) + \frac{\partial}{\partial s}\left(\dot{s}\,\frac{\partial p}{\partial s}\right) = 0 \quad (11.2)$$

$$\frac{\partial}{\partial t_s}\left(\frac{\partial p}{\partial s}\,\theta\right) + \nabla_s \cdot \left(\mathbf{v}\,\frac{\partial p}{\partial s}\,\theta\right) + \frac{\partial}{\partial s}\left(\dot{s}\,\frac{\partial p}{\partial s}\,\theta\right) = \nabla_s \cdot \left(\mu\,\frac{\partial p}{\partial s}\,\nabla_s \theta\right) + H_\theta \quad (11.3)$$
where v = (u, v) is the horizontal velocity vector, p is pressure, θ represents any one of the model's thermodynamic variables, α = 1/ρpot is the potential specific volume, ζ = ∂v/∂x_s − ∂u/∂y_s is the relative vorticity, M = gz + pα is the Montgomery potential, f is the Coriolis parameter, k is the vertical unit vector, ν and μ are the eddy viscosity and diffusivity, τ is the wind- and/or bottom-drag-induced shear stress vector, and H_θ is the sum of the diabatic source terms acting on θ, including diapycnal mixing. Subscripts indicate which variable is held constant during partial differentiation. Distances in the x,y directions, as well as their time derivatives u,v, are measured in the projection onto a horizontal plane. This convention renders the coordinate system nonorthogonal in 3-D space, but eliminates metric terms related to the slope of the s surface (Bleck 1978). Other metric terms, created when vector products involving (∇·) or (∇×) are evaluated on a non-Cartesian grid (e.g. spherical coordinates), are absorbed into the primary terms by evaluating vorticity and horizontal flux divergences in (11.1)–(11.3) as line integrals around individual grid boxes (see Griffies et al. 2000a for more details). Note that applying ∇ to a scalar, such as v²/2 in (11.1), does not give rise to metric terms. Second, by performing a vertical integration over a coordinate layer bounded by two surfaces s_top and s_bot, the continuity equation (11.2) becomes a prognostic equation for the layer weight per unit area, Δp = p_bot − p_top.
$$\frac{\partial \Delta p}{\partial t_s} + \nabla_s \cdot (\mathbf{v}\,\Delta p) + \left(\dot{s}\,\frac{\partial p}{\partial s}\right)_{bot} - \left(\dot{s}\,\frac{\partial p}{\partial s}\right)_{top} = 0 \quad (11.4)$$
The expression (ṡ ∂p/∂s) represents the vertical mass flux across an s surface, taken to be positive in the downward (+p) direction. Multiplication of (11.1) by ∂p/∂s and integration over the interval (s_top, s_bot), followed by division by Δp/Δs, changes the shear stress term to (g/Δp)(τ_top − τ_bot), while the lateral momentum mixing term
integrates to (Δp)⁻¹ ∇_s·(ν Δp ∇_s v). All the other terms in (11.1) retain their formal appearance. The layer-integrated form of (11.3) is

$$\frac{\partial}{\partial t_s}(\Delta p\,\theta) + \nabla_s \cdot (\mathbf{v}\,\Delta p\,\theta) + \left(\dot{s}\,\frac{\partial p}{\partial s}\,\theta\right)_{bot} - \left(\dot{s}\,\frac{\partial p}{\partial s}\,\theta\right)_{top} = \nabla_s \cdot (\mu\,\Delta p\,\nabla_s \theta) + H_\theta \quad (11.5)$$
The above prognostic equations are complemented by several diagnostic equations, including the hydrostatic equation, ∂M/∂α = p, an equation of state linking potential temperature T, salinity S, and pressure p to ρpot, and an equation prescribing the vertical mass flux (ṡ ∂p/∂s) through an s surface. Isopycnic models solve the above equations by using the potential density, ρpot, as the vertical coordinate s. Where the fluid is adiabatic, potential density is conserved and transport in the x,y directions then takes place in the model on isopycnic surfaces. This makes isopycnic models very adiabatic and allows them to avoid introducing the numerical diffusion in the vertical that can be troublesome in z or σ coordinate models. Transport in the z direction translates into transport along the ρpot axis and can be entirely suppressed if so desired; that is, it has no unwanted diapycnal component. As a result, spurious heat exchange between warm surface waters and cold abyssal waters, and horizontal heat exchange across sloping isopycnals such as those marking frontal zones, are minimized. Potential density surfaces are not, however, neutral surfaces (McDougall and Church 1986), and dianeutral fluxes may therefore be present when potential density is used as the vertical coordinate. As coordinate surfaces deviate from neutral, advection and diffusion acting along these surfaces will induce some dianeutral mixing. The impact of dianeutral fluxes from diffusion can be reduced or eliminated by rotating the diffusion operator to act along neutral directions, in a manner analogous to that employed in fixed coordinate models (Griffies et al. 2000a). Furthermore, the nonlinear equation of state in the ocean introduces new physical sources of mixing: the independent transport of two active tracers (temperature and salinity) requires remapping algorithms to retain fields within pre-specified density classes. The level of dianeutral mixing introduced by remapping algorithms is usually negligible, but it has yet to be systematically documented (Griffies and Adcroft 2008). A reference pressure of 2000 db is now the norm for isopycnic coordinate models, since it leads to few regions with coordinate inversions and the slopes of the σ₂ (potential density referenced to 2000 db) surfaces are closest to neutral surfaces. Inclusion of thermobaricity (i.e., compressibility of sea water in the equation of state) in isopycnic coordinate ocean models is described in Sun et al. (1999) and Hallberg (2005). The necessity of properly reconciling estimates of the free surface height when using a mode-splitting time stepping scheme is discussed in Hallberg and Adcroft (2009).
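To make the notion of a reference pressure concrete, the fragment below computes σ₂ for a short toy profile and assigns each sample to a prescribed density class, a toy version of the layer assignment performed by an isopycnic-coordinate model. It assumes the TEOS-10 gsw Python package is available; the profile values and the class boundaries are invented for illustration.

```python
import numpy as np
import gsw   # TEOS-10 Gibbs SeaWater toolbox (assumed available)

# Toy profile: Absolute Salinity [g/kg] and Conservative Temperature [degC]
SA = np.array([35.2, 35.1, 35.0, 34.9, 34.8])
CT = np.array([18.0, 12.0, 8.0, 4.0, 2.0])

sigma2 = gsw.sigma2(SA, CT)        # potential density anomaly referenced to 2000 dbar [kg/m3]

# Assign each sample to a density class bounded by prescribed sigma2 values
class_bounds = np.arange(33.0, 38.1, 0.5)
layer_index = np.digitize(sigma2, class_bounds)
```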
274
E. P. Chassignet
The key advantages of isopycnic coordinate models can be summarized as follows: (a) they are well suited for representing tracer transport without any large numerically induced vertical mixing, as long as the isopycnals are reasonably parallel to neutral surfaces; (b) they conserve density classes under adiabatic motions; (c) the bottom topography is represented in a piecewise-linear fashion, hence avoiding the need to distinguish bottom from side as traditionally done in z coordinate models; and (d) the overflows are well represented. The main drawback of an isopycnic coordinate model is its inability to properly represent the surface mixed layer or the bottom boundary layer, since these layers are mostly unstratified. Examples of isopycnal models are NLOM (Wallcraft et al. 2003), MICOM (Bleck et al. 1992; Bleck and Chassignet 1994; Bleck 1998), HIM (Hallberg 1995, 1997), OPYC (Oberhuber 1993) and POSUM. As already stated, none of the three main vertical coordinates currently in use (z, isopycnal, or σ) provides universal utility, and hybrid approaches have been developed in an attempt to combine the advantages of the different types of vertical coordinates in optimally simulating the ocean. The term "hybrid vertical coordinates" can mean different things to different people: it can be a linear combination of two or more conventional coordinates (Song and Haidvogel 1994; Ezer and Mellor 2004; Barron et al. 2006), or it can be truly generalized, i.e., aiming to mimic different types of coordinates in different regions of a model domain (Bleck 2002; Burchard and Beckers 2004; Adcroft and Hallberg 2006; Song and Hou 2006). Adcroft and Hallberg (2006) classify generalized coordinate ocean models as either Lagrangian Vertical Direction (LVD) or Eulerian Vertical Direction (EVD) models. In LVD models, the continuity (thickness tendency) equation is solved forward in time throughout the domain, while an Arbitrary Lagrangian-Eulerian (ALE) technique is used to re-map the vertical coordinate and maintain different coordinate types within the domain. This differs from the EVD models with fixed-depth and terrain-following vertical coordinates, which use the continuity equation to diagnose the vertical velocity. The hybrid or generalized coordinate ocean models that have much in common with isopycnal models and are classified as LVD models are POSEIDON (Schopf and Loughe 1995) and HYCOM (Bleck 2002; Chassignet et al. 2003; Halliwell 2004). Other generalized vertical coordinate models currently under development are HYPOP and GOLD. HYPOP is the hybrid version of POP and differs from HYCOM and POSEIDON in the sense that the momentum equations continue to be solved on z coordinates while the tracer equations are solved using an ALE scheme for the vertical coordinate. Such an approach allows the model to use depth as the vertical coordinate in the mixed layer while using a more Lagrangian (e.g. isopycnal) coordinate in the deep ocean. GOLD, the "Generalized Ocean Layer Dynamics" model, is intended to be the vehicle for the consolidation of all of the climate ocean model development efforts at GFDL, including MOM and HIM.
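The essential ingredient of the ALE step is a conservative remapping of layer quantities onto the re-generated vertical grid. The sketch below shows the simplest, piecewise-constant (first-order) version of such a remap; operational hybrid-coordinate generators such as HYCOM's use higher-order reconstructions, so this is only a conceptual illustration.

```python
import numpy as np

def remap_piecewise_constant(p_old, q_old, p_new):
    """Conservatively remap layer-mean values onto a new set of interfaces.

    p_old : (n+1,) old interface positions (monotonically increasing, e.g. pressure)
    q_old : (n,)   layer-mean values between the old interfaces
    p_new : (m+1,) new interface positions spanning the same total column
    """
    q_new = np.zeros(len(p_new) - 1)
    for j in range(len(p_new) - 1):
        thickness = p_new[j + 1] - p_new[j]
        if thickness <= 0.0:              # massless target layer: leave as zero
            continue
        total = 0.0
        for i in range(len(p_old) - 1):   # accumulate the overlapping old layers
            overlap = min(p_old[i + 1], p_new[j + 1]) - max(p_old[i], p_new[j])
            if overlap > 0.0:
                total += q_old[i] * overlap
        q_new[j] = total / thickness
    return q_new
```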
11.5 Application: The HYbrid Coordinate Ocean Model (HYCOM)

The generalized vertical coordinates in HYCOM deviate from isopycnals (constant potential density surfaces) wherever the latter may fold, outcrop, or generally provide inadequate vertical resolution in portions of the model domain.
HYCOM is at its core a Lagrangian layer model, except for the remapping of the vertical coordinate by the hybrid coordinate generator after all equations are solved (Bleck 2002; Chassignet et al. 2003; Halliwell 2004) and for the fact that there is a nonzero horizontal density gradient within all layers. HYCOM is thus classified as an LVD model. The ability to adjust the vertical spacing of the coordinate surfaces in HYCOM simplifies the numerical implementation of several physical processes (e.g., mixed layer detrainment, convective adjustment, sea ice modeling) without depriving the model of the basic and numerically efficient vertical resolution that is characteristic of isopycnic models throughout most of the ocean’s volume (Bleck and Chassignet 1994; Chassignet et al. 1996). HYCOM is the result of a collaboration initiated in the late nineties by ocean modelers at the Naval Research Laboratory, Stennis, MS, who approached colleagues at the University of Miami’s Rosenstiel School of Marine and Atmospheric Science regarding an extension of the range of applicability of the U.S. Navy operational ocean prediction system to coastal regions (e.g., the U.S. Navy systems at the time were seriously limited in shallow water and in handling the transition from deep to shallow water). HYCOM (Bleck 2002) was therefore designed to extend the range of existing operational Ocean General Circulation Models (OGCMs). The freedom to adjust the vertical spacing of the generalized (or hybrid) coordinate layers in HYCOM simplifies the numerical implementation of several processes and allows for a smooth transition from the deep ocean to coastal regimes. HYCOM retains many of the characteristics of its predecessor, MICOM, while allowing coordinates to locally deviate from isopycnals wherever the latter may fold, outcrop, or generally provide inadequate vertical resolution. The collaboration led to the development of a consortium for hybrid-coordinate data assimilative ocean modeling, supported by NOPP, to make HYCOM a state-of-the-art community ocean model with data assimilation capability that could (1) be used in a wide range of ocean-related research; (2) become the next generation eddy-resolving global ocean prediction system; and (3) be coupled to a variety of other models, including littoral, atmospheric, ice and biochemical. The HYCOM consortium became one of the U.S. components of GODAE, a coordinated international system of observations, communications, modeling, and assimilation that delivers regular, comprehensive information on the state of the oceans (see Chassignet and Verron (2006) for a review). Navy and NOAA applications such as maritime safety, fisheries, the offshore industry, and management of shelf/coastal areas are among the expected beneficiaries of the HYCOM ocean prediction systems (http://www.hycom.org). More specifically, the precise knowledge and prediction of ocean mesoscale features helps the Navy, NOAA, the Coast Guard, the oil industry, and fisheries with endeavours such as ship and submarine routing, search and rescue, oil spill drift prediction, open ocean ecosystem monitoring, fisheries management, and short-range coupled atmosphere-ocean, coastal and near-shore environment forecasting. In addition to operational eddy-resolving global and basin-scale ocean prediction systems for the U.S. Navy and NOAA, respectively, this project offered an outstanding opportunity for
NOAA-Navy collaboration and cooperation ranging from research to the operational level (see Chassignet et al. 2009).
11.5.1 Hybrid Coordinate Generator

In HYCOM, the optimal vertical coordinate distribution of the three vertical coordinate types (pressure, isopycnal, sigma) is chosen at every time step and in every grid column individually. The default configuration of HYCOM is isopycnic in the open stratified ocean, but it makes a dynamically and geometrically smooth transition to terrain-following coordinates in shallow coastal regions and to fixed pressure-level (mass conserving) coordinates in the surface mixed layer and/or unstratified open seas. In doing so, the model takes advantage of the different coordinate types in optimally simulating coastal and open-ocean circulation features (Chassignet et al. 2003, 2006, 2007, 2009). A user-chosen option allows specification of the vertical coordinate separation that controls the transition among the three coordinate systems (Chassignet et al. 2007). The assignment of additional coordinate surfaces to the oceanic mixed layer also allows the straightforward implementation of multiple vertical mixing turbulence closure schemes (Halliwell 2004). The choice of the vertical mixing parameterization is also of importance in areas of strong entrainment, such as overflows (Papadakis et al. 2003; Xu et al. 2006, 2007; Legg et al. 2009). The implementation of the generalized vertical coordinate in HYCOM follows the theoretical foundation set forth in Bleck and Boudra (1981) and Bleck and Benjamin (1993): i.e., each coordinate surface is assigned a reference isopycnal. The model continually checks whether grid points lie on their reference isopycnals and, if not, attempts to move them vertically toward the reference position. However, the grid points are not allowed to migrate when this would lead to excessive crowding of coordinate surfaces. Thus, vertical grid points can be geometrically constrained to remain at a fixed pressure depth while being allowed to join and follow their reference isopycnals in adjacent areas (Bleck 2002). After the model equations are solved, the hybrid coordinate generator relocates the vertical interfaces to restore isopycnic conditions in the ocean interior to the greatest extent possible, while enforcing the minimum thickness requirements between vertical coordinates (see Chassignet et al. (2007) for details). If a layer is less dense than its isopycnic reference density, the generator attempts to move the bottom interface downward so that the flux of denser water across this interface increases density. If the layer is denser than its isopycnic reference density, the generator attempts to move the upper interface upward to decrease density. In both cases, the generator first calculates the vertical distance over which the interface must be relocated so that the volume-weighted density of the original plus new water in the layer equals the reference density. The minimum permitted thickness of each layer at each model grid point is then calculated using the criteria provided by the user, and the final minimum thickness is then calculated using a “cushion” function (Bleck 2002) that produces a smooth transition
from the isopycnic to the p and σ domains. The minimum thickness constraint is not enforced at the bottom in the open ocean, permitting the model layers to collapse to zero thickness there, as in MICOM. Repeated execution of this algorithm at every time step maintains layer density very close to its reference value as long as a minimum thickness does not have to be maintained and diabatic processes are weak. To ensure that a permanent p coordinate domain exists near the surface year round at all model grid points, the reference densities of the uppermost layers are assigned values smaller than any density values found in the model domain. Figure 11.2 illustrates the transition that occurs between p/σ and isopycnic (ρpot) coordinates in the fall and spring in the upper 400 m and over the shelf in the East China and Yellow Seas.
Fig. 11.2  Upper 400 m north-south velocity cross-section along 124.5°E in a 1/25° East China and Yellow Seas HYCOM embedded in a 1/6° North Pacific configuration forced with climatological monthly winds. a In the fall, the water column is stratified over the shelf and can be represented with isopycnals (ρpot). b In the spring, the water column is homogenized over the shelf and the vertical coordinate becomes a mixture of pressure (p) levels and terrain-following (σ) levels. The isopycnic layers are numbered over the shelf; the higher the number, the denser the layer. (From Chassignet et al. 2007)
In the fall, the water column is stratified and can be largely represented with isopycnals; in the spring, the water column is homogenized over the shelf and is represented by a mixture of p and σ coordinates. A particular advantage of isopycnic coordinates is illustrated by the density front formed by the Kuroshio above the peak of the sharp (lip) topography at the shelfbreak in Fig. 11.2a. Since the lip topography is only a few grid points wide, this topography and the associated front are best represented in isopycnic coordinates. In other applications in the coastal ocean, it may be more desirable to provide high resolution from surface to bottom to adequately resolve the vertical structure of water properties and of the bottom boundary layer. Since vertical coordinate choices for open-ocean HYCOM runs typically maximize the fraction of the water column that is isopycnic, it is often necessary to add more layers in the vertical to coastal HYCOM simulations nested within larger-scale HYCOM runs. An example using nested West Florida Shelf simulations (Halliwell et al. 2009) is illustrated in the cross sections in Fig. 11.3. The original vertical discretization is compared to two others with six layers added at the top: one with p coordinates and the other with σ coordinates over the shelf. This illustrates the flexibility with which vertical coordinates can be chosen by the user. Maintaining hybrid vertical coordinates can be thought of as upwind finite volume advection. The original grid generator (Bleck 2002) used the simplest possible scheme of this type, the first-order donor-cell upwind scheme. A major advantage of this scheme is that moving a layer interface does not affect the layer profile in the downwind (detraining) layer, which greatly simplifies re-mapping to isopycnal layers. However, the scheme is diffusive when layers are re-mapped (there is no diffusion when layer interfaces remain at their original location). Isopycnal layers require minimal re-mapping in response to weak interior diapycnal diffusivity, but fixed coordinate layers often require significant re-mapping, especially in regions with significant upwelling or downwelling. Therefore, to minimize diffusion associated with the remapping, the grid generator was first replaced by a piecewise linear method (PLM) with a monotonized central-difference (MC) limiter (van Leer 1977) for layers that are in fixed coordinates, while still using donor-cell upwind for layers that are non-fixed (and hence tending to isopycnal coordinates). PLM replaces the “constant within each layer” profile of donor-cell with a linear profile that equals the layer average at the center of the layer. The slope must be limited to maintain monotonicity. There are many possible limiters, but the MC limiter is one of the more widely used (Leveque 2002). The most recent version of the grid generator uses a weighted essentially non-oscillatory (WENO)-like piecewise parabolic method (PPM) scheme for increased accuracy. The generator has also been modified for situations when there is a too-light layer on top of a too-dense layer, i.e., when each layer attempts to gain mass at the expense of the other. Previously the generator chose each layer half of the time, but in practice the thicker of the two layers tended to gain mass and, over time, the thinner layer tended to become very thin and stay that way. Now the thinner of the two layers always gains mass from the thicker layer, greatly reducing this tendency for layers to collapse.
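The PLM reconstruction with the MC limiter can be illustrated with a short, schematic sketch. The code below is not the HYCOM grid generator (which operates on layers of varying thickness together with the isopycnal-target logic described earlier); it assumes a uniform stack of layers and only shows how the limited linear profile changes the mean value handed to a neighbouring layer when an interface sweeps a fraction of a cell, compared with the donor-cell (piecewise constant) estimate.

```python
import numpy as np

def mc_limited_slopes(q):
    """Monotonized central-difference (MC) limited slopes on a uniform stack of
    layers; s[i] is the total change of the linear profile across layer i."""
    s = np.zeros_like(q)
    dc = 0.5 * (q[2:] - q[:-2])      # centred difference
    dp = q[2:] - q[1:-1]             # one-sided difference toward the layer above
    dm = q[1:-1] - q[:-2]            # one-sided difference toward the layer below
    lim = np.where(dp * dm <= 0.0, 0.0,
                   np.sign(dc) * np.minimum(np.abs(dc),
                                            2.0 * np.minimum(np.abs(dp), np.abs(dm))))
    s[1:-1] = lim                    # boundary layers keep zero slope (donor-cell)
    return s

def detrained_mean(q, s, f):
    """Mean tracer value in the top fraction f (0 < f <= 1) of each layer when an
    interface sweeps that far into it: donor-cell would hand over q itself,
    PLM hands over the mean of the limited linear profile."""
    return q + 0.5 * s * (1.0 - f)

# Example: a smooth density-like profile on 10 equal-thickness layers
q = np.linspace(25.0, 27.7, 10)
s = mc_limited_slopes(q)
f = 0.25                             # interface moves a quarter of a layer
print("donor-cell:", np.round(q, 3))
print("PLM (MC)  :", np.round(detrained_mean(q, s, f), 3))
```

For this smooth profile the limiter leaves the centred slope untouched; near an extremum it returns a zero slope and the scheme falls back to donor-cell, which is what preserves monotonicity.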
Fig. 11.3  Cross-sections of layer density and model interfaces across the West Florida Shelf in a 1/25° West Florida Shelf subdomain covering the Gulf of Mexico east of 87°W and north of 23°N (Halliwell et al. 2009). (From Chassignet et al. 2006, 2007)
11.5.2 The HYCOM Ocean Prediction Systems (http://www.hycom.org)

Data assimilation is essential for ocean prediction because (a) many ocean phenomena are due to nonlinear processes (i.e., flow instabilities) and thus are not a deterministic response to atmospheric forcing; (b) errors exist in the atmospheric forcing; and (c) ocean models are imperfect, including limitations in numerical algorithms and in resolution. Most of the information about the ocean surface’s space-time variability is obtained remotely from instruments aboard satellites (i.e. sea surface height and sea surface temperature), but these observations are insufficient for specifying the subsurface variability. Vertical profiles from expendable bathythermographs (XBT), conductivity-temperature-depth (CTD) profilers, and profiling floats (e.g., Argo, which measures temperature and salinity in the upper 2000 m of the ocean) provide another substantial source of data. Even together, these data sets are insufficient to determine the state of the ocean completely, so it is necessary to use prior statistical knowledge based on past observations as well as our present understanding of ocean dynamics. By combining all of these observations through data assimilation into an ocean model, it is possible, in principle, to produce a dynamically consistent depiction of the ocean. However, to have any predictive capabilities, it is extremely important that the freely evolving ocean model (i.e., non-data-assimilative model) has skill in representing ocean features of interest. To properly assimilate the SSH anomalies determined from satellite altimeter data, the oceanic mean SSH over the altimeter observation period must be provided. In this mean, it is essential that the mean current systems and associated SSH fronts be accurately represented in terms of position, amplitude, and sharpness. Unfortunately, the earth’s geoid is not presently known with sufficient accuracy for this purpose, and coarse hydrographic climatologies (~0.5°–1° horizontal resolution) cannot provide the spatial resolution necessary when assimilating SSH in an eddy-resolving model (horizontal grid spacing of 1/10° or finer). At these scales of interest, it is essential to have the observed means of boundary currents and associated fronts sharply defined (Hurlburt et al. 2008). Figure 11.4 shows the climatological mean derived on a 0.5° grid using surface drifters by Maximenko and Niiler (2005) as well as a mean derived for the 1/12° Navy global HYCOM prediction system (see following section for details). The HYCOM mean was constructed as follows: a 5-year mean SSH field from a non-data assimilative 1/12° global HYCOM run was compared to available climatologies and a rubber-sheeting technique (Carnes et al. 1996) was used to modify the model mean in two regions (the Gulf Stream and the Kuroshio) where the western boundary current extensions were not well represented and where an accurate frontal location is crucial for ocean prediction. Rubber-sheeting involves a suite of computer programs that operate on SSH fields, overlaying contours from a reference field and moving masses of water in an elastic way (hence rubber-sheeting).
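In practice the referencing step amounts to adding the along-track altimeter anomaly to the synthetic mean before forming the observation-minus-background innovation. The sketch below is purely illustrative: all numbers are invented and the three fields are assumed to be already co-located along the track.

```python
import numpy as np

mean_ssh  = np.array([0.40, 0.35, 0.10, -0.20, -0.25])  # synthetic mean SSH (m)
ssh_anom  = np.array([0.05, 0.08, 0.12,  0.02, -0.04])  # altimeter anomaly (m)
model_ssh = np.array([0.42, 0.40, 0.18, -0.22, -0.30])  # model background SSH (m)

obs_ssh    = mean_ssh + ssh_anom      # total SSH presented to the assimilation
innovation = obs_ssh - model_ssh      # observation-minus-background increment
print(np.round(innovation, 3))
```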
Fig. 11.4  Mean SSH (in cm) derived from surface drifters (Maximenko and Niiler 2005) (top panel) and from a non-data assimilative HYCOM run corrected in the Gulf Stream and Kuroshio regions using a rubber-sheeting technique (bottom panel). The RMS difference between the two fields is 9.2 cm. (From Chassignet et al. 2009)
Two systems are currently run in real-time by the U.S. Navy at NAVOCEANO, Stennis Space Center, MS, and by NOAA at NCEP, Washington, D.C. The first system is the NOAA Real Time Ocean Forecast System for the Atlantic (RTOFS-Atlantic), which has been running in real-time since 2005. The Atlantic domain spans 25°S–76°N with a horizontal resolution varying from 4 km near the U.S. coastline to 20 km near the African coast. The system is run daily with one-day nowcasts and five-day forecasts. Prior to June 2007, only the sea surface temperature was assimilated. In June 2007, NOAA implemented the 3D-Var data assimilation of (1) sea surface temperature and sea surface height (JASON, GFO, and soon ENVISAT), (2) temperature and salinity profiles (ARGO, CTD, moorings, etc.), and (3) GOES data. Plans are to expand this system globally using the U.S. Navy configuration described in the following paragraph. The NCEP
RTOFS-Atlantic model data is distributed in real time through NCEP’s operational ftp server (ftp://ftpprd.ncep.noaa.gov) and the NOAA Operational Model Archive and Distribution System (NOMADS, http://nomads6.ncdc.noaa.gov/ncep_data/index.html) server. The latter server also uses OPeNDAP middleware as a data access method. NCEP’s RTOFS-Atlantic model data is also archived at the National Oceanographic Data Center (NODC, http://data.nodc.noaa.gov/ncep/rtofs). The second system is the global U.S. Navy nowcast/forecast system using the 1/12° global HYCOM (6.5 km grid spacing on average, 3.5 km grid spacing at the North Pole, and 32 hybrid layers in the vertical), which has been running in near real-time since December 2006 and in real-time since February 2007. The current ice model is thermodynamic, but it will soon include more physics as it is upgraded to the Polar Ice Prediction System (PIPS, based on the Los Alamos CICE ice model). The model is currently running daily at NAVOCEANO using a part of the operational allocation on the machine. The daily run consists of a 5-day hindcast and a 5-day forecast. The system assimilates (1) SSH (Envisat, GFO, and Jason-1), (2) SST (all available satellite and in-situ sources), (3) all available in-situ temperature and salinity profiles (ARGO, CTD, moorings, etc.), and (4) SSMI sea ice concentration. The three-dimensional multivariate optimum interpolation Navy Coupled Ocean Data Assimilation (NCODA) (Cummings 2005) system is the assimilation technique. The NCODA horizontal correlations are multivariate in geopotential and velocity, thereby permitting adjustments (increments) to the mass field to be correlated with adjustments to the flow field. The velocity adjustments are in geostrophic balance with the geopotential increments, and the geopotential increments are in hydrostatic agreement with the temperature and salinity increments. Either the Cooper and Haines (1996) technique or synthetic temperature and salinity profiles (Fox et al. 2002) can be used for downward projection of SSH and SST. An example of forecast performance is shown in Fig. 11.5. Validation of the results is underway using independent data with a focus on the large-scale circulation features, SSH variability, eddy kinetic energy, mixed layer depth, vertical profiles of temperature and salinity, SST and coastal sea levels (Metzger et al. 2008). Figures 11.6 and 11.7 show examples for the Gulf Stream region while Fig. 11.8 documents the performance of HYCOM in representing the mixed layer depth. HYCOM is also an active participant in the international GODAE comparison of global ocean forecasting systems.
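The balance constraints quoted above can be illustrated in their simplest form: geostrophic velocity increments diagnosed from an SSH increment on a mid-latitude grid, u' = −(g/f) ∂η'/∂y and v' = (g/f) ∂η'/∂x. The sketch below shows only that relation, not the NCODA implementation; the grid and the Gaussian SSH increment are invented for the example.

```python
import numpy as np

g_const, R_earth, Omega = 9.81, 6.371e6, 7.292e-5
lat = np.linspace(30.0, 40.0, 41)                # degrees north
lon = np.linspace(150.0, 160.0, 41)              # degrees east
LAT, LON = np.meshgrid(lat, lon, indexing="ij")

# An eddy-like Gaussian SSH increment of 0.2 m amplitude
eta = 0.2 * np.exp(-(((LAT - 35.0) / 2.0) ** 2 + ((LON - 155.0) / 2.0) ** 2))

f  = 2.0 * Omega * np.sin(np.deg2rad(LAT))                            # Coriolis parameter
dy = R_earth * np.deg2rad(lat[1] - lat[0])                            # metres per grid step in y
dx = R_earth * np.cos(np.deg2rad(LAT)) * np.deg2rad(lon[1] - lon[0])  # metres per grid step in x

deta_dy = np.gradient(eta, axis=0) / dy
deta_dx = np.gradient(eta, axis=1) / dx
u_inc = -(g_const / f) * deta_dy                 # geostrophic u increment (m/s)
v_inc =  (g_const / f) * deta_dx                 # geostrophic v increment (m/s)
print("max |u'| = %.2f m/s" % np.abs(u_inc).max())
```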
11.5.3 Distribution of Global HYCOM Hindcasts and Forecasts

The model outputs from the global U.S. Navy hindcast experiment from November 2003 to present are available through the HYCOM consortium web page, http://www.hycom.org. The HYCOM data distribution team developed and implemented a comprehensive data management and distribution strategy.
Fig. 11.5  Verification of 30-day ocean forecasts: median SSH anomaly correlation vs. forecast length in comparison with the verifying analysis for the global U.S. Navy HYCOM over the world ocean and five subregions. The red curves verify forecasts using operational atmospheric forcing, which reverts toward climatology after five days. The green curves verify “forecasts” with analysis-quality forcing for the duration, and the blue curves verify forecasts of persistence (i.e., no change from the initial state). The plots show median statistics over twenty 30-day HYCOM forecasts initialized during January 2004–December 2005, a period when data from three nadir-beam altimeters, Envisat, GFO and Jason-1, were assimilated. The reader is referred to Hurlburt et al. (2008, 2009) for a more detailed discussion of these results. (From Chassignet et al. 2009)
Fig. 11.6  Surface (top panels) and 700 m (lower panels) eddy kinetic energy from observations (left panels) and HYCOM over the period 2004–2006 (right panels). The observed surface eddy kinetic energy (upper left panel) is from Fratantoni (2001) and the 700 m eddy kinetic energy (lower left panel) is from Schmitz (1996). The units are in cm²/s². Overlaid on the top panels is the Gulf Stream north wall position ±1 standard deviation. (From Chassignet et al. 2009)
This strategy allowed easy and efficient access to the global HYCOM-based ocean prediction system output to (a) coastal and regional modeling groups; (b) the wider oceanographic and scientific community, including climate and ecosystem researchers; and (c) the general public. The outreach system consists of a web server that acts as a gateway to backend data management, distribution, and visualization applications (http://www.hycom.org/dataserver). These applications enable end users to obtain a broad range of services such as browsing of datasets, GIF images, NetCDF files, FTP requests of data, etc. The 130-terabyte HYCOM Data Sharing System is built upon two existing software components: the Open-source Project for a Network Data Access Protocol (OPeNDAP) (Cornillon et al. 2009) and the Live Access Server (LAS) (http://ferret.pmel.noaa.gov/LAS/). These tools and their data distribution methods are described below. In the current setup, the OPeNDAP component provides the middleware necessary to access distributed data, while the LAS functions as a user interface and a product server. The abstraction offered by the OPeNDAP server also makes it possible to define a virtual data set that LAS will act upon, rather than physical files.
Fig. 11.7  Modeled analysis of the sea surface height field on September 8, 2008. The white line represents the independent frontal analysis of sea surface temperature observations performed by the Naval Oceanographic Office. (From Chassignet et al. 2009)
An OPeNDAP “aggregation server” utilizes this approach to append model time steps from many separate files into virtual datasets. The HYCOM Data Service has been in operation for the last four years and has seen a steady increase in the user base. In the last year, the service received approximately 20,000 hits per month. In addition to the numerous requests from educational institutions and researchers, this service has been providing near real-time data products to several private companies in France, the Netherlands, Portugal, and the U.S.
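From the user’s side, such an aggregation behaves like a single remote dataset that can be subset lazily before any data are transferred. The sketch below uses xarray with its OPeNDAP-capable netCDF backend; the URL is a placeholder and the variable and coordinate names (water_temp, depth, lat, lon, time) are assumptions to be replaced by those advertised in the actual server catalogue.

```python
import xarray as xr

url = "https://example.org/thredds/dodsC/hycom_global_analysis"  # placeholder URL
ds = xr.open_dataset(url)          # only metadata is read at this point

# Lazily select a region, then a surface level and a single day, before loading
field = ds["water_temp"].sel(lat=slice(20, 45), lon=slice(260, 300))
sst = field.sel(depth=0.0, time="2009-07-28", method="nearest")
sst.load()                         # the server ships only this subset
print(sst.shape)
```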
11.5.4 Boundary Conditions for Regional and Coastal Models Nested in HYCOM

An important attribute of the data assimilative HYCOM system is its capability to provide boundary conditions to even higher resolution regional and coastal models.
Fig. 11.8  Median bias error (in meters) of mixed layer depth (MLD) calculated from simulated and approximately 66,000 unassimilated observed profiles over the period June 2007–May 2008. Blue (red) indicates a simulated MLD shallower (deeper) than observed; 53% of the simulated MLDs are within 10 m of the observation and these are represented as gray. The basin-wide median bias error is −6.6 m and the RMS error is 40 m. (From Chassignet et al. 2009)
The current horizontal and vertical resolution of the global forecasting system marginally resolves the coastal ocean (7 km at mid-latitudes, with up to 15 terrain-following (σ) coordinates over the shelf), but it is an excellent starting point for even higher resolution coastal ocean prediction efforts. Several partners within the HYCOM consortium evaluated the boundary conditions and demonstrated the value added by the global and basin-scale HYCOM data assimilative system output for coastal ocean prediction models. The inner nested models may or may not be HYCOM (i.e., the nesting procedure can handle any vertical grid choice). Outer model fields are interpolated to the horizontal and vertical grid of the nested model throughout the entire time interval of the nested model simulation at a time interval specified by the user, typically once per day. The nested model is initialized from the first archive file and the entire set of archives provides boundary conditions during the nested run, ensuring consistency between initial and boundary conditions. This procedure has proven to be very robust. Figure 11.9 shows an example of the SST and surface velocity fields from a ROMS (Shchepetkin and McWilliams 2005) West Florida Shelf domain embedded in the U.S. Navy HYCOM ocean prediction system. The Gulf of Mexico Loop Current is the main large-scale ocean feature impacting the West Florida Shelf, and the impact of open boundary conditions on the dynamics and accuracy of the regional model was assessed by Barth et al. (2008). Further examples can be found in Chassignet et al. (2006, 2009).
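At its core, the nesting procedure is interpolation of the outer-model archives in space to the open-boundary points of the inner grid, followed by interpolation in time between archives. The sketch below illustrates just that step for a single boundary segment and a single 2-D field; the coarse field, grids and values are invented, and the real procedure also remaps in the vertical and treats velocities and layer thicknesses.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Outer (coarse) model surface temperature on a 1/4-degree grid: two daily archives
lat_c = np.arange(24.0, 31.25, 0.25)
lon_c = np.arange(-88.0, -80.75, 0.25)
T_day0 = 20.0 + 0.2 * lat_c[:, None] + 0.05 * lon_c[None, :]   # archive at t = 0 days
T_day1 = T_day0 + 0.3                                          # archive at t = 1 day

# Open-boundary points along the southern edge of a finer nested grid
lon_b = np.arange(-86.0, -82.0, 0.02)
pts = np.column_stack([np.full_like(lon_b, 24.5), lon_b])      # (lat, lon) pairs

interp0 = RegularGridInterpolator((lat_c, lon_c), T_day0)
interp1 = RegularGridInterpolator((lat_c, lon_c), T_day1)

def boundary_values(t_days):
    """Outer field interpolated in space to the boundary points, then linearly
    in time between the two daily archives bracketing t_days."""
    w = np.clip(t_days, 0.0, 1.0)
    return (1.0 - w) * interp0(pts) + w * interp1(pts)

print(np.round(boundary_values(0.5)[:5], 3))
```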
Fig. 11.9  Sea surface temperature (°C) and surface velocity fields from the ROMS West Florida Shelf domain (inside the dashed lines) and the HYCOM ocean prediction system (outside the dashed lines). (From Chassignet et al. 2009)
11.5.5 HYCOM Long-Term Development

The long-term goals of the HYCOM consortium for the global domain are to (a) add 3-D and 4-D VAR data assimilation, (b) increase the horizontal resolution of the global domain to 1/25°, (c) implement two-way nesting, (d) implement zero depth coastlines with wetting and drying, and (e) include tides. The scientific goals include, but are not limited to, (a) evaluation of the representation of internal tides in support of field programs, (b) evaluation of the global model’s ability to provide boundary conditions to very high resolution coastal models, (c) interaction of the open ocean with ice, (d) shelf-deep ocean interactions, (e) upper ocean physics including mixed layer/sonic depth representation, and (f) mixing processes. Other research activities will focus on coupled ocean-wave-atmosphere prediction, biogeochemical-optical and tracer/contaminant prediction, ecosystem analysis and prediction, and earth system prediction (i.e., coupled atmosphere-ocean-ice-land).
11.6 Outlook

One of the greatest uncertainties in setting up a data assimilative system is the error one needs to attribute to the numerical model. To a certain extent, the rate at which a model moves away from the assimilative state will provide some indication of the model’s performance. A careful comparison with observations in assessing the
model’s performance with and without data assimilation will help in identifying the model biases and the areas that need major improvements, either in representation or in parameterization. The routine analysis of model forecasts will provide a wealth of information the modeler can use to improve the model’s physics, especially if additional forecasts/hindcasts can be performed after the fact to assess the effectiveness of the changes. Much of the uncertainty associated with ocean prediction can be ascribed to an imperfect knowledge of the ocean and its mechanisms for mitigating or exacerbating changes in the atmosphere and cryosphere. Oceanic predictions rely both on the ability to initialize a model to agree with observed conditions and on the model’s ability to accurately evolve this initial state. There are several classes of numerical circulation models that have achieved a significant level of community management and involvement, including shared development, regular user interaction, and ready availability of software and documentation via the worldwide web. The numerical codes are typically maintained within university user groups or by government laboratories; their development primarily resulted from individual efforts, rather than a cohesive community effort. While this was appropriate in a previous era when oceanic modeling was a smaller enterprise and computer architectures were simpler, the limitations of this approach are increasingly apparent in an era when many diverse demands are being placed upon oceanic models. The time has come for the development of modeling capabilities to become a coherent community effort that both systematically advances the models and supports widespread access. It is also important to state that one cannot separate the effective development of oceanic ecological and biogeochemical models from that of the physical circulation model, and these extended modeling capabilities need to be an integral part of this community effort. Moreover, testing of these models with observations requires advanced inverse methods and data assimilation techniques that must be linked to this effort from the outset. Deliberations among physical and biogeochemical modelers led to the submission in 2006 to the U.S. National Science Foundation of a white paper entitled “Enabling a Community Environment for Advanced Oceanic Modeling”, written by E. Chassignet, S. Doney, R. Hallberg, D. McGillicuddy, and J. McWilliams. It described issues confronted by a large and growing fraction of the ocean modeling community concerning the complexity and redundancy in ocean model development. It also outlined a long-term vision and course of action to address these concerns by proposing the development of a Community Environment for Advanced Ocean Modeling. Such an environment would: • Create a common code base to allow a synthesis of different algorithmic elements. • Provide a test bed to allow for exploration of the merits of different approaches in representing the important model elements, leading to recommendations for best practices. • Provide estimates of model uncertainty by performing, for a given configuration, ensemble calculations with a variety of algorithms, vertical-coordinate and other discretizations, parameterizations, etc.
• Include the core algorithms for evaluation of current practices in marine ecological and biogeochemical modeling. • Make available standard data sets to facilitate comparison to observations as well as algorithm development and testing. • Facilitate linkage with inverse methods for testing models with observations, as well as data assimilation techniques for use in prediction and in the state estimation problem. • Encourage collaboration among model developers to accelerate the pace of designing and testing new algorithms. • Provide rapid community access to model advancements. It is important to make the distinction between the proposed community ocean modeling environment and a single community ocean model. The proposed modeling environment would provide a common, interchangeable code base with minimized restrictions on the algorithms that can be contributed or selected for a specific model application. Whereas many model algorithm developers would find a single community ocean model to be stifling, a community modeling environment should dramatically invigorate the development of new and superior ocean modeling techniques. This environment will offer a much broader range of options than would be possible with a single monolithic model. This diversity of options is critical for selecting the most appropriate configuration for any particular oceanic application. The 10-year vision is to have a broad unification of physical, ecological, and biogeochemical oceanic modeling tools and practices by collecting the expertise of the current sigma, geopotential, and isopycnic/hybrid vertical-coordinate models in a single open and multi-disciplinary software framework. This will allow the greatest possible flexibility for users and synergies for model developers. The environment will promote exploration of novel modeling concepts; more rapid improvement of multi-scale physical, ecological and biogeochemical models; and a stable base for the development of new application services built around a core model framework that can be maintained at the cutting edge of the science. It will also provide a framework for experimentation and rapid implementation of improvements in the parameterization of unresolved processes in oceanic models. The environment will furnish the capability to interchange, combine, and modify choices of vertical coordinate, physical parameterizations, numerical algorithms, parameter settings, and so on. This is in contrast with the usual single model consisting most of the time of a fixed set of parameterizations and algorithms, perhaps with some restricted freedom in the setting of parameters, but with very limited user options to experiment with the model architecture. It is indeed essential to maintain and extend the diversity of available algorithms. The diverse collection of techniques is the gene pool of future oceanic models, and a rich pool provides the best prospect for selecting the models that are optimal for answering specific questions about processes of interest. By comparing the performance of a rich array of configurations, the community will then be able to breed oceanic models that are most skillful at representing the broad assortment of processes important in the simulation of a system as complicated as the ocean. It will also provide an estimate of the model uncertainty by giving an
envelope of solutions resulting from different choices in numerical algorithms; vertical, horizontal, and temporal discretizations; and parameterizations.

Acknowledgements  As stated in the introduction, a lot of material presented in this chapter relies heavily on articles, notes, and review papers by R. Bleck, S. Griffies, A. Adcroft, and R. Hallberg. I would also like to acknowledge contributions by H. Hurlburt and B. Arbic. The development of the HYCOM ocean prediction system was sponsored by the National Oceanographic Partnership Program (NOPP) and the Office of Naval Research (ONR).
References Adcroft A, Hallberg R (2006) On methods for solving the oceanic equations of motion in generalized vertical coordinates. Ocean Model 11:224–233 Arbic BK, Wallcraft AJ, Metzger EJ (2010) Concurrent simulation of the eddying general circulation and tides in a global ocean model. Ocean Model 32:175–187 Barron CN, Martin PJ, Kara AB, Rhodes RC, Smedstad LF (2006) Formulation, implementation and examination of vertical coordinate choices in the Global Navy Coastal Ocean Model (NCOM). Ocean Model 11:347–375 Barth A, Alvera-Azcárate A, Weisberg RH (2008) Benefit of nesting a regional model into a largescale ocean model instead of climatology. Application to the West Florida Shelf. Cont Shelf Res 28:561–573 Bleck R (1978) Finite difference equations in generalized vertical coordinates. Part I: Total energy conservation. Contrib Atmos Phys 51:360–372 Bleck R (1998) Ocean modeling in isopycnic coordinates. In: Chassignet EP, Verron J (eds) Ocean Modeling and Paramterization. NATO Science Series. Kluwer Academic Publishers, Dordrecht, pp€423–448 Bleck R (2002) An oceanic general circulation model framed in hybrid isopycnic-cartesian coordinates. Ocean Model 4:55–88 Bleck R, Benjamin S (1993) Regional weather prediction with a model combining terrain-following and isentropic coordinates. Part I: Model description. Mon Wea Rev 121:1770–1785 Bleck R, Boudra D (1981) Initial testing of a numerical ocean circulation model using a hybrid (quasi-isopycnic) vertical coordinate. J Phys Oceanogr 11:755–770 Bleck R, Chassignet EP (1994) Simulating the oceanic circulation with isopycnic coordinate models. In: Majundar SK, Mill EW, Forbes GS, Schmalz RE, Panah AA (eds) The oceans: Physical-chemical dynamics and human impact. The Pennsylvania Academy of Science, Easton, pp€17–39 Bleck R, Rooth C, Hu D, Smith LT (1992) Salinity-driven thermocline transients in a wind- and thermohaline-forced isopycnic coordinate model of the North Atlantic. J Phys Oceanogr 22:1486–1505 Burchard H, Beckers JM (2004) Non-uniform adaptive vertical grids in one-dimensional numerical ocean models. Ocean Model 6:51–81 Carnes MR, Fox DN, Rhodes RC, Smedstad OM (1996) Data assimilation in a North Pacific Ocean monitoring and prediction system. In: Malanotte-Rizzoli P (ed) Modern approaches to data assimilation in ocean modeling. Elsevier, New York, pp€319–345 Chassignet EP, Garraffo ZD (2001) Viscosity parameterization and the Gulf Stream separation. In: Muller P, Henderson D (eds) From stirring to mixing in a stratified ocean. Proceedings ‘Aha Huliko’a Hawaiian Winter Workshop. University of Hawaii. 15–19 January 2001, pp€37–41 Chassignet EP, Malanotte-Rizzoli P (2000) Ocean circulation model evaluation experiments for the North Atlantic basin. Dyn Atmos Oceans 32:155–432 (Elsevier Science Ltd., special issue)
Chassignet EP, Marshall DP (2008) Gulf Stream separation in numerical ocean models. In: Hecht M, Hasumi H (eds) Eddy-Resolving Ocean Modeling. AGU monograph series, American Geophysical Union, Washington, DC, pp€39–62 Chassignet EP, Verron J (1998) Ocean modeling and parameterization. Kluwer Academic Publishers, Dordrecht, p€451 Chassignet EP, Verron J (2006) Ocean weather forecasting: an integrated view of oceanography. Springer, Dordrecht, p€577 Chassignet EP, Smith LT, Bleck R, Bryan FO (1996) A model comparison: Numerical simulations of the North and Equatorial Atlantic Ocean circulation in depth and isopycnic coordinates. J Phys Oceanogr 26:1849–1867 Chassignet EP, Smith LT, Halliwell GR, Bleck R (2003) North Atlantic simulations with the HYbrid Coordinate Ocean Model (HYCOM): Impact of the vertical coordinate choice, reference density, and thermobaricity. J Phys Oceanogr 33:2504–2526 Chassignet EP, Hurlburt HE, Smedstad OM, Halliwell GR, Wallcraft AJ, Metzger EJ, Blanton BO, Lozano C, Rao DB, Hogan PJ, Srinivasan A (2006) Generalized vertical coordinates for eddyresolving global and coastal ocean forecasts. Oceanography 19:20–31 Chassignet EP, Hurlburt HE, Smedstad OM, Halliwell GR, Hogan PJ, Wallcraft AJ, Baraille R, Bleck R (2007) The HYCOM (HYbrid Coordinate Ocean Model) data assimilative system. J Mar Sys 65:60–83 Chassignet EP, Hurlburt HE, Metzger EJ, Smedstad OM, Cummings J, Halliwell GR, Bleck R, Baraille R, Wallcraft AJ, Lozano C, Tolman HL, Srinivasan A, Hankin S, Cornillon P, Weisberg R, Barth A, He R, Werner F, Wilkin J (2009) U.S. GODAE: Global ocean prediction with the HYbrid Coordinate Ocean Model (HYCOM). Oceanography 22(2):64–75 Clarke AJ (2008) An introduction to the dynamics of El Niño & the Southern oscillation. Amsterdam, Elsevier, p€324 Cooper M, Haines K (1996) Altimetric assimilation with water property conservation. J Geophys Res 101:1059–1078 Cornillon P, Adams J, Blumenthal MB, Chassignet EP, Davis E, Hankin S, Kinter J, Mendelssohm R, Potemra JT, Srinivasan A, Sirott J (2009) NVODS and the development of OPeNDAP—an integrative tool for oceanographic data systems. Oceanography 22(2):116–127 Cox MD (1987) Isopycnal diffusion in a z-coordinate ocean model. Ocean Modelling (unpublished manuscripts) 74:1–5 Cummings JA (2005) Operational multivariate ocean data assimilation. Quart J Royal Met Soc 131:3583–3604 Ezer T, Mellor G (2004) A generalized coordinate ocean model and a comparison of the bottom boundary layer dynamics in terrainfollowing and z-level grids. Ocean Model 6:379–403 Fox DN, Teague WJ, Barron CN, Carnes MR, Lee CM (2002) The modular ocean data analysis system (MODAS). J Atmos Ocean Technol 19:240–252 Fratantoni DM (2001) North Atlantic surface circulation during the 1990’s observed with satellitetracked drifters. J Geophys Res 106:22,067–22,093 Gent PR, McWilliams JC (1990) Isopycnic mixing in ocean circulation models. J Phys Oceanogr 20:150–155 Gent PR, Willebrand J, McDougall TJ, McWilliams JC (1995) Parameterizing eddy-induced tracer transports in ocean circulation models. J Phys Oceanogr 25:463–474 Goff JA, Arbic BK (2010) Global prediction of abyssal hill roughness statistics for use in ocean models from digital maps of paleo-spreading rate, paleoridge orientation, and sediment thickness. Ocean Model 32:36–43. doi:10.1016/j.ocemod.2009.10.001 Greatbatch RJ, Mellor GL (1999) An overview of coastal ocean models. In: Mooers CNK (eds) Coastal ocean prediction. 
American Geophysical Union, Washington, pp€31–57, 526 pages total Griffes SM, Adcroft AJ (2008) Formulating the equations of ocean models. In: Hecht M, Hasumi H (eds) Eddy resolving ocean modeling. Geophysical Monograph Series. American Geophysical Union, Washington, pp€281–318 Griffies SM, Hallberg RW (2000) Biharmonic friction with a Smagorinsky viscosity for use in large-scale eddy-permitting ocean models. Mon Weather Rev 128:2935–2946
Griffies SM, Böning C, Bryan FO, Chassignet EP, Gerdes R, Hasumi H, Hirst A, Treguier A-M, Webb D (2000a) Developments in ocean climate modelling. Ocean Model 2:123–192 Griffies SM, Pacanowski RC, Hallberg RW (2000b) Spurious diapycnal mixing associated with advection in a z-coordinate ocean model. Monthly Weather Rev 128:538–564 Hallberg RW (1995) Some aspects of the circulation in ocean basins with isopycnals intersecting the sloping boundaries, Ph.D. thesis, University of Washington, Seattle, p€244 Hallberg RW (1997) Stable split time stepping schemes for large-scale ocean modelling. J Comput Phys 135:54–65 Hallberg RW (2005) A thermobaric instability of Lagrangian vertical coordinate ocean models. Ocean Model 8:279–300 Hallberg RW, Adcroft A (2009) Reconciling estimates of the free surface height in Lagrangian vertical coordinate ocean models with mode-split time stepping. Ocean Model 29:15–26 Halliwell G (2004) Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid Coordinate Ocean Model (HYCOM). Ocean Model 7:285–322 Halliwell GR Jr, Barth A, Weisberg RH, Hogan P, Smedstad OM, Cummings J (2009) Impact of GODAE products on nested HYCOM simulations of the West Florida Shelf. Ocean Dyn 59:139–155 Hecht MW, Hasumi H (2008) Ocean modeling in an eddying regime. Geophysical monograph series, vol€7. American Geophysical Union, Washington, p€409 Hecht MW, Hunke E, Maltrud ME, Petersen MR, Wingate BA (2008) Lateral mixing in the eddying regime and a new broad-ranging formulation. In: Hecht, Hasumi (eds) Ocean modeling in an eddying regime. AGU Monograph Series. AGU, Washington, pp.€339–352 Hurlburt HE, Chassignet EP, Cummings JA, Kara AB, Metzger EJ, Shriver JF, Smedstad OM, Wallcraft AJ, Barron CN (2008) Eddy-resolving global ocean prediction. In: Hecht M, Hasumi H (ed) “Ocean Modeling in an Eddying Regime”. Geophysical monograph 177. American Geophysical Union, Washington, pp€353–381 Hurlburt HE, Brassington GB, Drillet Y, Kamachi M, Benkiran M, Bourdalle-Badie R, Chassignet EP, LeGalloudec O, Lellouche JM, Metzger EJ, Oke PR, Pugh T, Schiller A, Smedstad OM, Tranchant B, Tsujino H, Usui N, Wallcraft AJ (2009) High resolution global and basin-scale ocean analyses and forecasts. Oceanography 22(3):110–127 Iselin CO (1939) The influence of vertical and lateral turbulence on the characteristics of the waters at mid-depths. Eos Trans Am Geophys Union 20:414–417 Jakobsson M, Cherkis N, Woodward J, Coakley B, Macnab R (2000) A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions. Am Geophys Union 81(9):89, 93, 96 Ledwell JR, Watson AJ, Law CS (1993) Evidence for slow mixing across the pycnocline from an open-ocean tracer-release experiment. Nature 364:701–703 Lee MM, Coward AC, Nurser AJ (2002) Spurious diapycnal mixing of the deep waters in an eddypermitting global ocean model. J Phys Oceanogr 32:1522–1535 Legg S, Chang Y, Chassignet EP, Danabasoglu G, Ezer T, Gordon AL, Griffes S, Hallberg R, Jackson L, Large W, Özgökmen T, Peters H, Price J, Riemenschneider U, Wu W, Xu X, Yang J (2009) Improving oceanic overflow representation in climate models: the Gravity Current Entrainment Climate Process Team. Bull Am Met Soc 90(4):657–670. doi:10.1175/2008BA MS2667.1 Leveque, R.J. (2002) Finite volume methods for hyperbolic problems. Cambridge University Press, Cambridge, p€578 Maltrud ME, McClean JL (2005) An eddy resolving global 1/10_ ocean simulation. Ocean Model 8:31–54 Maximenko NA, Niiler PP (2005). 
Hybrid decade-mean sea level with mesoscale resolution. In: Saxena N (ed) “Recent Advances in Marine Science and Technology”. PACON International, Honolulu, pp.€55–59 McDougall TJ, Church JA (1986) Pitfalls with the numerical representation of isopycnal and diapycnal mixing. J Phys Oceanogr 16:196–199
Meincke JC, Le Provost, Willebrand J (2001) Dynamics of the North Atlantic Circulation (DYNAMO). Prog Oceanogr 48:N°2–3 Metzger EJ, Smedstad OM, Thoppil P, Hurlburt HE, Wallcraft AJ, Franklin DS, Shriver JF, Smedstad LF (2008) Validation Test Report for Global Ocean Prediction System V3.0— 1/12°HYCOM/NCODA: Phase I, NRL Memo. Report, NRL/MR/7320—08-9148 Montgomery RB (1940) The present evidence on the importance of lateral mixing processes in the ocean. Bull Am Meteor Soc 21:87–94 Oberhuber JM (1993) Simulation of the atlantic circulation with a coupled sea ice-mixed layerisopycnal general circulation model. Part I: model description. J Phys Oceanogr 23:808–829 Papadakis MP, Chassignet EP, Hallberg RW (2003) Numerical simulations of the Mediterranean Sea outflow: Impact of the entrainment parameterization in an isopycnic coordinate ocean model. Ocean Model 5:325–356 Philander SGH (1990) El Niño, La Niña, and the Southern Oscillation. Academic Press, New York, p€293 Redi MH (1982) Oceanic mixing by coordinate rotation. J Phys Oceanogr 12:87–94 Schmitz WJ (1996) On the World Ocean Circulation. Vol.€1: Some global features/North Atlantic circulation. Woods Hole Oceanographic Institute Tech. Rep. WHOI-96–03. p€141 Schopf PS, Loughe A (1995) A reduced-gravity isopycnal ocean model: Hindcasts of El Niño. Mon Wea Rev 123:2839–2863 Shchepetkin AF, McWilliams JC (2005) The Regional Ocean Modeling System (ROMS): A splitexplicit, free-surface, topography-following coordinates ocean model. Ocean Model 9:347–404 Shriver JF, Hurlburt HE (2000) The effect of upper ocean eddies on the non-steric contribution to the barotropic mode. Geophys Res Lett 27:2713–2716 Shriver JF, Hurlburt HE, Smedstad OM, Wallcraft AJ, Rhodes RC (2007) 1/32°real-time global ocean prediction and value-added over 1/16 resolution. J Mar Sys 65:3–26 Smith WHF, Sandwell DT (1997) Global seafloor topography from satellite altimetry and ship depth soundings: evidence for stochastic reheating of the oceanic lithosphere. Science 277:1956–1962 Smith WHF, Sandwell DT (2004) Conventional bathymetry, bathymetry from space, and geodetic altimetry. Oceanography 17:8–23 Song YT, Haidvogel DB (1994) A semi-implicit ocean circulation model using topography-following coordinate. Journal of Computational Physics 115:228–244 Song, YT, Hou TY (2006) Parametric vertical coordinate formulation for multiscale, Boussinesq, and non-Boussinesq ocean modeling. Ocean Model 11:298–332 Stammer D, Wunsch C, Ponte RM (2000) De-aliasing of global high frequency barotropic motions in altimeter observations. Geophys Res Lett 27:1175–1178 Sun S, Bleck R, Rooth CG, Dukowicz J, Chassignet EP, Killworth P (1999) Inclusion of thermobaricity in isopycnic-coordinate ocean models. J Phys Oceanogr 29:2719–2729 van Leer B (1977) Towards the ultimate conservative difference scheme IV: a new approach to numerical numerical convection. J Comput Phys 23:276–299 Veronis G (1975) The role of models in tracer studies. In: Numerical models of ocean circulation. National Academy of Sciences, Washington, pp.€133–146 Wallcraft AJ, Kara AB, Hurlburt HE, Rochford PA (2003) NRL Layered Ocean Model (NLOM) with an embedded mixed layer sub-model: formulation and tuning. J Atmos Oceanic Technol 20:1601–1615 Willebrand J, Barnier B, Böning C, Dieterich C, Killworth PD, Le Provost C, Jia Y, Molines JM, New AL (2001) Circulation characteristics in three eddy-permitting models of the North Atlantic. Prog Oceanogr 48:123–161 Xu X, ChangYS, Peters H, Özgökmen TM, Chassignet EP (2006). 
Parameterization of gravity current entrainment for ocean circulation models using a high-order 3D nonhydrostatic spectral element model. Ocean Model 14:19–44 Xu X, Chassignet EP, Price JF, Özgökmen TM, Peters H (2007) A regional modeling study of the entraining Mediterranean outflow. J Geophys Res 112:C12005. doi:10.1029/2007JC004145
Chapter 12
Marine Biogeochemical Modelling and Data Assimilation
Richard J. Matear and E. Jones
Abstract  The inclusion of biogeochemistry into the Global Ocean Data Assimilation Experiment systems represents an exciting opportunity that involves significant challenges. To help articulate these challenges we review marine biogeochemical modeling and the existing applications of biogeochemical data assimilation. The challenges of biogeochemical data assimilation stem from the large model errors associated with biogeochemical models, the computational demands of the global data assimilation systems, and the strong non-linearity between biogeochemical state variables. We use the ocean state estimation problem to outline an approach to adding biogeochemical data assimilation to the Global Ocean Data Assimilation Experiment systems. Our approach allows the biogeochemical model parameters to be spatially and temporally varying to enable the data assimilation system to track the observed biogeochemical fields. The approach is based on addressing the challenges of biogeochemical data assimilation to improve both the state estimation of the biogeochemical fields and the underlying biogeochemical model.
12.1 Introduction

Marine biogeochemical (BGC) modelling is a key approach to helping us understand the biochemical processes responsible for the transfer of nutrients and carbon between inorganic and organic pools. Quantifying these biochemical processes is essential to understanding carbon cycling in the ocean, the air-sea exchange of carbon, the impact of climate variability and change on marine ecosystems, and the link between ocean physics and ocean biology.
R. J. Matear
CSIRO Wealth from Oceans National Research Flagship, CSIRO Marine and Atmospheric Research, Hobart, Australia
e-mail: [email protected]
Data assimilation represents a new and exciting tool to advance ocean biogeochemical studies. The coupling of physical and biogeochemical data assimilation is a natural evolution of the Global Ocean Data Assimilation Experiment (GODAE). Data assimilation fuses models with a diverse set of observations to provide a more consistent view of the physical and biological state of the ocean. Extending the GODAE effort to include BGC data assimilation presents an exciting opportunity to expand the community of researchers interested in using ocean data assimilation products, and Brasseur et al. (2009) provide a summary of BGC applications of data assimilation. However, delivering these new BGC data assimilation products is not a trivial task. Here we will focus on the challenges of utilizing the existing physical ocean data assimilation systems to include BGC data assimilation. To help set the context of this discussion we will first briefly review the basic components of the biogeochemical models that could be incorporated into GODAE data assimilation systems. Second, we will briefly discuss previous BGC data assimilation efforts. Finally, we discuss the challenges of extending the existing GODAE data assimilation systems to include biogeochemical models to deliver biogeochemical data products. These tasks will be computationally demanding since the GODAE data assimilation systems already require substantial computing resources, and adding BGC to these systems will only further increase these requirements. Therefore, computational constraints will also be a factor in achieving BGC data assimilation.
12.2 Biogeochemical Modelling

Marine biota play an important role in nutrient and carbon cycling in the ocean. Our interest in prognostic marine biogeochemical models arises from the need to better understand, quantify, and eventually predict the ocean’s role in the global carbon cycle and the biological processes involved in the cycling of carbon in the oceans. Here, we will focus on BGC models that involve carbon and nutrient cycling, with a link to the lowest trophic levels of the marine ecosystem. This is clearly not the only application of BGC modelling; other applications include the prediction of harmful algal blooms (Franks 1997), the quantitative understanding of oceanic food webs up to fish (Brown et al. 2010), and the possible impact of marine sulfur emissions on the formation of cloud condensation nuclei (Gabric et al. 1998). To articulate what BGC modelling represents, we have chosen to discuss a simple model that describes nitrogen cycling and its links to the lowest trophic levels of the marine ecosystem. The key processes to capture in the model are the biological uptake of nitrogen as well as the subsequent remineralization of the organic matter back into an inorganic form. The example BGC model is based on the model that was applied to the macronutrient-replete region south of Tasmania to investigate phytoplankton dynamics (Kidston et al. 2010). The BGC model could easily be extended to carbon and other nutrients, but even in its simple form it can deliver useful BGC data products like primary productivity, which underpins the whole marine ecosystem (Brown et al. 2010).
The BGC model naturally sub-divides vertically into two domains: the photic zone, where light is sufficient to allow phytoplankton photosynthesis, and the deeper aphotic zone, where there is no photosynthesis. For the example BGC model, the focus is on the photic zone, where the evolution of the four BGC state variables, nitrate (N), phytoplankton (P), zooplankton (Z) and detritus (D), is described by the following equations. For completeness the equations are presented here; for more detail refer to Kidston et al. (2010). The model equations describe the evolution of the BGC state variables in the mixed layer in mmol N/m3, where M is the mixed layer depth, and Table 12.1 summarizes the BGC model parameters.
$$\frac{dP}{dt} = \bar{J}(M,t,N)\,P \;-\; G(P,Z) \;-\; \mu_P P \;-\; \frac{m + h^{+}(t)}{M}\,P \qquad (12.1)$$

$$\frac{dZ}{dt} = \gamma_1\, G(P,Z) \;-\; \gamma_2 Z \;-\; \mu_Z Z^{2} \;-\; \frac{h^{+}(t)}{M}\,Z \qquad (12.2)$$

$$\frac{dD}{dt} = (1-\gamma_1)\, G(P,Z) \;+\; \mu_P P \;+\; \mu_Z Z^{2} \;-\; \mu_D D \;-\; w_D\frac{dD}{dz} \;-\; \frac{m + h^{+}(t)}{M}\,D \qquad (12.3)$$

$$\frac{dN}{dt} = \mu_D D \;+\; \gamma_2 Z \;+\; \mu_P P \;-\; \bar{J}(M,t,N)\,P \;+\; \frac{m + h^{+}(t)}{M}\,(N_0 - N) \qquad (12.4)$$
Table 12.1  Model parameters of the 0-D BGC model. The values come from the optimization in the Subantarctic Zone south of Tasmania at the site P1 (Kidston et al. 2010)

Parameter                                Symbol   Value          Units
Phytoplankton model parameters
  Initial slope of the P-I curve         α        0.256          day^-1/(W m^-2)
  Photosynthetically active radiation    PAR      0.43           –
  Light attenuation due to water         kw       0.04           m^-1
  Maximum growth rate parameter          a        0.27           day^-1
  Maximum growth rate parameter          b        1.066          –
  Maximum growth rate parameter          c        1.0            °C^-1
  Half-saturation constant for N uptake  k        0.7            mmol N m^-3
  Phytoplankton mortality                μP       0.01           day^-1
Zooplankton model parameters
  Assimilation efficiency                γ1       0.925          –
  Maximum grazing rate                   g        1.575          day^-1
  Prey capture rate                      ε        1.6            (mmol N m^-3)^-2 day^-1
  Quadratic mortality                    μZ       0.34           (mmol N m^-3)^-1 day^-1
  Excretion                              γ2       0.01 b^(cT)    day^-1
Detritus model parameters
  Remineralisation rate                  μD       0.048 b^(cT)   day^-1
  Sinking velocity                       wD       18.0           m/day
The phytoplankton equation (Eq. 12.1) includes phytoplankton growth (J̄(M, t, N)P), loss due to zooplankton grazing (G(P, Z)) and phytoplankton mortality (μP P), and changes due to ocean physics (last term in Eq. 12.1). Ocean physics includes the dilution of P as the mixed layer depth M deepens with time (h+(t) = max(0, dM/dt)), while shoaling of M has no impact on P because no new water is added. The m in the ocean physics terms represents the vertical diffusive loss of phytoplankton from the mixed layer. The key environmental drivers of phytoplankton growth are temperature (T), light (I) and nitrate concentration (N), and the growth rate is given by the following equations.
$$\bar{J}(M,t,N) = \min\left( J(M,t),\; J_{\max}\,\frac{N}{N+k} \right) \qquad (12.5)$$

$$J(M,t) = \frac{1}{24M}\int_0^{24}\!\!\int_0^{M} \frac{J_{\max}\,\alpha I(z,t)}{\left[J_{\max}^2 + \left(\alpha I(z,t)\right)^2\right]^{1/2}}\; dz\, dt \qquad (12.6)$$

$$I(z,t) = \mathrm{PAR}\; I(0,t)\, e^{-k_w z} \qquad (12.7)$$

$$J_{\max} = a\, b^{cT} \qquad (12.8)$$
The light levels are influenced by the daily variation in the incident solar radiation at the surface of the ocean (I(0, t)), the fraction of the solar radiation available for photosynthesis (PAR), the depth of the mixed layer, and the extinction of light with depth (e^(−kw z)). In these equations Jmax is the maximum phytoplankton growth rate under unlimited light and nitrate; the realized growth rate is reduced at low nitrate concentrations. The zooplankton equation (Eq. 12.2) represents the balance between growth due to phytoplankton grazing (G(P, Z)) and losses due to zooplankton excretion (γ2 Z), mortality (μZ Z²) and ocean physics (last term). The grazing of phytoplankton by zooplankton is given by the following equation
$$G(P,Z) = \frac{g\,\varepsilon P^2}{g + \varepsilon P^2}\,Z \qquad (12.9)$$
Only a fraction of the grazed phytoplankton (γ1) goes directly into zooplankton growth, while the remainder is transferred to detritus. For detritus (Eq. 12.3), there is an input from phytoplankton mortality and from zooplankton grazing and mortality, while detritus decomposition (μD D) and sinking (wD dD/dz) provide losses. The last term in the D equation represents ocean physics and is identical to that in Eq. 12.1. The detrital sinking term "exports" nitrogen from the model domain. The sinking of detritus into the ocean interior and its subsequent decomposition
back into its inorganic form (nitrate) creates a vertical gradient in nitrate that increases with depth. For nitrate (Eq. 12.4), phytoplankton uptake provides the loss, detritus remineralization provides a source, and there is an additional external supply due to the ocean physics term (((m + h⁺(t))/M)(N0 − N)), which resupplies nitrate to the domain at the specified nitrate concentration below the mixed layer (N0). The physical processes supplying sub-surface water to the upper ocean are important for providing nitrate for phytoplankton growth, and they are an important link between the physical and biological systems.

The simple BGC model can be expanded to include multiple nutrients (e.g. carbon, alkalinity, iron, phosphate) and multiple phytoplankton (e.g. size- and function-dependent), zooplankton and detritus state variables (e.g. Vichi et al. 2007). The additional BGC state variables add complexity to the model and introduce additional model parameters that must be specified. Hence, there is a compromise between model complexity and the information available to constrain the BGC model (Matear 1995). Further, the parameterizations in the BGC model often rely on empirical relationships, which differs from the ocean physical model, whose governing equations have a strong theoretical foundation (e.g. the Navier-Stokes equations). Therefore, the trade-off between using a simple versus a complex biological model comes down to the questions one is trying to address.

The simple BGC model only represents processes occurring in the mixed layer, where the fields are well mixed and the system can be configured as a 0D system. The only physical processes included in the BGC model are the vertical supply of nitrate due to mixed layer deepening and the vertical mixing between the mixed layer and the deep water. If one wanted to include this BGC model in a GODAE system, it would require explicitly resolving the vertical dimension (e.g. Schartau and Oschlies 2003a) and coupling the BGC state variables with the ocean circulation (e.g. Oschlies and Schartau 2005). In such a formulation, the evolution of the BGC state variables would be influenced by the biological processes described above and by the ocean dynamics, which would transport the BGC state variables around the ocean. Although the 3D BGC model would couple the physical and biogeochemical processes to evolve the BGC state variables, it is important to note that within the photic zone the biological transformations are generally much greater than the physical influences (e.g. phytoplankton doubling occurs over days, whereas ocean advection acts over weeks in global ocean models). Therefore, within the photic zone and over daily time-scales there is the potential to focus on the biogeochemical processes. There are many examples where the BGC model is applied only in the vertical dimension, focusing just on processes occurring in the mixed layer. The dominance of the biological processes over the physics fails in high-flow regions where horizontal advection can transport the biological pools; this is clearly evident in ocean colour imagery of Chlorophyll a, where the spiral patterns of Chlorophyll a attest to the importance of the ocean dynamics (Fig. 12.1). Below the photic zone, in the aphotic zone, the physics and the biological processes are both important to the evolution of the BGC state variables and generally it is not possible to neglect either (Fig. 12.2).
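For orientation, a minimal Python sketch of the 0D model tendencies (Eqs. 12.1–12.9) is given below, with the parameter values of Table 12.1 collected in a dictionary. The function and variable names, the idealized daylight cycle, and the approximation of the detrital sinking term wD dD/dz by wD D/M are our own assumptions for this sketch; the forcing (mixed-layer depth M, its deepening rate h⁺, diffusive exchange m, temperature T, surface irradiance and sub-mixed-layer nitrate N0) must be supplied externally. This is not the code used by Kidston et al. (2010).

```python
import numpy as np

# Table 12.1 parameter values collected in a dict; the names are our own shorthand.
PARAMS = {
    "alpha": 0.256,    # initial slope of the P-I curve [day^-1/(W m^-2)]
    "PAR": 0.43,       # photosynthetically active fraction of surface radiation [-]
    "k_w": 0.04,       # light attenuation due to water [m^-1]
    "a": 0.27, "b": 1.066, "c": 1.0,   # maximum growth rate parameters (Eq. 12.8)
    "k": 0.7,          # half-saturation constant for N uptake [mmol N/m3]
    "mu_P": 0.01,      # phytoplankton mortality [day^-1]
    "gamma_1": 0.925,  # assimilation efficiency [-]
    "g": 1.575,        # maximum grazing rate [day^-1]
    "eps": 1.6,        # prey capture rate [(mmol N/m3)^-2 day^-1]
    "mu_Z": 0.34,      # quadratic zooplankton mortality [(mmol N/m3)^-1 day^-1]
    "gamma_2": 0.01,   # excretion [day^-1], scaled by b**(c*T) as in Table 12.1
    "mu_D": 0.048,     # remineralisation rate [day^-1], scaled by b**(c*T)
    "w_D": 18.0,       # detrital sinking velocity [m/day]
}

def growth_rate(M, N, T, I0, p, nz=20, nt=12):
    """Nutrient- and light-limited growth J-bar of Eqs. 12.5-12.8.  The depth and time
    integrals of Eq. 12.6 are approximated by midpoint sums, and the daily light cycle
    is an idealized 12-hour half-sine with an assumed noon surface irradiance I0."""
    J_max = p["a"] * p["b"] ** (p["c"] * T)                          # Eq. 12.8
    z = (np.arange(nz) + 0.5) * M / nz                               # depth midpoints [m]
    t = (np.arange(nt) + 0.5) * 24.0 / nt                            # hour midpoints
    I_surf = I0 * np.maximum(0.0, np.sin(2 * np.pi * (t - 6.0) / 24.0))
    I = p["PAR"] * I_surf[None, :] * np.exp(-p["k_w"] * z[:, None])  # Eq. 12.7
    J_I = J_max * p["alpha"] * I / np.sqrt(J_max**2 + (p["alpha"] * I) ** 2)
    return min(J_I.mean(), J_max * N / (N + p["k"]))                 # Eqs. 12.6 and 12.5

def grazing(P, Z, p):
    """Zooplankton grazing on phytoplankton, Eq. 12.9."""
    return p["g"] * p["eps"] * P**2 / (p["g"] + p["eps"] * P**2) * Z

def npzd_rhs(state, forcing, p):
    """Tendencies of the mixed-layer state (N, P, Z, D) following Eqs. 12.1-12.4.
    The detrital sinking export is approximated here by w_D * D / M, an assumption
    of this 0D sketch."""
    N, P, Z, D = state
    M, h_plus, m, T, I0, N0 = (forcing[k] for k in ("M", "h_plus", "m", "T", "I0", "N0"))
    J = growth_rate(M, N, T, I0, p)
    G = grazing(P, Z, p)
    temp = p["b"] ** (p["c"] * T)                    # temperature scaling from Table 12.1
    gamma_2, mu_D = p["gamma_2"] * temp, p["mu_D"] * temp
    dil = (m + h_plus) / M
    dP = J * P - G - p["mu_P"] * P - dil * P                                  # Eq. 12.1
    dZ = p["gamma_1"] * G - gamma_2 * Z - p["mu_Z"] * Z**2 - h_plus / M * Z   # Eq. 12.2
    dD = ((1 - p["gamma_1"]) * G + p["mu_P"] * P + p["mu_Z"] * Z**2
          - mu_D * D - p["w_D"] * D / M - dil * D)                            # Eq. 12.3
    dN = mu_D * D + gamma_2 * Z + p["mu_P"] * P - J * P + dil * (N0 - N)      # Eq. 12.4
    return np.array([dN, dP, dZ, dD])
```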
Fig. 12.1  A 1-day SeaWiFS ocean colour image of surface Chlorophyll a (mg Chl a/m3) off western Australia on May 4, 2000. The contour lines denote the sea surface height anomaly (SSHA) field from the same period. Note the two anti-cyclonic eddies (negative SSHA closed contours) connecting with the shelf and transporting high Chlorophyll a water off-shore. (Moore et al. 2007)
Fig. 12.2  The seasonal evolution of the 4 state variables of the 0D BGC model at the SAZ-Sense P1 site for the model parameter values given in Table 12.1. Top: nitrate concentration; bottom: phytoplankton, zooplankton and detritus concentrations, all in mmol N/m3
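A seasonal cycle qualitatively like that in Fig. 12.2 can be produced by integrating the 0D sketch above under idealized forcing. Everything in the sketch below — the sinusoidal mixed-layer depth, temperature and light, the constant diffusive exchange and sub-mixed-layer nitrate, and the initial state — is an assumption chosen for illustration, not the SAZ-Sense P1 forcing used to generate Fig. 12.2; it simply shows how the pieces fit together.

```python
import numpy as np
from scipy.integrate import solve_ivp

def forcing_at(day):
    """Idealized Southern Hemisphere seasonal forcing (assumed, for illustration):
    a shallow, warm, well-lit mixed layer in austral summer and a deep winter mixed layer."""
    phase = 2 * np.pi * (day - 15.0) / 365.0
    M = 75.0 - 50.0 * np.cos(phase)                     # mixed-layer depth, 25-125 m
    dMdt = 50.0 * 2 * np.pi / 365.0 * np.sin(phase)     # its rate of change [m/day]
    return {"M": M,
            "h_plus": max(0.0, dMdt),                   # entrainment only when deepening
            "m": 0.1,                                    # diffusive exchange [m/day] (assumed)
            "T": 10.0 + 3.0 * np.cos(phase),             # mixed-layer temperature [deg C]
            "I0": 150.0 + 100.0 * np.cos(phase),         # noon surface irradiance [W m^-2]
            "N0": 12.0}                                  # nitrate below the mixed layer [mmol N/m3]

y0 = [10.0, 0.1, 0.1, 0.01]                              # initial N, P, Z, D [mmol N/m3]
sol = solve_ivp(lambda t, y: npzd_rhs(y, forcing_at(t), PARAMS),
                (0.0, 365.0), y0, max_step=0.5)          # one model year
i_peak = sol.y[1].argmax()
print("peak phytoplankton %.2f mmol N/m3 on day %.0f" % (sol.y[1][i_peak], sol.t[i_peak]))
```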
12.3 Biogeochemical Data Assimilation

The application of data assimilation separates into two types of problems: (1) parameter estimation and (2) state estimation. The two approaches reflect different philosophies on how to fuse the BGC model with the observations, with both delivering useful but different information. In both approaches, data are used to constrain the evolution of the state variables, but in parameter estimation it is the model parameters that are modified to fit the constraints, while in state estimation the state variables are modified to fit the observations. The state estimation approach generally forms the foundation of GODAE physical data assimilation, with such efforts delivering either ocean forecast or reanalysis products of the time-evolving 3D ocean physical state. However, BGC data assimilation studies have tended to focus on data assimilation for parameter estimation. Although a complete description of the two approaches is beyond the scope of this chapter, the following is a brief summary of their application.
12.3.1 Parameter Estimation

An obvious feature of the simple BGC model discussed in Sect. 12.2 is the large number of model parameters that must be specified to simulate the BGC fields. The setting of these parameters can be partially accomplished from observations, but for many parameters no direct observational estimate is available. Further, even for the more observable parameters, like those that control phytoplankton growth (e.g. α and Jmax), much uncertainty still exists, because either the parameter values change with time or the values determined for individual phytoplankton species may not be applicable to the entire ecosystem. The inability to directly specify all the model parameters forces one to determine the parameter values by tuning the model to reproduce the observations, which is a tedious and time-consuming approach. The attraction of data assimilation is that it provides a means to generate a set of model parameters that reflects the observations, determines the values of the poorly known parameters, and provides insight into which model parameters are constrained by the observations (Kidston et al. 2010; Schartau and Oschlies 2003b). There is now a long list of studies which have used data assimilation methods to estimate model parameters of ecosystem models (see Gregg et al. 2009 for a list of these studies). There is even a webpage set up to explore parameter estimation with a suite of different BGC models at a number of different sites (http://www.ccpo.odu.edu/marjy/Testbed/Workshop1.html). A recent model comparison study highlights some of the issues in using BGC models (Friedrichs et al. 2007). In their study, they assessed the ability of 12 different BGC models of varying complexity to fit observations from two different sites.
They showed that all models performed equally well when calibrated at one site, but only models with multiple plankton state variables were able to use the same model parameters at both sites. The study emphasizes the need for spatial variability in the model parameters to account for different ecological regimes. Consequently, no single unique set of model parameters exists for the included ecosystem components, such as phytoplankton, over the entire ocean, and flexibility in the model parameters needs to be incorporated in the model application.

The uncertainty in the biological model is not just in the model parameters but extends to the choice of equations used to describe the biological system (Franks 2009). Both parameter and model formulation uncertainty introduce large model error in BGC modelling, which lends itself to parameter estimation studies that explore the parameter space, model complexity and model formulation (e.g. Matear 1995). The ability to address all three of these issues demonstrates the value of the parameter estimation approach for BGC data assimilation. In addition, parameter estimation can provide insight into which observations are critical to building and constraining more realistic models, and can identify the critical model parameters required to reproduce the observations (Kidston et al. 2010). The latter result provides a convenient way to identify a subset of critical model parameters, which capture the key observed dynamics of the biological system, and then explore how these parameters affect the dynamics of the system (e.g. Friedrichs et al. 2006). One important aspect to note with the parameter estimation approach is that unrealistic parameter values may be estimated because important processes are excluded from the model formulation (these are called structural errors in the model). Therefore, the estimated model parameter values must be assessed ecologically and deemed plausible; otherwise the formulated model has structural errors.
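As a concrete (and deliberately small) illustration of parameter estimation, the sketch below adjusts two uncertain grazing parameters of the 0D model (g and ε) so that the simulated phytoplankton matches a phytoplankton time series in a least-squares sense, reusing the npzd_rhs, PARAMS and forcing_at sketches given earlier. The synthetic observations, the choice of free parameters and the assumed observation error are inventions of the example; the studies cited above use a variety of more sophisticated optimization and error models.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

obs_days = np.arange(0.0, 365.0, 10.0)       # assumed sampling times [days]
y0 = [10.0, 0.1, 0.1, 0.01]                  # assumed initial N, P, Z, D [mmol N/m3]

def simulate_P(p):
    """Phytoplankton concentration at the observation times for parameter set p."""
    sol = solve_ivp(lambda t, y: npzd_rhs(y, forcing_at(t), p),
                    (0.0, 365.0), y0, t_eval=obs_days, max_step=0.5)
    return sol.y[1]

# Synthetic "observations": a reference run with the Table 12.1 parameters plus 20% noise.
rng = np.random.default_rng(0)
P_obs = simulate_P(PARAMS) * (1.0 + 0.2 * rng.standard_normal(obs_days.size))

def residuals(theta):
    """Misfit for trial grazing parameters, scaled by an assumed obs error of 0.05 mmol N/m3."""
    trial = dict(PARAMS, g=theta[0], eps=theta[1])
    return (simulate_P(trial) - P_obs) / 0.05

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.1, 0.1], [5.0, 5.0]))
print("estimated g = %.2f day^-1, eps = %.2f (mmol N/m3)^-2 day^-1" % tuple(fit.x))
```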
12.3.2 Ocean State Estimation

The attraction of state estimation data assimilation is that it provides a way to incorporate both physical and biological observations into numerical models to obtain an evolution of the BGC fields that is dynamically consistent with the observations, and it provides a tool to extend the observations in both space and time (Lee et al. 2009). State estimation is applied to overcome limitations in the model by correcting the ocean state to produce a more realistic evolution (Natvik and Evensen 2003a). The approach provides a way of limiting the impact of model errors (parameter, formulation, initialization and forcing) to better hindcast and forecast the ocean state. The study of Gregg (2008) provides a nice review of sequential BGC data assimilation studies, which I briefly summarize here. Not surprisingly, the focus of these studies is on assimilating ocean colour surface Chlorophyll a concentrations into BGC models, since this data product provides the best spatial and temporal coverage of the ocean biological system.
The first example of sequential data assimilation directly inserted CZCS chlorophyll into a 3-dimensional model of the southeast US coast (Ishizaka 1990). The direct insertion produced immediate improvements in the chlorophyll simulation, but these improvements did not last more than a couple of days before the model simulation diverged from the observations. The divergence of the model simulation from the observations reflected a bias in the biological model to overestimate Chlorophyll a. Correcting such a bias would be crucial to obtaining better and longer-lasting state estimation results.

More recently, the Ensemble Kalman Filter (EnKF) has been used to assimilate SeaWiFS ocean colour Chlorophyll a data into a 3D North Atlantic model (Natvik and Evensen 2003a, b). They showed the EnKF-updated ocean state was consistent with both the observed phytoplankton and nitrate concentrations. However, this study did not show any comparison to unassimilated data that would enable a quantitative assessment of the impact of BGC state estimation (Gregg et al. 2009). Subsequently, Gregg (2008) used the Conditional Relaxation Analysis Method (CRAM) to sequentially assimilate multi-year SeaWiFS data into a 3D BGC model. Not surprisingly, the surface Chlorophyll a estimates improved, since this was the state variable being assimilated. A more independent assessment was made by looking at the impact of data assimilation on the simulated depth-integrated primary production, which showed a much more modest improvement with data assimilation compared to the improvement in Chlorophyll a.

Recently, Hemmings et al. (2008) presented a nitrogen balance scheme with the aim of assimilating ocean colour Chlorophyll a to improve estimates of seawater pCO2. They used 1D simulations at two sites in the North Atlantic (30°N and 50°N) to assess the performance of their scheme. The scheme exploits the covariance between Chlorophyll a and the other biological state variables at a fraction of the computational cost of the multivariate EnKF scheme. To do this, Hemmings et al. (2008) used 1D model simulations with varying model parameters to extract the relationship between simulated Chlorophyll a and the other biological variables. This information is then used to project the assimilated Chlorophyll a data onto all biological state variables. They show the nitrogen balancing approach improves the ocean pCO2 simulation over the case where only phytoplankton and DIC are updated. At 30°N, the RMS error of the surface pCO2 was reduced by more than 50% (from 4 µatm to less than 2 µatm), while at 50°N the surface pCO2 RMS error showed little improvement but the bias in the field was reduced from −2 µatm to nearly zero.

At present, the application of state estimation has been confined to utilizing remotely sensed ocean colour Chlorophyll a. To extend this information from the surface into the ocean interior will require coupled biological–physical models. Further, many of the colour images are corrupted by clouds, and filling these data gaps is another obvious outcome of state estimation. Finally, it is quite possible that the fields we are most interested in are not sufficiently observed to provide the spatial and temporal coverage desired (e.g. pCO2). For these cases, data assimilation provides a tool to exploit the existing observations to generate the fields of interest.
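The essential ingredient of the sequential schemes reviewed above is the analysis step that blends a model background with an observation according to their assumed error variances. A minimal scalar version for a chlorophyll-derived surface phytoplankton observation is sketched below; the numbers are invented, and the harder question of how to spread such an increment to the unobserved variables is taken up in Sect. 12.4.

```python
def analyse_scalar(P_background, P_observed, var_background, var_observation):
    """One-variable analysis: correct the background phytoplankton towards the observation
    with a gain set by the assumed background and observation error variances."""
    gain = var_background / (var_background + var_observation)
    P_analysis = P_background + gain * (P_observed - P_background)
    var_analysis = (1.0 - gain) * var_background
    return P_analysis, var_analysis

# Invented example (mmol N/m3): background 0.30, ocean-colour derived observation 0.45,
# with the two sources assumed equally uncertain, giving an analysis half-way between them.
print(analyse_scalar(0.30, 0.45, var_background=0.01, var_observation=0.01))
```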
12.4 Challenges of Adding BGC Data Assimilation to GODAE Systems

12.4.1 Background

The Global Ocean Data Assimilation Experiment (GODAE) was conceived to provide a 3D depiction of the global ocean circulation at eddy resolution through data assimilation that is consistent with the physical fields and dynamical constraints (Lee et al. 2009). The expansion of GODAE state estimation to BGC fields is a natural evolution that exploits the GODAE effort to provide BGC state estimates consistent with both the physical and biological information (Brasseur et al. 2009). The GODAE effort involves three streams: (1) mesoscale ocean analysis and forecasting, (2) initialization of seasonal-interannual prediction, and (3) state estimation (reanalysis) products (Lee et al. 2009). In discussing the issues of including BGC in the GODAE effort, the focus is on how to modify the state estimation problem.

A range of assimilation methods, from adjoint to sequential, have been applied by various groups to produce ocean state estimates. The adjoint method (e.g. MOVE from Japan and the ECCO group (www.eccogroup.org)) (Lee et al. 2009) is analogous to the parameter estimation approach discussed earlier, but implemented such that the ocean state is obtained by optimizing not only the model parameters but also control variables such as the initial state of the ocean and the surface forcing. Although the adjoint-based estimation products are characterized by consistency with the physical model equations, they require the formulation of an "adjoint model" to do the data assimilation. The development of the adjoint of the coupled physical–biological model is possible (e.g. Matear and Holloway 1995; Schlitzer 2002), but it is not a trivial task. For this reason, it is more realistic to expect the first GODAE systems to include BGC to be based on sequential data assimilation methods, and the focus of the following discussion is on how to get BGC into this type of data assimilation system.

The sequential methods as implemented by various ocean reanalysis groups (e.g. Bluelink from Australia (Oke et al. 2008); FOAM from the United Kingdom (Martin et al. 2007); Mercator from France (Brasseur et al. 2005)) are typically computationally more efficient than the adjoint method. The sequential approach allows the estimated state to deviate from an exact solution of the underlying physical model by applying statistical corrections to the ocean state. Such corrections act as internal sources/sinks of heat, salt, and momentum. Interested readers should see Zaron (this volume) for a more thorough presentation of sequential data assimilation. To apply sequential data assimilation one needs information on how observations of a state variable project onto the other state variables at that time, including all state variables at all model grid points. This information is referred to as the multivariate Background Error Covariances (BECs) and is calculated from the anomalies in the state variable evolution in an unassimilating model. In practice the BECs are estimated from either an ensemble of simulations with different initial values of the state variables (Brasseur et al. 2005) or computed from a multi-year run of just
one simulation (Oke et al. 2008). The benefits of using BECs are that they reflect the length-scales and the anisotropy of the ocean circulation in different regions; they provide information on the covariances between different state variables in a dynamically consistent way (we use the term dynamically consistent to describe an ocean state that can be generated by the model); and they are easily generalized to assimilate different observation types in a single step. Therefore, one might expect the approach could easily be generalized to include BGC information, and there have been a number of attempts to do this (Natvik and Evensen 2003a, b). We will now explore these issues using our simple BGC model as a pedagogical tool to evaluate whether BECs provide a suitable avenue for BGC state estimation.
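To make the idea of multivariate BECs concrete, the sketch below estimates the covariances between (N, P, Z, D) from an ensemble of mixed-layer states and uses them to update all four variables from a single phytoplankton observation. The stand-in ensemble of random numbers and the observation error variance are assumptions of the example; in practice the ensemble members would be perturbed 0D model runs, as discussed in Sect. 12.4.3.

```python
import numpy as np

def bec_update(ensemble, i_obs, y_obs, obs_var):
    """Update all state variables from one observed variable using ensemble-estimated
    background error covariances.  ensemble is an (n_members, n_state) array of model
    states (N, P, Z, D); i_obs is the index of the observed variable (1 for phytoplankton)."""
    mean = ensemble.mean(axis=0)
    anomalies = ensemble - mean
    cov = anomalies.T @ anomalies / (ensemble.shape[0] - 1)   # the BECs
    gain = cov[:, i_obs] / (cov[i_obs, i_obs] + obs_var)      # Kalman-type gain vector
    return mean + gain * (y_obs - mean[i_obs])

# Stand-in ensemble: 100 perturbed (N, P, Z, D) states around an assumed mean state.
rng = np.random.default_rng(1)
ens = np.array([10.0, 0.3, 0.2, 0.05]) + 0.01 * rng.standard_normal((100, 4))
print(bec_update(ens, i_obs=1, y_obs=0.35, obs_var=1e-4))
```

Note that the gain computed this way inherits whatever relationships happen to be present in the ensemble, which is exactly the concern examined in the remainder of this section.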
12.4.2 Potential Issues and Solutions

A key data set for biogeochemical state estimation is ocean colour Chlorophyll a; hence the following discussion will focus on how constraints on phytoplankton concentrations affect the BGC state variables. Using BECs appears to be an obvious way to implement BGC data assimilation; however, the BGC model uncertainty and biases can have a huge impact on the estimated BECs and the resulting state estimation. For 3D BGC modelling, the calculation of the BECs poses a challenge since the biogeochemical processes controlling the flows between state variables are uncertain. This uncertainty is reflected in model parameter uncertainty, model parameters evolving with time, model formulation uncertainty and errors in the model structure (i.e. model simplification and missing processes). Capturing this uncertainty with an ensemble approach (e.g. EnKF) will be computationally demanding for a 3D BGC model because it will be difficult to include a sufficient number of ensemble members to account for all the BGC model uncertainties. The EnOI method (Brassington et al. 2007) provides a more efficient calculation of the BECs for the physical state variables by using statistics generated from a 9-year run of the unassimilating ocean model. Although computationally doable with the 3D BGC model, the BECs calculated from this method are not time-varying, which is probably not appropriate for BGC state variables since their present state, e.g. whether phytoplankton are growing or not, alters the BECs (Hemmings et al. 2008).

Generating BECs within the 3D ocean circulation model is at present not computationally feasible for GODAE systems; however, if we are mainly interested in the BGC in the photic zone, there is the potential to treat the 3D BGC model as a set of 0D mixed-layer representations, one for each ocean surface grid point of the domain. The evolution of the BGC fields in the mixed layer is generally controlled by BGC processes, and applying this domain decomposition is often a reasonable assumption. Examples of such an approach include most of the parameter estimation studies referred to in the previous section. From the ocean dynamics perspective, the key physical processes driving the BGC fields are the vertical supply of nutrient to
the photic zone and the availability of light in the photic zone. Both of these processes were incorporated into the simple BGC model presented earlier. The key physical information needed for the 0D BGC model is the vertical velocity, the vertical mixing rate, and the temporal evolution of the mixed layer depth and temperature. These are already standard state variables generated by GODAE data assimilation systems, and they are therefore available to be used by our 0D BGC model. For BGC data assimilation, additional assessments of these physical products are needed given their importance to the BGC model behaviour, but this is best pursued through improvements in the physical data assimilation system. With the rapidly growing number of Argo temperature and salinity profiles, the observational information needed to assess these fields and improve the physical data assimilation systems provides a clear way forward.

By focusing the calculation of the BECs on just the 0D BGC model of the photic zone, generating the BECs from an ensemble of simulations becomes computationally doable. But what should the BECs look like? Given the non-linearity of the BGC model, we expect a complex relationship between the BGC state variables. The Hemmings et al. (2008) nitrogen balancing approach was based on developing mechanistic links between the different BGC state variables. Although they discuss the complexity and time-dependent nature of the BECs, it is instructive to review this issue with our simple 0D BGC model.
12.4.3 Pilot BGC Data Assimilation Using BECs

The BGC model described in Sect. 12.2 was used to explore the relationships between the BGC state variables. To compute the BECs, we performed 100 random perturbations of the initial conditions of the state variables on January 1. The perturbations to the initial state were Gaussian with a standard deviation of 0.01 mmol N/m3 and were corrected to have a mean of zero to ensure nitrogen conservation. The simulations, which occur during the growth phase of the phytoplankton, reveal several important features of the BGC model and its BECs. The random perturbations of the initial BGC state variables do not cause a long-term change in the BGC state variable trajectories, and all perturbations return to the original trajectory of the unperturbed model with an e-folding time-scale of about 25 days (Fig. 12.3). This reflects a strong tendency for the BGC state variables to follow the same trajectory as the unassimilated model. This bias in the model behaviour could possibly be reduced if we first used data assimilation to estimate the model parameters from the observations prior to tackling the sequential data assimilation. However, we would never expect to account for all the model biases everywhere in the model domain, and furthermore we want the data assimilation to help correct these deficiencies and produce a more realistic model behaviour. The decay of the perturbations in the BGC state variables involves a damped oscillation, which reflects the unbalanced nature of the initial perturbations. For a non-linear model like our 0D BGC model, it is common to generate an unbalanced state when the state variables are updated, which
Fig. 12.3  For the 0D BGC model at the SAZ-Sense P1 site: a the standard deviation of the BGC state variable anomalies generated from an ensemble of 100 simulations obtained by randomly perturbing the initial values on January 1 by ±0.01 mmol N/m3; b evolution of the BGC state variable anomalies for just one of the ensemble members. The anomalies in the BGC state variables are defined as the difference of the state variables from the solution given in Fig. 12.2
generates an undesirable response in the model; this condition should be avoided. The simulated relationship between phytoplankton (P) and the other state variables is also complex (Fig. 12.4), with a clear negative relationship with nitrate (N) but no obvious relationship with zooplankton (Z) or detritus (D, not shown). The negative phytoplankton–nitrate relationship is the growth-dependent response: more phytoplankton equates to greater phytoplankton growth, increased nitrate uptake and reduced nitrate concentrations, and vice versa. However, the phytoplankton–zooplankton relationship does not appear to reveal any pattern. The link between P and Z becomes clearer when we plot the phase diagram of the 100
Fig. 12.4  The relationship on January 7, from the 100-member ensemble of simulations, between the anomaly in phytoplankton concentration and the anomaly in nitrate (a) and zooplankton (b) concentrations. The anomalies in the BGC state variables are defined as the difference of the state variables from the solution given in Fig. 12.2
perturbations (Fig. 12.5). Because the Z response lags the P perturbation, the instantaneous relationship misses the strong connection between the two state variables. The lag between P and Z creates the unbalanced response of the BGC model to random perturbations, which causes limit cycles in our model simulations. Capturing this time-evolving connection between Z and P is not trivial, and furthermore it would depend on the choice of model parameter values and on the BGC state prior to the perturbation. Given all the issues discussed above, the use of BECs for BGC data assimilation is complicated and not an attractive way to tackle BGC state estimation. How, then, should we do it? Focusing on one simulation, in which we increase the P concentration by 0.01 mmol N/m3 and observe how the system evolves, can help us develop an
Fig. 12.5  The phase relationship between the phytoplankton and zooplankton anomaly concentrations from the ensemble of simulations for 50 days after initialization. Each asterisk is a day and each line is an ensemble member. The anomalies in the BGC state variables are defined as the difference of the state variables from the solution given in Fig. 12.2
approach to exploiting the P observations more effectively. This simulation is analogous to the situation where the BGC model under-estimates P and we want the model to behave more like the observed value. Under such a state perturbation, the perturbed phytoplankton concentration returns to its pre-nudge value within a few days, and little is gained from the assimilation except that we have generated an unbalanced perturbation that produces an oscillation in the BGC state variables which persists for many more days (Fig. 12.6a). The unbalanced nature of the state update is most apparent in the fields not directly assimilated; for example, Z shows much greater variability than P. Without continual data assimilation we would not maintain a realistic P, and we would continue to produce spurious behaviour in the other BGC state variables. However, for this site, where nitrate is always in excess (Fig. 12.2), the more direct way to control P is through its loss to grazing. By reducing grazing by 10% (Fig. 12.6b), a much more sustained phytoplankton response is achieved, and the response of the other BGC state variables lacks the spurious variability because the change in the BGC state is balanced. Such a modification of the BGC model can be justified on the grounds that the grazing parameter is uncertain. The modification to grazing corrects the model's under-estimate of P by altering the underlying equations rather than just changing P to reflect the observations. Changing the grazing rate alters the trajectory of the model simulation, leading to a persistent increase in P, Z and D and a persistent decrease in N (Fig. 12.6b). The persistent nature of the changes in the BGC state variables is a desirable feature since it demonstrates that "assimilating" P can have a lasting impact on the model simulation, and it avoids the situation where data assimilation provides only a short-term improvement to the BGC state evolution (e.g. Ishizaka 1990).
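The two experiments contrasted above — a direct 0.01 mmol N/m3 increment to P versus a 10% reduction of the grazing rate g — can be repeated with the 0D sketches introduced earlier (npzd_rhs, PARAMS and forcing_at). The assumed January 1 state below is illustrative; the point of the sketch is simply the comparison of the two kinds of perturbation.

```python
import numpy as np
from scipy.integrate import solve_ivp

y_jan1 = np.array([10.0, 0.3, 0.2, 0.05])        # assumed January 1 state (N, P, Z, D)

def run(y0, params, days=40.0):
    """Integrate the 0D model for a few weeks and return a dense solution."""
    return solve_ivp(lambda t, y: npzd_rhs(y, forcing_at(t), params),
                     (0.0, days), y0, max_step=0.25, dense_output=True)

control = run(y_jan1, PARAMS)

# Experiment a: add 0.01 mmol N/m3 to phytoplankton (a direct "assimilation increment").
exp_a = run(y_jan1 + np.array([0.0, 0.01, 0.0, 0.0]), PARAMS)

# Experiment b: leave the state alone but reduce the grazing rate by 10%.
exp_b = run(y_jan1, dict(PARAMS, g=0.9 * PARAMS["g"]))

t = np.linspace(0.0, 40.0, 81)
dP_a = exp_a.sol(t)[1] - control.sol(t)[1]       # P anomaly from the direct increment
dP_b = exp_b.sol(t)[1] - control.sol(t)[1]       # P anomaly from the grazing change
print("P anomaly after 40 days: increment %.4f, grazing change %.4f" % (dP_a[-1], dP_b[-1]))
```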
Fig. 12.6  a Evolution of the 0D BGC model state variable anomalies for a 0.01 mmol N/m3 addition to the phytoplankton concentration. b State variable anomalies when the zooplankton grazing rate on phytoplankton is decreased by 10%. In the two figures, the BGC state variable anomalies are nitrate (dark blue), phytoplankton (cyan), zooplankton (yellow), and detritus (red), all in units of mmol N/m3. The anomalies in the BGC state variables are defined as the difference of the state variables from the solution given in Fig. 12.2
Now, I accept this is a contrived example where the model has an obvious bias, but some bias is probably always to be expected given model parameter and formulation uncertainties. In previous twin experiments where sequential data assimilation was applied to the BGC state variables and judged to be a success, it must be emphasized that model-generated observations were used in the data assimilation and are therefore unbiased with respect to the model used to do the data assimilation (Eknes and Evensen 2002). The impact of model bias is therefore omitted in their assessment of the success of the data assimilation, which would not be the case in a real-world application.

The simple example highlights the benefits of perturbing the model parameters, and Dowd (2007), Mattern et al. (2010) and Jones et al. (2010) all provide more extensive examples of how to perturb the underlying model parameters to reproduce the observed state evolution of the BGC fields. These balanced methods (i.e. only the model parameters are changed) deliver estimates of the parameters concurrently with the model state, without the assumption that the model parameters are constant in time. Using time-evolving parameter values to alter the BGC state variables to fit the observations has ecological meaning, since we expect the biological processes controlling these parameter values to vary spatially and temporally. Ideally, for GODAE BGC data assimilation we would like to carry the ensemble of different model parameters, as in the 1D studies of Dowd (2007), Mattern et al. (2010) and Jones et al. (2010), into the 3D BGC model, but the computational overhead of running multiple 3D BGC models makes this impractical. Hence, one will need to reduce the potential parameters to just one parameter set. Perhaps we can make some ecologically defensible choices in how the model parameters evolve in time and space in a way that is consistent with the model parameter uncertainty and the uncertainty in the observations.
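A very reduced illustration of such balanced, time-evolving parameter estimation is sketched below: the grazing rate g carries an ensemble of candidate values that random-walk in time, and at each synthetic phytoplankton observation the ensemble is resampled according to its fit. This is a toy sequential importance-resampling scheme built on the earlier npzd_rhs, PARAMS and forcing_at sketches; it is not the algorithm of Dowd (2007), Mattern et al. (2010) or Jones et al. (2010), and all of its settings (ensemble size, observation error, observation values) are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)
n_particles, obs_err = 50, 0.05                         # assumed ensemble size and P obs error
g_particles = rng.uniform(0.5, 3.0, n_particles)        # candidate grazing rates [day^-1]
states = np.tile([10.0, 0.3, 0.2, 0.05], (n_particles, 1))

def step(y0, g, t0, t1):
    """Advance one ensemble member from t0 to t1 with its own grazing rate g."""
    p = dict(PARAMS, g=g)
    return solve_ivp(lambda t, y: npzd_rhs(y, forcing_at(t), p),
                     (t0, t1), y0, max_step=0.25).y[:, -1]

for k, (t0, t1) in enumerate(zip(np.arange(0.0, 50.0, 5.0), np.arange(5.0, 55.0, 5.0))):
    P_obs = 0.3 + 0.02 * k                              # synthetic P observations (assumed)
    for i in range(n_particles):                        # propagate each member
        states[i] = step(states[i], g_particles[i], t0, t1)
    w = np.exp(-0.5 * ((states[:, 1] - P_obs) / obs_err) ** 2) + 1e-12
    w /= w.sum()
    keep = rng.choice(n_particles, n_particles, p=w)    # resample by fit to the P observation
    states, g_particles = states[keep].copy(), g_particles[keep].copy()
    g_particles = np.clip(g_particles + 0.05 * rng.standard_normal(n_particles), 0.1, 5.0)
    print("day %4.0f: mean g = %.2f day^-1" % (t1, g_particles.mean()))
```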
12.4.4 Proposed Sequential Data Assimilation for BGC State Estimation

In the previous section, I discussed several issues in applying state estimation to BGC fields. I would now like to propose an approach to incorporating BGC into the state estimation of a GODAE system. The following is the proposed sequence of steps for sequential BGC state estimation, which is also outlined in Fig. 12.7.

1. Convert the observed remotely sensed ocean colour Chlorophyll a concentration into an estimate of surface phytoplankton concentration to provide the observational constraint for the BGC data assimilation. Note that an alternative would be to include Chlorophyll a as a state variable in the BGC model, together with a formulation for the N:Chlorophyll a ratio of phytoplankton (e.g. Hemmings et al. 2008).

2. Using the seasonal climatology of the derived phytoplankton observations, apply parameter estimation to the 0D BGC representation of the 3D BGC model at
Fig. 12.7  Proposed implementation of the BGC data assimilation, involving simulations with both the 3D and 0D BGC models and their interaction. In the figure, time increases from left to right, the top of the figure denotes the integration of the 3D model, the bottom denotes the integration of the 0D model, and the two black arrows denote the exchange of information between the two models. The observations are times when data are available to be assimilated, which in this example is only phytoplankton concentrations; the black text denotes information from the 3D model and the blue text denotes information from the 0D model. The numbers reflect the steps outlined in the text, which involve: (4) forward simulation of the 3D BGC model (cyan arrow) from an initial state BGC0 to the next observation (time 1), where BGC1 is the 3D BGC state at time 1; (5) at every ocean grid point where the 0D BGC optimization will be applied, computation of P*, the difference between the observed phytoplankton concentration (PD) and the value simulated by the 3D model (P); (6) the inversion step, which uses the 0D BGC model to obtain new BGC model parameters (red double arrow) from the target value for P. The target value for P is determined by adding P* to the P value at time 1 obtained from the 0D simulation initialized with the 3D BGC values (BGC0) and the original model parameters from the 3D model. The approach attributes P* only to modification of the BGC model parameters. The inversion provides a revised estimate of the model parameters (m) along with an anomaly correction to the BGC state at time 1 (BGC′1). The anomaly correction is computed from 0D BGC simulations as the difference between the 0D BGC state at time 1 obtained with the original model parameters used in the 3D BGC simulation and the 0D BGC state at time 1 obtained with the optimized model parameters (m); (7) update of the BGC state variables (BGC1 + BGC′1) and the model parameters (m) of the 3D BGC model, and integration of the 3D BGC model to the next observation time (time 2); steps 5–7 are then repeated
a number of locations in the model domain to acquire a first guess at the range of acceptable BGC model parameter values. The estimated model parameters can be used as the initial values for the 3D BGC model. More important than the estimated model parameters, this data assimilation application will identify the subset of model parameters crucial to the evolution of the BGC fields. As shown by Kidston et al. (2010), this is expected to be a small subset of the model parameters. By identifying the crucial model parameters, one would then only modify these parameters in the following application of sequential data assimilation.

3. Spin up the unassimilating 3D BGC model using the initial BGC model parameter set determined in step 2. The spin-up period will take several simulated years to bring the mixed layer to a seasonally stable state. This BGC state will be used as
the initial state for applying the BGC data assimilation. In running the unassimilating 3D BGC model, the deeper nitrate values could be relaxed to the climatological observations to tailor the data assimilation application to BGC behaviour in the photic zone and to ensure realistic sub-surface nitrate concentrations.

4. From the initial state, run the 3D BGC model forward one day and, at each ocean surface grid point, compute the difference between the simulated phytoplankton concentration and the observed value.

5. At each ocean surface model grid point, run an ensemble of 0D BGC simulations. The 0D BGC model uses the ocean physical information and the initial BGC values from step 4. From the ensemble of simulations, the mean and uncertainty of the subset of model parameters identified in step 2 are determined such that they produce a correction to the phytoplankton concentration consistent with the difference estimated in step 4. The difference between the observed and simulated phytoplankton concentrations from the 3D BGC model is assumed to be due only to BGC processes in the mixed layer. Note that the 0D BGC model does retain the 3D ocean circulation effects, since they are included in the difference between the 3D-BGC-simulated P and the observed value. Provided the modifications to the BGC state variables are small, the ocean circulation effects will be accurately represented in the 0D BGC model. The estimated mean and uncertainty of the BGC model parameters should be retained for later analysis.

6. From the spatially varying mean model parameter values determined in step 5, run the 0D BGC model at all surface ocean grid points over the same day to estimate the mixed-layer corrections to the BGC state variables.

7. Add the BGC state variable corrections to the mixed layer values at the end of the one-day run of the 3D BGC model, and change the 3D BGC model parameters to the values determined in step 5.

8. Repeat steps 4–7, assimilating the next day of observed surface phytoplankton concentrations (a schematic outline of this cycle is sketched at the end of this section).

Within this system there will also be data assimilation of the physical system, which will alter the physical ocean state. The altered physical ocean state will be incorporated into the forward running of the 3D BGC model and into the 0D BGC simulations at the surface ocean grid points. It should be noted that the data assimilation updating of the physical state variables may produce an unbalanced physical state, which may cause problems in the BGC simulation. This should be investigated by running the model system without applying BGC data assimilation, to explore the impact of updating the physical state on the BGC state variable evolution. Although we suggest above using the mean BGC model parameters determined from the 0D BGC simulations, a more sophisticated approach could be envisaged in which the uncertainty in the observed phytoplankton concentration, and the temporal and spatial variability in the model parameters and their uncertainty, are incorporated into the revised estimate of the model parameters used in the next iteration of the 3D BGC model. For example, the study by Jones et al. (2010) shows how to limit the temporal variability in the model parameters. Finally, the temporal and spatial evolution of the optimized BGC model parameters determined in step 5 also provides independent
information to assess the ecological realism of the BGC model and the ability of the data assimilation to extract additional information from the observations beyond constraining the surface phytoplankton. The analysis of the optimized parameters should provide useful insight into the BGC model formulation. For example, we expect the model parameters to display spatial variability related to ecological regimes (Friedrichs et al. 2007; Follows et al. 2007), and the updated model parameters could be assessed against the expected ecological regimes in the ocean.
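To summarize the flow of information, the daily cycle of steps 4–7 can be written as a short control loop. The sketch below is schematic only: the "3D model", the observation source and the 0D inversion are toy stand-in functions with invented names and behaviour, intended to show where each step sits rather than to implement the proposed system.

```python
import numpy as np

N_POINTS = 4                                            # toy set of surface grid points

def run_3d_model_one_day(bgc_state, params):
    """Stand-in for the one-day forward integration of the 3D BGC model (step 4)."""
    return bgc_state                                    # a no-op in this schematic

def observed_surface_P(day):
    """Stand-in for the ocean-colour derived surface phytoplankton field (step 1)."""
    return (0.30 + 0.01 * day) * np.ones(N_POINTS)

def invert_0d(bgc_column, params, target_P):
    """Stand-in for the 0D ensemble inversion (steps 5-6): nudge the free grazing
    parameter towards the observed P and return a partial, balanced state correction."""
    misfit = target_P - bgc_column[1]
    new_params = dict(params, g=params["g"] * (1.0 - 0.5 * misfit / max(bgc_column[1], 1e-6)))
    correction = np.zeros_like(bgc_column)
    correction[1] = 0.5 * misfit                        # illustrative partial correction to P
    return new_params, correction

params = [{"g": 1.575} for _ in range(N_POINTS)]        # spatially varying parameter subset
bgc = np.tile([10.0, 0.3, 0.2, 0.05], (N_POINTS, 1))    # (N, P, Z, D) at each surface point

for day in range(1, 4):                                 # daily assimilation cycle (steps 4-7)
    bgc = run_3d_model_one_day(bgc, params)             # step 4: one-day forecast
    P_obs = observed_surface_P(day)
    for i in range(N_POINTS):                           # steps 5-6: local 0D inversions
        params[i], dbgc = invert_0d(bgc[i], params[i], P_obs[i])
        bgc[i] += dbgc                                  # step 7: update state and parameters
    print("day %d: mean surface P = %.3f, mean g = %.3f"
          % (day, bgc[:, 1].mean(), np.mean([p["g"] for p in params])))
```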
12.5 Conclusion

The field of BGC data assimilation is relatively new, but there are now many examples where the approach has been applied to both parameter estimation and state estimation problems. Data assimilation with BGC models provides a framework to extract information from BGC observations and refine prognostic models of carbon and nutrient cycling in the ocean. The existing GODAE data assimilation systems are an obvious avenue for expanding data assimilation to include BGC. Large BGC model uncertainties, strong non-linearity in the BGC models and the high computational demands of the existing GODAE sequential data assimilation systems motivated us to propose a hybrid BGC data assimilation approach. The proposed approach utilizes the vertical information of the physical model and an ensemble of simulations of the 0D BGC representation of the 3D BGC model at each surface ocean grid point. From the 0D BGC ensemble of simulations one obtains an updated estimate of the BGC model parameters and a revised BGC ocean state to use in the subsequent simulation of the 3D BGC model. As conceived, the approach is computationally feasible and provides a way to estimate a BGC ocean state that is biogeochemically balanced, without resorting to BECs that are complex and difficult to determine. The application will generate spatially and temporally varying BGC model parameters, which will need to be evaluated ecologically. Future effort with the BLUELINK data assimilation system (Oke et al. 2008) will pursue this approach to deliver 3D ocean state estimates of both the physical and BGC fields. Our presentation has focused on modifying the GODAE system to include BGC, but there may also be value in constraining the physical data assimilation system with the remotely sensed ocean colour Chlorophyll a. As shown in Fig. 12.1, this field contains information about the eddy circulation in the surface ocean. Extracting such information may prove valuable and should be explored.
References

Brasseur P, Bahurel P, Bertino L, Birol F, Brankart JM, Ferry N, Losa S, Remy E, Schroeter J, Skachko S, Testut CE, Tranchant B, Leeuwen PJV, Verron J (2005) Data assimilation for marine monitoring and prediction: the MERCATOR operational assimilation systems and the MERSEA developments. Q J R Meteorol Soc 131(613):3561–3582
Brasseur P, Gruber N, Barciela R, Brander K, Doron M, Moussaoui AE, Hobday AJ, Huret M, Kremeur A-S, Lehodey P, Matear R, Moulin C, Murtugudde R, Senina I, Svendsen E (2009) Integrating biogeochemistry and ecology into ocean data assimilation systems. Oceanography 22(3):206–215
Brassington GB, Pugh T, Spillman C, Schulz E, Beggs H, Schiller A, Oke PR (2007) BLUElink development of operational oceanography and servicing in Australia. J Res Pract Inf Tech 39(2):151–164
Brown CJ, Fulton EA, Hobday AJ, Matear RJ, Possingham HP, Bulman C, Christensen V, Forrest RE, Gehrke PC, Gribble NA, Griffiths SP, Lozano-Montes H, Martin JM, Metcalf S, Okey TA, Watson R, Richardson AJ (2010) Effects of climate-driven primary production change on marine food webs: implications for fisheries and conservation. Glob Change Biol 16:1194–1212. doi:10.1111/j.1365-2486.2009.02046.x
Dowd M (2007) Bayesian statistical data assimilation for ecosystem models using Markov Chain Monte Carlo. J Mar Syst 68(3–4):439–456
Eknes M, Evensen G (2002) An Ensemble Kalman filter with a 1-D marine ecosystem model. J Mar Syst 36(1–2):75–100
Follows MJ, Dutkiewicz S, Grant S, Chisholm SW (2007) Emergent biogeography of microbial communities in a model ocean. Science 315:1843–1846
Franks PJS (1997) Models of harmful algal blooms. Limnol Oceanogr 42(5):1273–1282
Franks PJS (2009) Planktonic ecosystem models: perplexing parameterizations and a failure to fail. J Plankton Res 31(11):1299–1306
Friedrichs MAM, Hood RR, Wiggert JD (2006) Ecosystem model complexity versus physical forcing: quantification of their relative impact with assimilated Arabian Sea data. Deep-Sea Res Part II-Topical Stud Oceanogr 53(5–7):576–600
Friedrichs MAM, Dusenberry JA, Anderson LA, Armstrong RA, Chai F, Christian JR, Doney SC, Dunne J, Fujii M, Hood R, McGillicuddy DJ, Moore JK, Schartau M, Spitz YH, Wiggert JD (2007) Assessment of skill and portability in regional marine biogeochemical models: role of multiple planktonic groups. J Geophys Res-Oceans 112(C8):C08001
Gabric AJ, Whetton PH, Boers R, Ayers GP (1998) The impact of simulated climate change on the air-sea flux of dimethylsulphide in the subantarctic Southern Ocean. Tellus Ser B-Chem Phys Meteorol 50(4):388–399
Gregg WW (2008) Assimilation of SeaWiFS ocean chlorophyll data into a three-dimensional global ocean model. J Mar Syst 69(3–4):205–225
Gregg WW, Friedrichs MAM, Robinson AR, Rose KA, Schlitzer R, Thompson KR, Doney SC (2009) Skill assessment in ocean biological data assimilation. J Mar Syst 76(1–2):16–33
Hemmings J, Barciela R, Bell M (2008) Ocean color data assimilation with material conservation for improving model estimates of air-sea CO2 flux. J Mar Res 66:87–126
Ishizaka J (1990) Coupling of coastal zone color scanner data to a physical-biological model of the southeastern U.S. continental shelf ecosystem 3. Nutrient and phytoplankton fluxes and CZCS data assimilation. J Geophys Res 95:20201–20212
Jones E, Parslow J, Murray L (2010) A Bayesian approach to state and parameter estimation in a phytoplankton-zooplankton model. Aust Meteorol Ocean 59:7–15
Kidston M, Matear RJ, Baird M (2010) Exploring the ecosystem model parameterization using inverse studies. Deep-Sea Res Part II
Lee T, Awaji T, Balmaseda MA, Greiner E, Stammer D (2009) Ocean state estimation for climate research. Oceanography 22(3):160–167
Martin AJ, Hines A, Bell MJ (2007) Data assimilation in the FOAM operational short-range ocean forecasting system: a description of the scheme and its impact. Q J R Meteorol Soc 133(625):981–995
Matear RJ (1995) Parameter optimization and analysis of ecosystem models using simulated annealing: a case study at Station P. J Mar Res 53:571–607
Matear RJ, Holloway G (1995) Modeling the inorganic phosphorus cycle of the North Pacific using an adjoint data assimilation model to assess the role of dissolved organic phosphorus. Glob Biogeochem Cycles 9:101–119
Mattern JP, Dowd M, Fennel K (2010) Sequential data assimilation applied to a physical-biological model for the Bermuda Atlantic time series station. J Mar Syst 79(1–2):144–156
Moore TM, Matear RJ, Marra J, Clementson L (2007) Phytoplankton variability off the Western Australian Coast: mesoscale eddies and their role in cross-shelf exchange. Deep Sea Res II 54:943–960
Natvik L, Evensen G (2003a) Assimilation of ocean colour data into a biochemical model of the North Atlantic—Part I. Data assimilation experiments. J Mar Syst 40:127–153
Natvik L, Evensen G (2003b) Assimilation of ocean colour data into a biochemical model of the North Atlantic—Part II. Statistical analysis. J Mar Syst 40:155–169
Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink Ocean Data Assimilation System (BODAS). Ocean Model 21(1–2):46–70
Oschlies A, Schartau M (2005) Basin-scale performance of a locally optimized marine ecosystem model. J Mar Res 63(2):335–358
Schartau M, Oschlies A (2003a) Simultaneous data-based optimization of a 1D ecosystem model at three locations in the North Atlantic: Part I—method and parameter estimates. J Mar Res 61(6):765–793
Schartau M, Oschlies A (2003b) Simultaneous data-based optimization of a 1D ecosystem model at three locations in the North Atlantic: Part II—standing stocks and nitrogen fluxes. J Mar Res 61(6):795–821
Schlitzer R (2002) Carbon export fluxes in the Southern Ocean: results from inverse modeling and comparison with satellite based estimates. Deep Sea Res II 49:1623–1644 (Special Volume on the Southern Ocean)
Vichi M, Pinardi N, Masina S (2007) A generalized model of pelagic biogeochemistry for the global ocean ecosystem. Part I: theory. J Mar Syst 64(1–4):89–109
Part V
Data Assimilation
Chapter 13
Introduction to Ocean Data Assimilation Edward D. Zaron
Conventional ocean modeling consists of solving the model equations as accurately as possible, and then comparing the results with observations. While encouraging levels of quantitative agreement have been obtained, as a rule there is significant quantitative disagreement owing to many sources of error: model formulation, model inputs, computation and the data themselves. Computational errors aside, the errors made both in formulating the model and in specifying its inputs usually exceed the errors in the data. Thus it is unsatisfactory to have a model solution which is uninfluenced by the data. Bennett (Inverse Methods in Physical Oceanography, 1st edn. Cambridge University Press, New York, p. 112, 1992)
Abstract  Data assimilation is the process of hindcasting, now-casting, and forecasting using information from both observations and ocean dynamics. Modern ocean forecasting systems rely on data assimilation to estimate initial and boundary data, to interpolate and smooth sparse or noisy observations, and to evaluate observing systems and dynamical models. Every data assimilation system implements an optimality criterion which defines how to best combine dynamics and observations, given an hypothesized error model for both. The realization of practical ocean data assimilation systems is challenging due to both the technical issues of implementation, and the scientific issues of determining the appropriate set of hypothesized priors. This chapter reviews methodologies and highlights themes common to all approaches.
E. D. Zaron () Department of Civil and Environmental Engineering, Portland State University, P.O. Box 751, Portland, OR 97207, USA e-mail:
[email protected] URL: http://web.cecs.pdx.edu/~zaron/
A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_13, © Springer Science+Business Media B.V. 2011
13.1 Introduction

There are many technologies for observing the ocean. Examples include instruments for taking measurements at fixed points, such as acoustic Doppler velocimeters; horizontal and vertical profilers, such as towed conductivity-temperature-pressure sensors (CTDs); and spatially extensive, nearly instantaneous or synoptic measurements, such as satellite imagery or radiometry. Every measurement system is defined by the physical variables it measures; its spatial and temporal resolution and averaging characteristics, which determine how high-frequency information is either smoothed or aliased to lower frequencies; and the noise and bias properties of the instrumentation. Given the large size of the ocean, and the great expense of measurement and observation systems, no practicable observation system completely determines the state of the oceans. Hence, models are necessary to complement the basic observations. However, the ocean itself is a turbulent fluid, and small changes in initial conditions can have a significant impact on the subsequent evolution of the fluid. Even if it were possible to completely solve the partial differential equations of fluid motion, the prediction of the oceanic state would be limited by the accuracy of initial conditions and boundary data (e.g., the air-sea flux of momentum). In practice, numerical ocean models truncate the degrees of freedom of the continuum equations, and the parameterization of the neglected motion on the resolved scales is a significant source of error in our ability to simulate the fluid flow accurately. It is these considerations, the relative paucity of observational data and the limitations of models, which provide the impetus for data assimilation.

Ocean models are capable of accurately simulating dynamics at resolved scales, with exact or nearly exact conservation of properties such as mass, energy, or potential vorticity, depending on the model. The goal of data assimilative modeling is to produce estimates for the oceanic fields of temperature, salinity, pressure, and three-dimensional velocity, which are maximally consistent with observations and numerical model dynamics, allowing for errors in both. Progress in ocean data assimilation has been enabled by advances in computing machinery over the last 30 years, but the theory and techniques of data assimilation have a long history, with mathematical roots in probability and estimation theory, inverse theory, and the classical calculus of variations. The operational roots of data assimilation are closely tied to the weather prediction community, which has long dealt with the problem of how to smooth and interpolate sparse measurements in order to optimize subsequent weather predictions (Daley 1991).

This introduction to the subject of ocean data assimilation is selective. The goal is to touch on major points of theory and implementation, introducing common themes which are developed in the primary literature. After reading this chapter, the reader should be well-prepared to survey the many textbooks and review articles on ocean data assimilation (Bennett 1992, 2002; Wunsch 1996; Talagrand 1997; Kalnay 2003; Evensen 2006). The article begins by reviewing the purposes of data assimilation. Then Bayes' Theorem is applied to derive optimal interpolation and the Kalman filter. The first part of the article closes by outlining the basic components common to all data
assimilation systems. The second part of the article provides a background for the analysis of data assimilation systems, describing technical issues of implementation, and scientific issues of covariance estimation. Notation and nomenclature vary widely in the primary literature, and an effort has been made to use a consistent but minimal notation which is in accord with recent usage. Two appendices are attached which provide, respectively, annotated definitions of significant terms and pointers to web-resources for data assimilation.
13.2╅The Purpose of Data Assimilation Much like the ancient Indian parable of the seven blind men and the elephant (Strong 2007), there are several different perspectives on the purpose of data assimilation. The parable describes how the men perceive the elephant, each drawing a very different conclusion about its shape or function. One man felt the tail, and concluded an elephant is like a rope; another felt the tusk, and noted its spear-like properties; etc. Similarly, the field of ocean data assimilation has developed in a number of directions, each with a different goal or point of emphasis. The literature is diverse, and the disparate nomenclature can sometimes obscure common themes and methodological approaches. The main themes and goals of data assimilation may be briefly summarized as follows: Interpolation, Extrapolation, and Smoothing╇ The purpose of data assimilation is to estimate the state of the ocean using all information available, including dynamics (e.g., the equations of motion) and observations. The end goal of data assimilation is to produce an analysis, an estimate of oceanic fields which are smoothly and consistently gridded from sparse or irregularly distributed data, and in which the dynamical relationships amongst the fields are consistent with prior physical considerations, such as geostrophic balance. Where measurements are sparse, the analysis fields ought to interpolate the measurements, or nearly so, with allowance for the measurement error. Where measurements are absent, they ought to be extrapolated from nearby measurements, consistent with the assumed dynamics. Where measurements are dense, redundant, or particularly inaccurate, the analysis fields ought to be plausibly smooth, containing no more structure than is warranted by the observations and the dynamics. This view of data assimilation forms the basis for most of the work in ocean data assimilation, some representative works being Oke et€al. (2002), Paduan and Shulman (2004), and Moore et€al. (2004). Several groups are currently involved in real-time ocean analysis, incorporating diverse forms of data (i.e., ARGO float profiles, XBT data, sea-surface temperature, etc.) into global and regional ocean models. Global real-time analyses and forecasts are produced by the European Center for Medium Range Forecasts (Balmaseda et€ al. 2007), the Australian Bureau of Meterolology (2009), the U.S. National Center for Environmental Prediction (2009), and others. Retrospective hind-casts, also called reanalyses, are produced by several groups, including the Jet Propulsion Laboratory (2009) and the University of Maryland (2009).
Parameter Calibration. The purpose of data assimilation is to develop the most accurate model of the ocean, by systematically adjusting unknown or uncertain parameters so that model predictions are maximally congruent with calibration data. The emphasis is on adjusting what may be highly uncertain or difficult-to-measure physical parameters, e.g., scalar parameters involved in turbulence sub-models, or fields, e.g., the sea-bed topography. From the perspective of parameter calibration, the end goal of data assimilation is to produce the best possible model for future prognostic or data assimilative studies, which maximizes the information gained, neither over- nor under-fitting the calibration data. There is a significant oceanographic literature in this area, but parameter estimation generally involves the solution of strongly nonlinear inverse problems, which can be more complex than state estimation (Lardner et al. 1993; Heemink et al. 2002; Losch and Wunsch 2003; Mourre et al. 2004).

Hypothesis Testing. The purpose of data assimilation is to systematically test or validate an ocean prediction system, which includes as subcomponents a model of hypothesized ocean dynamics, its error model, and an error model for the validation data. The thorough study of analysis increments, model inhomogeneities, data misfits, and their relations to the hypothesized dynamics and error models is emphasized. The end goal from this perspective is a definitive test of the ocean prediction system, and an analysis of the primary flaws in the dynamical model or observing system. Dee and da Silva (1999), Muccino et al. (2004) and Bennett et al. (2006) are representative examples. Once a prediction system has been validated, by formal hypothesis testing or other means, the data assimilation system can be used to design and predict the performance of future observing systems. For this purpose an observing system simulation experiment (OSSE) may be conducted, using so-called identical twin experiments, to assess the impact of present and future observational assets or data sources (Atlas 1997). A recent application to coupled ocean/atmosphere modeling for the detection of climate change is found in Zhang et al. (2007).

Summary: Operational Ocean Data Assimilation in Practice. Probably the most widely-used approach to ocean data assimilation involves a sequential assimilation of observations, as depicted in Fig. 13.1.
Fig. 13.1 Sequential analysis of observations binned in time. The red lines indicate the ocean state trajectory predicted from initial conditions at the analysis times (red dots). Observations obtained within the analysis window (green) are binned and assimilated only at the analysis times.
Fig. 13.2 Reanalysis or smoothing of observations. Reanalysis or smoothing finds the ocean state trajectory (red) most consistent with the observations (green) and the dynamical model within a time window.
The ocean model is integrated forward in time from initial conditions, providing a first guess or background field at the subsequent analysis time. Data are assimilated to produce an analysis by optimally combining information from the model and observations. The analysis is used as the initial condition for the next prediction cycle, and the process repeats. The process of estimating the ocean state through a series of sequential analysis steps is a type of signal filtering, for which the Kalman Filter is the prototype (Gelb 1974), and most sequential ocean data assimilation methods can be analyzed from this perspective. A sequential analysis procedure assimilates the observations, but the ocean state estimate obtained is discontinuous and not consistent with the model dynamics or boundary conditions at the analysis times. To obtain state estimates which are continuous, it is necessary to use the Kalman Smoother or a related method (Fig. 13.2). This mode of data assimilation is often used for hindcasting or reanalysis, although the term reanalysis is also used to denote the sequential analysis of historical data, particularly in operational weather prediction, where such reanalyses are performed using state-of-the-art techniques or more complete data sets than were originally available. For ocean forecasting systems, the ocean state at the end of the smoother time window is the now-cast which is used as initial conditions for the ocean forecast. Because smoothing algorithms compute an analysis over an entire time window, while filter algorithms compute an analysis at a single time, smoothers are generally more computationally expensive than filters. The development of smoother algorithms which are computationally practicable is a goal of recent efforts in ocean prediction (Powell et al. 2008). In practice, a type of fixed-lag smoother may be used (Fig. 13.3), which assimilates observations over some time window prior to the current now-cast. For example, in 4D-Var assimilation one finds the initial conditions and boundary conditions which are most consistent with the observations during an assimilation interval, and the model integration is carried forward in time to provide predictions over a subsequent forecast interval.
Fig. 13.3 4D-Var. In the 4D-Var algorithm the initial conditions are found (red dots) to optimize the ocean state trajectory (red line) with respect to observations (green) within the assimilation window.

13.3 Mathematical Formulation

Data assimilation involves the optimal utilization of information from different sources. Bayes' Theorem is a concise foundation for expressing data assimilation methods, since it is concerned with the combination of information as expressed in probabilities. Optimization criteria and statistical estimators may be derived by considering the posterior probability of the state to be estimated, conditioned on the values of the observations. A non-rigorous introduction is presented here; details concerning the applicability of probability densities to function spaces are glossed over. Wahba (1990) contains an introduction to the central issues and is a good entry point to the specialized literature.
13.3.1 Bayes' Theorem

Let P_X(x) denote the probability density function (pdf) of a random variable X, so that the probability of X lying in the interval (x, x + dx) is given by P_X(x)dx. To be concrete, suppose that X represents an oceanic state, and Y represents a measurement of the state. Measurements contain error, so assume that Y = X + ε, where ε is the measurement error, a random variable. In principle there is a probability density for the oceanic state, P_X(x), which is a function of the forcings on the ocean, taken to be unknown random variables. Likewise, there is a probability density which describes the measurement errors, P_ε(ε), which is usually expressed in terms of P_Y(y | x), the pdf of the observations conditioned on the oceanic state, x. The joint probability of the state and the measurements, P_{X,Y}(x, y) (the probability of x and y), and the conditional probability are related by the definition,
P_{X,Y}(x, y) = P_X(x | y) P_Y(y).    (13.1)
Bayes' Theorem is derived by combining this relationship with its counterpart, P_{X,Y}(x, y) = P_Y(y | x) P_X(x), and solving for the conditional probability (Ross 2005),
P_X(x | y) = P_Y(y | x) P_X(x) / P_Y(y).    (13.2)
Equation (13.2) is a simple prescription for combining information from both the dynamics and the data. Given estimates of the errors in initial conditions, boundary forcing, or other model inhomogeneities, one can, in principle, find P_X(x), the probability distribution of the oceanic state, in the absence of measurements. Knowledge of the measurement system determines P_Y(y | x), the probability distribution of the observations, conditioned on the oceanic state. With these quantities in hand, it is simply a matter of computation to find the posterior probability distribution of the oceanic state conditioned on the observations, P_X(x | y). The denominator, P_Y(y) = ∫ P_Y(y | x) P_X(x) dx, can be computed; however, since this pdf is independent of x, it merely serves to normalize P_X(x | y). There is a choice regarding whether to use a maximum likelihood, mean, or median estimator, but these all coincide if the assumed statistics are multivariate Gaussian, and the mean is used almost universally. The following factors are generally of greater importance and vary widely among ocean data assimilation systems.

The Definition of Oceanic State Variables. Implicit in the above discussion is the assumption that the oceanic state consists of the fields of momentum, buoyancy, and pressure within a region of the ocean, within some time interval. The number of state variables may be considerably reduced in practice, depending on context, by using diagnostic relations amongst the variables. Dimensionality is important. Consider, for example, the fields in a regional ocean model defined on a spatial grid of N_X = 200 by N_Y = 200 horizontal grid points, and N_Z = 30 vertical grid points, at N_T = 1,000 time points. A sequential assimilation scheme might estimate initial conditions of sea-surface height at N = N_X × N_Y grid points, resulting in a cardinality of N = 4·10^4 for the state variable X. Alternately, if X is taken as the initial conditions for the N = N_X × N_Y × N_Z × 4 values of the horizontal velocity, buoyancy, and pressure fields (u, v, b, p), one has N = 4.8·10^6 unknowns in the state vector. In some versions of so-called "weak-constraint 4-D variational assimilation" (W4D-Var), one seeks an optimal state estimate of the above fields at all N_T time steps, which yields a cardinality of N = 4.8·10^9 for the unknown state.

Complexity of the Error Models. If the error in the initial conditions, boundary conditions, etc. can be adequately approximated by multivariate Gaussian distributions, then the implementation of the Bayesian analysis procedure is greatly simplified. But the specification of Gaussian distributions requires estimates of the means, variances, and cross-covariances of the relevant fields, in both space and time. Prescribing realistic error models can be a challenge.

Complexity of the Model Dynamics. Even if the errors are correctly described by Gaussian distributions, the model dynamics may be sufficiently nonlinear to render the pdf of the model state P_X(x) non-Gaussian. Differing treatments of the nonlinearity in the model dynamics yield both formal and practical differences between various data assimilation algorithms.
13.3.2 Example 1: Estimation of a Scalar

To make ideas concrete, consider first a trivial example, namely, the estimation of a scalar by combining information from a climatology and a single observation. Assume that one wishes to estimate a scalar, say, temperature, denoted x. A climatology has been constructed, from which can be approximated the probability distribution function

P_X(x) = (2πσ_x^2)^{-1/2} exp[−(x − x_b)^2 / (2σ_x^2)],    (13.3)

where the background x_b is the climatological mean. In other words, the climatology (which contains no dynamics, but is prescribed from prior data) is used for the background, with expected deviation σ_x. A thermometer provides an observation of temperature with finite accuracy. Given temperature, x, the probability distribution of the observations is assumed to be a Gaussian also,

P_Y(y | x) = (2πσ_y^2)^{-1/2} exp[−(y − x)^2 / (2σ_y^2)].    (13.4)

In other words, the measurements are assumed to be unbiased, and the standard deviation of the measurement error is σ_y. It is left as an exercise to the reader to use the definition

P_Y(y) = ∫_{−∞}^{∞} P_Y(y | x) P_X(x) dx    (13.5)
to show that P_Y(y) is Gaussian, with mean x_b and variance σ_x^2 + σ_y^2. Application of Bayes' Theorem is straightforward, and one finds that P_X(x | y) is Gaussian. The maximum likelihood, mean, and median estimators all coincide, yielding

x_a = x_b + σ_x^2 (σ_x^2 + σ_y^2)^{-1} (y − x_b).    (13.6)

The variance of this estimate is

σ_a^2 = (σ_x^{-2} + σ_y^{-2})^{-1}.    (13.7)
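As a quick illustration of Eqs. (13.6) and (13.7), the following minimal Python sketch carries out the scalar update. All of the numbers (background, observation, and error standard deviations) are invented for the illustration and do not come from the text.

```python
# Minimal sketch of the scalar analysis, Eqs. (13.6)-(13.7).
# All numerical values are illustrative assumptions.
x_b = 15.0        # background (climatological) temperature, deg C
sigma_x = 1.0     # background standard deviation, deg C
y = 16.2          # observed temperature, deg C
sigma_y = 0.5     # observation-error standard deviation, deg C

gain = sigma_x**2 / (sigma_x**2 + sigma_y**2)          # scalar weight on the residual
x_a = x_b + gain * (y - x_b)                           # analysis, Eq. (13.6)
var_a = 1.0 / (1.0 / sigma_x**2 + 1.0 / sigma_y**2)    # analysis variance, Eq. (13.7)

print(x_a, var_a)  # x_a lies between x_b and y; var_a < min(sigma_x**2, sigma_y**2)
```

With these hypothetical numbers the analysis falls between the background and the observation, weighted toward the more accurate of the two, and the analysis variance is smaller than either input variance.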
In spite of its simplicity, this example shows some of the key features of advanced linear data assimilation methods. First, note that the optimal estimate in Eq. (13.6)
is a linear combination of the background x_b and the residual y − x_b. The term δx_a = x_a − x_b, given by

δx_a = σ_x^2 (σ_x^2 + σ_y^2)^{-1} (y − x_b),    (13.8)

is called the analysis increment. Note the limits σ_x → 0 (perfect background) and σ_y → 0 (perfect data), which yield x_a = x_b and x_a = y, respectively. Furthermore, the estimated variance of the optimum (13.7) is less than the variance of either the background or the data, separately; combining information from the background and the observations has reduced uncertainty.

13.3.2.1 Example 2: Estimation of a Vector (Optimal Interpolation)

It is customary and instructive to generalize the above example to the estimation of a vector, assuming all errors Gaussian. This is Gauss-Markov smoothing, which forms the basis for many estimation algorithms. When the unknown vector represents values on a regular spatial grid, this procedure is known as optimal interpolation (Bretherton et al. 1976). The notation used here follows Ide et al. (1997). Assume one wishes to estimate a vector x ∈ R^N, given a background x_b, and a vector of observations y ∈ R^M. Optimal interpolation (also called objective analysis) is typically performed in order to smoothly interpolate sparse observations onto a regularly-spaced spatial grid, in which case x might represent, say, the value of sea-surface height at the grid points. Assume that each element of the observation vector y = {y_i}_{i=1}^M may be represented as the action of a linear operator on x(t_i),
y_i = h_i x(t_i),    (13.9)
where h_i ∈ R^{1×N}. For example, h_i might extract the value of x at spatial coordinate (φ_i, λ_i) in geographic latitude-longitude coordinates. Next, define the matrix H ∈ R^{M×N} by collecting the measurement operators together so that y = Hx. More generally, the observation operator H maps the model variables x to an equivalent of the observation vector, in terms of the locations and characteristics (e.g., units) measured. To apply Bayes' Theorem, it is necessary to state the probability densities of x − x_b and the measurement error ε = y − Hx. These shall both be assumed to be multivariate Gaussian with zero means; the covariance of x − x_b is denoted B ∈ R^{N×N}, i.e.,
⟨(x − x_b)(x − x_b)^T⟩ = B,    (13.10)
and the covariance of ε is denoted R ∈ R^{M×M}. Applying Eq. (13.2), one finds the pdf of x conditioned on y is proportional to exp(−(1/2) J(x)), where
J(x) = (x − x_b)^T B^{-1} (x − x_b) + (y − Hx)^T R^{-1} (y − Hx),    (13.11)
which is the objective function that forms the basis for so-called variational data assimilation. The objective function, also called the cost function or penalty function, is to be minimized with respect to vector x.
With a bit of linear algebra, one finds the value x = x_a which minimizes J is
x_a = x_b + K(y − Hx_b),    (13.12)
where the analysis increment is δx_a = K(y − Hx_b), and K takes the form
K = BH^T (HBH^T + R)^{-1}.    (13.13)
The full expression for the analysis error, the error covariance of x_a, is denoted P_a ∈ R^{N×N},
P_a = (B^{-1} + H^T R^{-1} H)^{-1}.    (13.14)
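The following minimal sketch evaluates Eqs. (13.12)-(13.14) with numpy for a small synthetic problem; the grid size, the exponential form chosen for B, the point-sampling H, and all numerical values are illustrative assumptions rather than anything prescribed in the text.

```python
import numpy as np

# Minimal sketch of Eqs. (13.12)-(13.14) for a small synthetic problem.
N, M = 50, 5                      # state and observation dimensions (illustrative)
rng = np.random.default_rng(0)

i = np.arange(N)
B = 1.0**2 * np.exp(-np.abs(i[:, None] - i[None, :]) / 10.0)   # stand-in background covariance

H = np.zeros((M, N))              # observation operator: point sampling of the state
H[np.arange(M), rng.choice(N, M, replace=False)] = 1.0
R = 0.2**2 * np.eye(M)            # uncorrelated observation errors

x_b = np.zeros(N)                 # background state
y = rng.normal(0.0, 1.0, M)       # synthetic observations

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)                        # gain, Eq. (13.13)
x_a = x_b + K @ (y - H @ x_b)                                       # analysis, Eq. (13.12)
P_a = np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)  # covariance, Eq. (13.14)

print(np.diag(P_a).max() <= np.diag(B).max())   # analysis variance does not exceed background
```

Note that the gain in Eq. (13.13) requires inverting only an M × M matrix, a point taken up in the remarks below.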
Remarks:

1. Equation (13.12) can be derived as the Best Linear Unbiased Estimator (BLUE), which minimizes the expected error (e_k^T (x_a − x))^2, where e_k ∈ R^N is the basis vector pointing in direction k. Similarly, the estimator also minimizes the expected mean square error Tr((x_a − x)(x_a − x)^T)/N. These facts are the basis for the equivalence of the BLUE, variational-, and Kalman Filter-based state estimates when the model dynamics and measurement operators are linear.
2. When Optimal Interpolation is used in practice, the above formulas are often simplified so that the analysis at each grid point is computed from just the data within some nearby radius of influence (Lorenc 1986; Daley 1991).
3. J(x) written above is also called the penalty function or cost function. The analysis field x_a is its minimizer. Also, (1/2)J is the negative log-likelihood function when the errors are Gaussian.
4. Because H is linear, J is convex and possesses a unique minimum. When H represents a nonlinear operator, there may be multiple minima.
5. Additional constraints may be added to the objective function, say, to suppress certain dynamics. These can complicate the solution procedure considerably and may obscure the failure of B or R to properly account for the covariance structure of the background and measurement errors.
6. The condition number of the objective function refers to the ratio of the largest to smallest eigenvalues of the Hessian matrix of second derivatives of J. The eigenvalue spectrum of the Hessian can be interpreted as the curvature along the principal axes of the iso-surfaces of J.
7. When the assumptions regarding the Gaussian errors are correct, the Hessian matrix ∂^2 J/∂x^2 and the analysis error covariance P_a are related by P_a = 2 (∂^2 J/∂x^2)^{-1}.
8. The above formalism can be applied to continuous fields, rather than vectors in R^N. In that case x is generally a vector function and J is then a penalty functional. The stationarity condition for the minimum, ∇J = 0, must be derived using the calculus of variations; the result is the Euler-Lagrange equation. There are close ties to the theory of smoothing splines, with B^{-1} being a positive-definite-symmetric differential operator (Wahba 1990).
Fig. 13.4 Example 2: Optimal Interpolation. The left panel shows the sea-surface height η(x, y) to be estimated from observations of η along idealized satellite ground tracks (open black circles filled with color indicating measured value), and from three observations of the surface current (filled black dots with arrows). The right panel shows the interpolated field (black contours) overlaid on the true sea-surface height. Parameters in the upper-right-hand corner are defined in the text. One can see that the large-scale flow features, such as the large anti-cyclonic eddy, have been reconstructed in the interpolated fields.
9. The interpretation in terms of continuous fields is essential to properly understanding the conditioning of the objective function as model resolution is increased to the continuum limit (Bennett and Budgell 1987; Bennett 1992). The language of linear algebra is not adequate for analyzing spatial regularity (differentiability) of the analysis increments.

An example of optimal interpolation is shown in Fig. 13.4. The vector x to be estimated is the sea-surface height on a regular 1 km grid within a box of dimensions 100 km by 100 km. Denote the sea-surface height field by η(x, y), with elements of x being η(x_j, y_j) on the grid. In this idealized setup there are 30 observations of sea-surface height, y_i = e_i^T x for i = 1,…,30, along three satellite ground tracks (color-filled dots). It is assumed that the currents are in geostrophic balance with the surface pressure gradient, and there are 3 observations of near-surface current (black dots with arrows). The u-component is given by y_i = −(g/f) ∂η/∂y for i = 31,…,33, and the v-component is given by y_i = +(g/f) ∂η/∂x for i = 34,…,36. The standard deviation of observation errors is 0.8 cm for sea-surface height, and 4 cm/s for currents. Finally, it is assumed that the background field is zero, x_b = 0, and the spatial covariance of η is bell-shaped with L_x = 12.5 km correlation scale,

⟨η(x, y) η(x′, y′)⟩ = σ^2 exp[−((x − x′)^2 + (y − y′)^2) / (2 L_x^2)],

where σ = 1 cm. Figure 13.4 shows the general scale of flow features which can be identified from this rather limited, and idealized, observational array. The spatial density of the measurements has been chosen to be compatible with the 12.5 km correlation scale of the sea-surface height. Figure 13.5 illustrates what happens when the observing array either over- or under-samples the variability of the unknown field. In the case where the unknown field has a correlation scale of L_x = 3 km (Fig. 13.5, left panel), there is not enough information in the measurements to estimate the sea-surface height field. Although data assimilation can potentially improve our knowledge of unknown or uncertain oceanic fields, it cannot add significant information if the observations fail to constrain the dominant scales of variability in the fields. A more ideal situation is shown in the right panel of Fig. 13.5, where the correlation scale of the field is generally larger than the spacing of the observations. The determination of whether a given observing system constrains the variability of the field to be estimated is an important topic in the analysis of data assimilation systems. Results in this area are found in the antenna analysis of Bennett (1992).
Fig. 13.5 Impact of Correlation Scale, L_x. The left panel illustrates the outcome of attempting to reconstruct a field which is severely undersampled. The correlation scale of the unknown field is L_x = 3 km, which is less than the spacing between the observations. The right panel illustrates the opposite situation, where the observations generally well-sample the field, which, in this case, has a correlation scale of 25 km.
A complementary description in terms of information content and degrees of freedom is found in Stewart et al. (2008). An important generalization of the above is considered next, with particular attention paid to the forecast cycle, which leads to an evolution equation for P (the Kalman Filter), as well as a consideration of nonlinearity in both the ocean dynamics and the measurement operators.
13.3.3 Sequential Filtering Algorithms

Consider now the problem of sequential estimation, where one assumes that initial conditions x(t_i) at t_i propagate forward to time t_{i+1} according to x(t_{i+1}) = M(t_{i+1}, t_i)[x(t_i)] + η_i, where η_i is model noise with zero mean and known covariance. One also has a vector of observations y_i collected in the interval [t_i, t_{i+1}]. Assume one intends to cycle the assimilation beginning with a previous analysis at t_i, leading to a background forecast at t_{i+1}, and ending with an analysis at t_{i+1}, as depicted in Fig. 13.1. The key idea with sequential filters is that the analysis covariance from step i becomes the forecast covariance at step i+1; hence, the covariance is evolved together with the state itself. The notation x_i^f denotes the background forecast and x_i^a the analysis, both at t_i. For consistency with notation in the literature, P_i^f is used for the forecast covariance, and P_i^a is used for the analysis covariance at time t_i. Because η_i is unknown, the forecast is computed from the previous analysis with

x_{i+1}^f = M(t_{i+1}, t_i) x_i^a.    (13.15)

Assuming ⟨η_i η_i^T⟩ = Q_i is the model noise covariance, the forecast error covariance evolves according to

P_{i+1}^f = M(t_{i+1}, t_i) P_i^a M(t_{i+1}, t_i)^T + Q_i.    (13.16)

With these two pieces of information, one can find the analysis at t_{i+1} using the previously-derived results from Bayes' Theorem,

x_{i+1}^a = x_{i+1}^f + K_i (y_i − H_i x_{i+1}^f),    (13.17)

where the Kalman gain matrix K_i is

K_i = P_{i+1}^f H_i^T (H_i P_{i+1}^f H_i^T + R_i)^{-1}.    (13.18)

The analysis error covariance,

P_{i+1}^a = ((P_{i+1}^f)^{-1} + H_i^T R_i^{-1} H_i)^{-1},    (13.19)

is customarily written in terms of the Kalman gain and forecast covariance by using the Sherman-Morrison-Woodbury formula (Golub and Van Loan 1989),

P_{i+1}^a = P_{i+1}^f − P_{i+1}^f H_i^T (R_i + H_i P_{i+1}^f H_i^T)^{-1} H_i P_{i+1}^f    (13.20)
         = (I − P_{i+1}^f H_i^T (R_i + H_i P_{i+1}^f H_i^T)^{-1} H_i) P_{i+1}^f    (13.21)
         = (I − K_i H_i) P_{i+1}^f.    (13.22)
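A minimal numerical sketch of one forecast/analysis cycle, Eqs. (13.15)-(13.18) and (13.22), is shown below for a toy linear model; the operator M, the covariances Q and R, the point-sampling H, and all numbers are illustrative assumptions, not a description of any operational system.

```python
import numpy as np

# One Kalman Filter cycle, Eqs. (13.15)-(13.18) and (13.22), for a toy linear model.
N, M_obs = 20, 4
rng = np.random.default_rng(1)

M = 0.95 * np.eye(N) + 0.05 * np.eye(N, k=1)        # linear model operator M(t_{i+1}, t_i)
Q = 0.01 * np.eye(N)                                # model-noise covariance Q_i
H = np.zeros((M_obs, N))
H[np.arange(M_obs), np.arange(0, N, N // M_obs)[:M_obs]] = 1.0   # sample 4 grid points
R = 0.1 * np.eye(M_obs)                             # observation-error covariance R_i

x_a, P_a = np.zeros(N), np.eye(N)                   # previous analysis and its covariance
y = rng.normal(0.0, 1.0, M_obs)                     # observations for this cycle

x_f = M @ x_a                                       # forecast, Eq. (13.15)
P_f = M @ P_a @ M.T + Q                             # forecast covariance, Eq. (13.16)
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)    # Kalman gain, Eq. (13.18)
x_a = x_f + K @ (y - H @ x_f)                       # analysis, Eq. (13.17)
P_a = (np.eye(N) - K @ H) @ P_f                     # analysis covariance, Eq. (13.22)
```

Repeating these last five lines, with the new (x_a, P_a) as the starting point for each cycle, implements the sequential procedure depicted in Fig. 13.1; only an M_obs × M_obs matrix is ever inverted.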
Remarks:

1. Equations (13.19) and (13.22) are equivalent, but note that (13.22) only requires the inversion of an M × M matrix (in the definition of K_i), while Eq. (13.19) appears to require the inversion of an N × N matrix. This reduction of apparent rank is a consequence of the fact that at most M degrees of freedom are actually constrained by the data.
2. Implicit in the above notation is the linearity of the model evolution operator, M(t_{i+1}, t_i). When this operator is linear, the above algorithm constitutes the Kalman Filter. When M(t_{i+1}, t_i) is nonlinear, a linear approximation must be used in the forecast covariance evolution Eq. (13.16), and the above algorithm is called the Extended Kalman Filter.
3. Recall that the analysis fields are a function of the model dynamics, data values, observation operators, model error covariance, and observation error covariance. If the model error covariance (the system noise) is under-estimated, the filter equations can lock on to an overly optimistic estimate of the forecast error covariance. Once this occurs, further data are discounted and do little to improve the analysis. It is essential to monitor the performance of data assimilation algorithms and verify that analysis increments and innovation vectors are within nominal ranges.
4. In the Extended Kalman Filter, the evolution equations for the model or forecast error covariance matrix can be unstable, particularly when the time interval [t_i, t_{i+1}] is long compared to the characteristic time associated with nonlinearity in the dynamic model (Miller et al. 1994). Stability is a generic problem in nonlinear data assimilation, and there are many approaches to achieving it: ensemble methods (Evensen 1997), including particle filters (Ambadan and Tang 2009); suboptimal, but stable, approximations of dynamics (Bennett and Thorburn 1992); and cycling, or reduced time-window, approaches (Ngodock et al. 2009).
13.4 Summary: Components of Data Assimilation Systems

The general approach for developing data assimilation systems is outlined above. There are many, many details specific to particular applications and solution algorithms, which will be introduced in subsequent chapters by Brasseur (Kalman Filters) and Moore (Variational Assimilation). The following list defines the elements common to all approaches.

• There is a definition of the system state which is to be estimated.
• There is a dynamical model which provides the background estimate of the system state.
• There is a definition of the control variables which are also to be estimated, including a statistical model for the system noise.
• There is a definition of the observing system, including a statistical model for the measurement error.
• There is an optimality criterion which incorporates the above components.
• There is a solution algorithm which computes the analysis state and other quantities of interest.
13.5 Analysis of Data Assimilation Systems

When students first approach the data assimilation literature, it is not uncommon for them to be daunted by the apparent diversity of methods and approaches to assimilating data. In approaching this diversity, it may be helpful to make a distinction between the scientific or oceanographic content of the assimilation problem vs. the technical or engineering aspects. The scientific content enters the assimilation through the hypothesized dynamics and the error covariances, which are used to create the optimality criterion (item 5 in the previous list of "Components of Data Assimilation Systems"). The technical aspects are concerned with the practical implementation of the dynamics, solution algorithms, etc. These aspects of the data assimilation literature are not really orthogonal, but it can be helpful for the novice to regard them as such in order to understand the significance of particular implementations, system designs, or optimality criteria. In this section an explicit distinction is made between the technical and scientific aspects of data assimilation, and this is used as the basis for analysis of data assimilation systems. Section 13.5.1 reviews solution algorithms for solving the most widely-used least squares optimality criteria in ocean data assimilation. Section 13.5.2 introduces elements of error covariance modeling and validation, and observing array design, as these are often at the core of the scientific content of data assimilation.
13.5.1 Implementation and Solution Algorithms

As mentioned previously, solution methods are determined by the following considerations:

1. The cardinality of the state space. Typically N is so large that the N × N matrices written in the previous section cannot be explicitly constructed. Instead, one considers the computations in terms of vector-matrix multiplication, and the large matrices are never constructed.
2. The dimension of the observation vector. For linear models and observing systems it can be shown that the M observations constrain only an M-dimensional observable subspace of the N-dimensional state space. Hence, computational efficiency may be optimized by restricting operations to the M-dimensional observation space.
3. The effective rank of the background covariance. In practice the dimension M is too large to carry out the above algorithms as written. Instead, the effective rank or degrees of freedom are truncated in some way, leading to sub-optimal approximations to the optimality criterion.

The following survey of solution algorithms highlights these considerations in the development of practicable data assimilation algorithms.

13.5.1.1 Variational Data Assimilation

Variational data assimilation algorithms are so-called because they are derived from a stated objective function, J(x), and the calculus of variations or ordinary derivatives are used to derive the first-order optimality condition, ∇J(x = x_a) = 0. An iterative solver, such as conjugate-gradient or Newton's method, is used to solve the optimality condition, for example,
B^{-1} (x_a − x_b) − H^T R^{-1} (y − Hx_a) = 0,    (13.23)
obtained from Eq. (13.11). See Navon and Legler (1987) or Zou et al. (1993) for an overview of the methodology and pointers to the specialized literature. Note that the size of B ∈ R^{N×N} makes the computation of B^{-1} in (13.23) impossible except in very special cases. Assuming it is possible to compute the matrix-vector product Bx without explicitly constructing B, one can re-write the optimality condition as
(I + BH^T R^{-1} H) x_a = x_b + BH^T R^{-1} y,    (13.24)
where I is the N × N identity matrix. Equation (13.24) is sometimes referred to as the primal formulation of the variational data assimilation problem, in contrast to the dual formulation, derived below. The so-called 4D-Var algorithm can be cast as a minimization of the above type when the dynamics and measurement operators are linear. In this case, x_b represents the initial conditions of the model, and the transpose of the model evolution operator, M^T, the so-called adjoint model, is implicit in the definition of H^T (Talagrand and Courtier 1987). Note that preconditioners are frequently used to accelerate the convergence of the iterative solver. Also, the iterative solvers are truncated after some small number, P < M, of predetermined steps or when the elements of the innovation vector are comparable to the measurement error (Rabier et al. 2000).
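The sketch below shows one way to solve the primal system, Eq. (13.24), matrix-free in Python: B, H, and H^T are applied as functions and never stored as N × N arrays, and a Krylov solver handles the nonsymmetric operator (I + BH^T R^{-1} H). The smoothing kernel standing in for B, the point-sampling H, the use of GMRES rather than a preconditioned conjugate-gradient, and all sizes are illustrative assumptions, not the chapter's prescription.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Matrix-free sketch of Eq. (13.24): (I + B H^T R^{-1} H) x_a = x_b + B H^T R^{-1} y.
N, M = 1000, 40
rng = np.random.default_rng(2)

obs_idx = rng.choice(N, M, replace=False)
r_inv = 1.0 / 0.2**2                      # R = (0.2)^2 I, so R^{-1} is a scalar

def apply_H(x):                           # H: sample the state at the observation points
    return x[obs_idx]

def apply_HT(d):                          # H^T: scatter observation-space values back
    out = np.zeros(N)
    out[obs_idx] = d
    return out

def apply_B(x):                           # B: simple smoothing as a stand-in covariance
    k = np.exp(-0.5 * (np.arange(-20, 21) / 5.0)**2)
    return np.convolve(x, k / k.sum(), mode="same")

def primal_matvec(x):                     # action of (I + B H^T R^{-1} H)
    return x + apply_B(apply_HT(r_inv * apply_H(x)))

A = LinearOperator((N, N), matvec=primal_matvec)
x_b = np.zeros(N)
y = rng.normal(0.0, 1.0, M)
rhs = x_b + apply_B(apply_HT(r_inv * y))

x_a, info = gmres(A, rhs)                 # iterative, matrix-free solve of Eq. (13.24)
```

An operational system would replace apply_B with its parameterized covariance operator and would precondition and truncate the iterations, as noted above.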
13.5.1.2 Incremental 4D-Var

The incremental formulation of 4D-Var writes the above optimality condition in terms of δx_a = x_a − x_g, where x_g is a first guess field, which may or may not coincide with the background. Equation (13.24) becomes
(I + BH^T R^{-1} H) δx_a = x_b − x_g + BH^T R^{-1} (y − Hx_g).    (13.25)
The motivation for this approach is to make a sub-optimal approximation to the optimality conditions wherein δx_a is computed using a (computationally tractable) low-resolution linear model driven by the residual vector (y − Hx_g) computed from a high-resolution model. A complete description of the algorithm involves an operator to map between the high- and low-resolution versions of the model state vectors. Treatment of the nonlinearity generally involves linearization around x_g, x_b, or their linear combination. In addition to the iterative solution of the linear system, (13.25), an outer level of iteration around a sequence of first-guess states may be necessary in strongly nonlinear problems (Ghil 1989; Courtier et al. 1994). More generally, the incremental formulation suggests the idea that one can use different models to compute x_g and δx_a. For example, if the model is too computationally intensive to embed in the iterative solver, it is possible to use reduced physics or reduced resolution for M inside the H operator on the left-hand side, while still using the complete model to compute Hx_g on the right-hand side.

13.5.1.3 The Dual Formulation

Notice that the left-hand sides of Eqs. (13.23), (13.24), and (13.25) all involve N × N matrices. The linear algebra can be considerably simplified by noting that H^T R^{-1} H is of rank M. Application of the Sherman-Morrison-Woodbury formula leads to the following equivalent form of the optimality condition (13.23),
x_a = x_b + BH^T w,    (13.26)

(HBH^T + R) w = y − Hx_b,    (13.27)
where HBH^T + R is an M × M matrix. Solving for the vector w ∈ R^M may be done by direct matrix inversion if the construction of the M × M matrix on the left-hand side of (13.27) is feasible; otherwise, iterative solvers may be applied (Egbert et al. 1994; Amodei 1995). When (13.27) is multiplied by R^{-1}, the conditioning of (13.24) and (13.27) are formally identical (Courtier et al. 1993; Courtier 1997), and an iterative solver for the latter is known as the Physical Space Analysis System (PSAS) (Cohn et al. 1998). Note that the analysis is a linear combination of the background and the M columns of BH^T, which may be used to diagnose unambiguously the features of the analysis corresponding to particular observations. The matrix HBH^T is the expected error covariance of the forecast, neglecting measurement noise; its analysis can provide a wealth of information concerning the design of the observing system (McIntosh 1987). As with the primal formulation, preconditioning the linear system (13.27) is an essential part of realistic applications. Treatment of nonlinearity is also an important issue which may be handled via an incremental approach (Chua and Bennett 2001). An iterative solver built on this idea is the core functionality of the Inverse Ocean Model (IOM), a model- and platform-independent data assimilation toolkit (Bennett et al. 2008; Muccino et al. 2008). The dual formulation is deeper than it might appear. When x_a and x_b are taken as functions, and the evolution operator M is an integro-differential operator, Bennett (1992) shows how (13.26)-(13.27) may be derived from the Euler-Lagrange equations for the extremum of J.

13.5.1.4 Kalman Filter

Sequential data assimilation algorithms cast as the Kalman Filter equations, (13.15)-(13.22), are generally unworkable for ocean prediction systems, particularly when the N × N analysis covariance must be explicitly constructed or evolved. Fortunately, many approaches have been developed to handle the linear algebra or make sub-optimal approximations to the complete Filter equations. Application to nonlinear systems is also a key issue, and the intersection of sub-optimal approximations and the treatment of nonlinearity is a subtle issue. A very basic and selective overview of the Kalman Filter and its extensions is provided below. For more detail, the reader is referred to the chapter by Brasseur (this volume).
13.5.1.5 Model Reduction

Model reduction is the name for the class of techniques which directly reduce the dimension N of the state vector. Such a reduction can be used to make the Kalman Filter and covariance evolution equations tractable, and it is also useful as a technique to project the model dynamics onto slowly evolving, better observed, or more predictable dynamics. The most rudimentary approach to model reduction involves projecting the dynamics onto a small number of degrees of freedom by spectral truncation or grid coarsening (Todling and Cohn 1994). But because the optimal analysis states depend not only on the model dynamics, but also on the degrees of freedom in the unknown model system noise, other approaches to reducing degrees of freedom involve reduction via empirical orthogonal functions (EOFs) (Cane et al. 1996). Further control of the reduced dimensions can be had through analyzing modeled states and weighting their importance via metrics related to specific phenomena or error metrics (Daescu and Navon 2008).
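As a concrete but deliberately generic sketch of EOF-based reduction, the snippet below builds a rank-P basis from the singular vectors of an ensemble of state anomalies and projects a state into and out of that basis; the random stand-in ensemble and the choice P = 10 are assumptions made only for illustration.

```python
import numpy as np

# Minimal sketch of EOF-based model reduction: retain the leading P singular
# vectors of an ensemble of state anomalies as a reduced basis.
N, E, P = 500, 60, 10
rng = np.random.default_rng(3)

X = rng.normal(size=(N, E))               # stand-in ensemble of model states
x_mean = X.mean(axis=1)
Xa = X - x_mean[:, None]                  # anomalies about the ensemble mean

U, s, _ = np.linalg.svd(Xa, full_matrices=False)
U_p = U[:, :P]                            # leading P EOFs (reduced basis)

x = X[:, 0]                               # some state to compress
coeffs = U_p.T @ (x - x_mean)             # reduced representation: P numbers
x_approx = x_mean + U_p @ coeffs          # reconstruction in the full space
```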
13.5.1.6 Error Subspace Statistical Estimation

Reflecting on the fact that knowledge of the statistical properties of the background and the model forcing errors is generally poor, Lermusiaux and Robinson (1999) suggested Error Subspace Statistical Estimation (ESSE), a technique which uses reduced-rank representations of the forecast and analysis error covariances. Rather than attempting to manipulate N × N covariance matrices, in ESSE the covariances are approximated by rank-P objects, constructed to approximate the P most significant modes of uncertainty. This reduced-rank approach to covariance modeling leads to "adjoint-free" versions of variational data assimilation (Logutov and Lermusiaux 2008). Given a rank-P decomposition of B = UΛU^T, where U ∈ R^{N×P} is orthogonal, and Λ ∈ R^{P×P} is the diagonal matrix of singular values of B, one can explicitly compute HU, from which one may find BH^T = UΛ(HU)^T as needed. The Sherman-Morrison-Woodbury formula then provides a means to solve (13.26) and (13.27) using rank-P matrix inversion.

13.5.1.7 Ensemble Methods

The principle of ensemble methods is to use a set of sample realizations of forecasts to estimate the forecast covariance directly. The Kalman Gain matrix can be estimated from the same ensemble, thus permitting one to compute an ensemble of analyses. From these, the analysis covariance can be estimated, and the process continued (Evensen 2006). The appeal of this approach is that it may, in principle, be applied directly to linear or nonlinear models. Furthermore, even if the statistics of the system noise are not Gaussian, the analysis field approximately satisfies a minimum variance criterion, to within the limits of accuracy of the sample statistics. Two difficulties arise in practice. First, a large number of ensemble members are necessary to accurately estimate off-diagonal elements of the forecast covariance matrix. The sample variance of a Gaussian random variable x converges like √2 σ_x^2/√E, where the true variance is σ_x^2 and the sample size is E. However, the sample covariance of two correlated random variables x and y converges like √(σ_xy^2 + σ_x^2 σ_y^2)/√E, where σ_xy is the covariance. Thus, when the correlation between variables is small the sample covariance is dominated by sampling error. For this reason, the sample covariance must be localized or "tapered" to reduce distant correlations (Szunyogh et al. 2008). This operation increases the effective rank of the covariance, but it must be done with careful consideration of the dynamical correlations one wishes to preserve. The other principal difficulty is that the members of the forecast ensemble are not independent after the Kalman Filter has been running. This can contribute to a loss of variance and filter lock-on. Hence, various strategies for covariance inflation and filter re-initialization have been developed. Anderson et al. (2009) provides a recent overview.
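The following short sketch illustrates the sampling-error and localization points numerically: covariances estimated from a small ensemble drawn from a known short-range covariance show spurious values at large separations, which an elementwise Gaussian taper suppresses. The ensemble size, length scales, and taper form are illustrative assumptions.

```python
import numpy as np

# Sampling error in an ensemble covariance estimate, and its reduction by
# localization (elementwise tapering).  All choices are illustrative.
N, E = 200, 30
rng = np.random.default_rng(4)

i = np.arange(N)
dist = np.abs(i[:, None] - i[None, :])
B_true = np.exp(-0.5 * (dist / 10.0)**2)            # true covariance, length scale 10

L = np.linalg.cholesky(B_true + 1e-10 * np.eye(N))
ensemble = L @ rng.normal(size=(N, E))              # E samples drawn from N(0, B_true)

B_sample = np.cov(ensemble)                         # raw ensemble covariance estimate
taper = np.exp(-0.5 * (dist / 30.0)**2)             # simple Gaussian taper
B_localized = taper * B_sample                      # Schur (elementwise) product

far = dist > 50                                     # separations where B_true is ~0
print(np.abs(B_sample[far]).mean(), np.abs(B_localized[far]).mean())
```

The second number printed is much smaller than the first: the taper removes spurious long-range covariances at the price of modifying (and raising the effective rank of) the estimate, which is the trade-off discussed above.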
13.5.2 Covariance Modeling and Array Analysis

Setting aside the solution algorithms for data assimilation with realistic ocean dynamics and observing systems, which are generally technological issues from the perspective of oceanography, there are important scientific questions related to defining correct error models for the dynamics and observing systems. As discussed here, there are three components of the error model which need to be specified a priori. The background error, denoted B or P^f, is the spatial covariance of errors in the field to be analyzed, including the cross-covariances amongst its components. The system noise, denoted Q, is the covariance of the unknown model forcing errors and sub-gridscale parameterizations, an object which describes a space-time correlation structure. Lastly, there is the observation error covariance, R, which is an attribute of the observation system and measurement instruments, but which is sometimes augmented to account for so-called representation errors.

For optimal interpolation, 4D-Var, or sequential Kalman Filters, it is necessary to estimate the error in the background solution. In principle this can be estimated from a large ensemble of previous analyses. Another approach relies on making two predictions with different lead times, say 12 and 24 h, and regarding the difference as an estimate of the forecast error (Hollingsworth and Lonnberg 1986). If an ensemble filter is used, it may be sufficient to retain an ensemble of forecast error fields, and use these to synthesize the sample covariance as needed. Otherwise, the structure of the background error is usually parameterized in terms of an amplitude (variance) and a set of correlation lengths, aligned with some orthogonal basis. Implementations are described in Bennett (1992); Weaver and Courtier (2001); Purser et al. (2003a, b); Zaron (2006).

System noise may arise from incorrect forcing functions, for example, coarsely gridded wind stress or open-boundary conditions derived from climatology, or it may arise from dynamical approximations or truncation errors in solving the dynamical equations. In the former case, the errors can generally be characterized by consideration of the data source. In the latter case, it can be difficult to quantify the errors, as they are likely to be state- and resolution-dependent.

The observation error covariance, R, should be determinable from the measurement devices, independent of the dynamical model. However, in some cases there is an additional, model-dependent, contribution to R which is called the representation error, or error of representativeness (Oke and Sakov 2008). The representation error is not a measurement error in the conventional sense, but it is an estimate of the variance in the data caused by processes absent in the dynamical model. For example, when sea surface height (SSH) observations are assimilated into quasi-geostrophic models, a representation error is introduced to prevent the analysis from being contaminated by the long surface gravity waves present in the SSH data, which are absent from the quasi-geostrophic dynamics. Because the physics of gravity waves are missing from the model, the simulation of these processes cannot be improved through data assimilation; hence, it is regarded as a contribution to data error. Because representation error is due to deterministic dynamics, it may have spatial structure or covariance which is not easily modeled (Richman et al. 2005).
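A common way to realize the "amplitude plus correlation length" parameterization described above is sketched below for a one-dimensional grid; the grid spacing, variance, and length scale are invented for illustration and are not values recommended by the text.

```python
import numpy as np

# Minimal sketch of a parameterized background-error covariance: an amplitude
# (variance) times an isotropic Gaussian correlation with length scale L.
n = 64                                   # 1-D grid for simplicity
dx = 5.0e3                               # grid spacing, m (illustrative)
x = dx * np.arange(n)
sigma_b = 0.05                           # background error std dev, e.g. 5 cm of SSH
L = 25.0e3                               # correlation length scale, m

r = np.abs(x[:, None] - x[None, :])
B = sigma_b**2 * np.exp(-0.5 * (r / L)**2)   # B_ij = sigma^2 exp(-r^2 / 2 L^2)

assert np.allclose(np.diag(B), sigma_b**2)   # variance on the diagonal
assert np.allclose(B, B.T)                   # symmetric

# Draw a random background perturbation consistent with B (small jitter added
# because Gaussian-correlation matrices are nearly singular).
sample = np.linalg.cholesky(B + 1e-10 * np.eye(n)) @ np.random.default_rng(7).normal(size=n)
```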
13.5.3 Validation of Error Models

Having used the techniques described above to parameterize the errors, it is essential to have a methodology to validate, a posteriori, the hypothesized error models. When the hypothesized dynamics and error models are correct, the minimum value of the objective function J(x_a) is a chi-squared variable with M degrees of freedom (Bennett 1992). This criterion can be used to accept or reject in total the hypothesized dynamics, observations, and their error models. A finer-grained approach can be used to analyze components of the objective function, comparing model and observations, or observation sub-types, separately. For example, let J = J_B + J_R represent the two parts of the objective function from the background,
J_B(x) = (x − x_b)^T B^{-1} (x − x_b),    (13.28)

and observations,

J_R(x) = (y − Hx)^T R^{-1} (y − Hx).    (13.29)
It may be shown that the expected values of these terms are

⟨J_B(x_a)⟩ = Tr(HBH^T D^{-1}),    (13.30)

and

⟨J_R(x_a)⟩ = Tr(R D^{-1}),    (13.31)
where D = HBH^T + R is the matrix appearing on the left-hand side of Eq. (13.27). See Talagrand (1999), Desroziers and Ivanov (2001), and Bennett (2002) for derivations and applications. A related class of techniques for calibrating error models is based on the generalized cross-validation statistic, an estimate of the prediction error of the analysis at the observation sites. Using the notation developed above, the generalized cross-validation statistic is given by
GCV(B, R; y, x_b) = (y − Hx_a)^T (y − Hx_a) / [M (1 − µ/M)^2],    (13.32)
where µ = Tr(RD^{-1}). Optimizing this statistic amounts to selecting the error model which approximately maximizes the accuracy of prediction at each data site, using data at all other sites. It is a useful countermeasure to avoid over-fitting the data, which occurs when one simply minimizes the mean-square innovation vector. Applications to data assimilation may be found in Wahba et al. (1995) and Zaron (2006). The key benefit of these metrics is that they can be computed from a small sample of data assimilative forecast/analysis cycles, and the error models can be re-tuned to yield improved results. Note that direct construction and inversion of M × M matrices is generally not required, as any of the solution algorithms in Sect. 13.5.1 may be combined with a randomized trace estimator (Girard 1989; Hutchinson 1989) to evaluate the matrix trace as needed.
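The snippet below illustrates the consistency checks just described on a synthetic problem: the minimum of the objective function is compared with M, and the background and observation parts are compared with their expected values, Eqs. (13.30) and (13.31). The toy covariances, observation operator, and random data are assumptions made for the illustration.

```python
import numpy as np

# A posteriori checks: J(x_a) ~ chi-squared with M degrees of freedom, and the
# background/observation parts match Eqs. (13.30)-(13.31) in expectation.
N, M = 80, 15
rng = np.random.default_rng(5)

i = np.arange(N)
B = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 8.0)**2)          # toy background covariance
H = np.zeros((M, N)); H[np.arange(M), rng.choice(N, M, replace=False)] = 1.0
R = 0.3**2 * np.eye(M)
D = H @ B @ H.T + R

# Simulate data consistent with the assumed error models.
x_b = np.zeros(N)
L_B = np.linalg.cholesky(B + 1e-10 * np.eye(N))
x_true = x_b + L_B @ rng.normal(size=N)
y = H @ x_true + 0.3 * rng.normal(size=M)

x_a = x_b + B @ H.T @ np.linalg.solve(D, y - H @ x_b)            # analysis, Eq. (13.12)

J_B = (x_a - x_b) @ np.linalg.solve(B + 1e-10 * np.eye(N), x_a - x_b)
J_R = (y - H @ x_a) @ np.linalg.solve(R, y - H @ x_a)

print(J_B + J_R, M)                                   # single realization: roughly comparable
print(np.trace(H @ B @ H.T @ np.linalg.inv(D)),       # expected J_B, Eq. (13.30)
      np.trace(R @ np.linalg.inv(D)))                 # expected J_R, Eq. (13.31)
```

Averaging the first line over many realizations would recover the chi-squared expectation; a single cycle only indicates whether the error models are grossly inconsistent.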
13.5.4 Conditioning and Stability

There is a deep analogy between the trivial univariate "data assimilation" in Sect. 13.3.2 and the multivariate dual formulation of data assimilation in Sect. 13.5.1.3. Assume that observation errors are uncorrelated, with each observation having the same uncertainty. In other words, assume R = σ_y^2 I, where σ_y is the nominal measurement error and I is the M × M identity matrix. With this diagonal structure for the observation error covariance, it is possible to write out the solution of (13.27) in terms of an orthogonal decomposition (the singular value decomposition, Golub and Van Loan 1989) of HBH^T = UΛU^T,
w = (UΛU^T + σ_y^2 I)^{-1} (y − Hx_b)    (13.33)
  = U(Λ + σ_y^2 I)^{-1} U^T (y − Hx_b)    (13.34)
  = Σ_{i=1}^{M} u_i (λ_i + σ_y^2)^{-1} u_i^T (y − Hx_b),    (13.35)

where U = {u_i} is an M × M orthonormal matrix, and Λ is the diagonal matrix of singular values {λ_i}_{i=1}^{M}. Applying the observation operator and projecting onto the i-th orthogonal mode, one finds

u_i^T H x_a = u_i^T H x_b + λ_i (λ_i + σ_y^2)^{-1} u_i^T (y − Hx_b).    (13.36)
The key analogy with the univariate case is evident if one identifies σ_x^2 in Eq. (13.6) with λ_i in (13.36). In the limit λ_i ≪ σ_y^2 (perfect model), the analysis makes no correction to the background associated with mode i. In the other limit, σ_y^2 ≪ λ_i (perfect data), the analysis is identically equal to the observation associated with mode i. Bennett (1992) applies this analysis to evaluate the design of observing arrays. Each mode u_i corresponds to a so-called "antenna array mode" associated with the observing system, dynamics, and hypothesized error models. The modes may be classified according to whether they are approximately interpolated (σ_y^2 ≪ λ_i) or smoothed (λ_i ≪ σ_y^2). The effective number of degrees of freedom determined by the observing system is given by the number of modes for which σ_y^2 ≪ λ_i. Information about redundant observation sites can be gained from the structure of the u_i modes.
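The array-mode classification can be computed directly once HBH^T is available; because HBH^T is symmetric, its singular value decomposition coincides with an eigendecomposition, which is what the sketch below uses. The covariance, observation locations, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the array-mode analysis: eigendecompose HBH^T and count the
# modes whose signal variance lambda_i exceeds the noise variance sigma_y^2.
N, M = 200, 25
rng = np.random.default_rng(6)

i = np.arange(N)
B = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 15.0)**2)    # stand-in background covariance
H = np.zeros((M, N)); H[np.arange(M), rng.choice(N, M, replace=False)] = 1.0
sigma_y = 0.5

lam, U = np.linalg.eigh(H @ B @ H.T)        # eigenvalues lambda_i, array modes u_i
lam = lam[::-1]; U = U[:, ::-1]             # sort from largest to smallest

weights = lam / (lam + sigma_y**2)          # per-mode weight, as in Eq. (13.36)
dof = int(np.sum(lam > sigma_y**2))         # effective degrees of freedom of the array
print(dof, weights[:5])
```

Modes with weights near one are interpolated by the analysis, while modes with weights near zero are smoothed away; inspecting the structure of the corresponding columns of U points to redundant or poorly placed observation sites.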
13.6 Summary and Conclusions

Ocean data assimilation comprises a set of techniques for estimating the oceanic state using as much information as possible, by combining model predictions with observed data in an optimal manner. Optimality is defined by maximum likelihood or minimum variance criteria. Application of these optimality criteria to forecasting the real ocean is difficult due to the large dimensionality or number of degrees of freedom in the oceanic state to be estimated. Consequently, practical algorithms have been developed through approaches which either truncate the representation of the oceanic state, reduce the degrees of freedom to be estimated, or utilize sub-optimal criteria for the state estimate. As computing power increases, there are fewer technological obstacles to operational data assimilative ocean forecasting. Scientific attention then focuses on developing and validating error models for the dynamics, initial conditions, and boundary forcing (Chapnik et al. 2006). Observational impact studies are another new avenue of study, which may be useful for improving data quality control, observing system design, and calibrating covariance models (Baker and Daley 2000; Gelaro and Zhu 2009). Ocean data assimilation has progressed rapidly in recent years, with efforts moving towards and achieving operational status. Recent real-time global operational ocean data assimilation systems are summarized in Cummings et al. (2009). Additional systems, including large-scale regional data assimilative models, are described in Dombrowsky et al. (2009). There are many smaller-scale regional efforts as well, for example, in the Gulf of Mexico and Intra-American Sea (Powell et al. 2009), on the U.S. East Coast (He and Wilkin 2006; Hofmann et al. 2008; Hoffman et al. 2008), on the U.S. West Coast (Kurapov et al. 2005; Li et al. 2008; Chao et al. 2009; Moore et al. 2009; Broquet et al. 2009), and in many other locations. De Mey et al. (2007) summarizes the status and attributes of other efforts. Further developments in ocean data assimilation are leading to new syntheses of observations and models, producing new insights into oceanic processes and improved predictive capability.

Acknowledgements Partial support for this work was provided by the U.S. National Science Foundation, award OCE-0623540, and the Naval Research Laboratory, award N00173-08-C015. Additional support from the National Oceanic and Atmospheric Administration to attend the GODAE/BlueLink Summer School is gratefully acknowledged.
Appendix

Glossary

Analysis. The analysis is the end-point or result of a data assimilation. It is the best estimate of the true state of the ocean at a given time, or within a given time interval, and, ideally, it is accompanied by an estimate of its errors. If the analysis is retrospective, i.e., it is the best estimate of the oceanic state at some past time conditioned upon measurements both before and after the analysis time, it is called a "re-analysis." Typically the analysis is presented as a set of uniformly gridded oceanic state variables (sea-surface height, current vectors, temperature, salinity, etc.), on the same discrete grid as the ocean model. The analysis may be the end result of a forecast system, or it may provide input for the computation of other diagnostics, such as the computation of transport across transects. Sometimes the analysis is compared with new observations to either verify the analysis or assess the quality of the new observations.

Analysis increment. The analysis increment is the difference between the analysis field and the background. Equivalently, the analysis increment is the correction to the background field which results in the optimal analysis.

Background. The background state, sometimes called the "first guess," is the prediction of the oceanic state prior to the assimilation of data. In the absence of other information, a climatology or other dynamics-free estimate of the ocean may serve as the background.

Control variables. The control variables, sometimes simply called "the controls," are the independent quantities to be estimated in the data assimilation. The dynamical model consists of a set of diagnostic or prognostic relations which relate the control variables to the state variables. There is not a unique partition between control variables and state variables, but the controls are generally regarded as inputs while the state is regarded as an output. For example, in the 4D-Var algorithm, the model's initial conditions are regarded as the control variable; although, these same initial conditions and the resulting forecast may be regarded as state variables. In the Kalman Filter, the system noise is regarded as the control variable.

Data assimilation. Data assimilation is the systematic methodology of incorporating information from an observing system into a dynamical model in such a manner that an optimality criterion is satisfied. Optimality criteria typically express a maximum likelihood or minimum mean square error criterion. In practice, many data assimilation systems find analysis states which only approximately satisfy the stated optimality criterion. This is usually considered acceptable because the optimality criteria are based on error models which are themselves approximate.

Dynamical model. It is assumed that the state of the ocean is predicted or modeled by a set of dynamics, e.g., Newton's Laws expressed in the usual formulations of continuum mechanics such as the Navier-Stokes equations or the shallow water equations. The dynamical model is assumed to be formulated as a mathematically well-posed initial-boundary-value problem.

Error model. An error model is a description of the probability distribution of some possibly multivariate or field quantity. For example, an error model for a measurement of temperature might minimally declare that the errors have zero mean (are unbiased), known variance, σ^2, and are Gaussian distributed. An error model for an observing system would minimally consist of error models for the individual observations. An error model for a set of dynamics would minimally consist of error sub-models for the initial conditions, boundary conditions, and other model inhomogeneities. Each of these sub-models would be characterized by its own space-time covariance structure, as appropriate.

Generalized inversion. Because the oceanic state is, in principle, uniquely determined by the ocean dynamics, the addition of observational data in data assimilation makes the oceanic state an over-determined quantity. Alternately, if we consider the measurement error and the dynamics errors to be unknown quantities, which ought to be determined by the data assimilation, the problem of identifying the oceanic state together with the error fields is an under-determined problem. From this perspective, data assimilation may be regarded as a generalized inversion of the ocean model dynamics. The generalized inverse of the dynamical model consists of the analysis fields, as well as estimates of the error fields, the statistics of which were specified a priori by the error models. Bennett (1992, 2002) uses this language to describe data assimilation, thus highlighting a unifying theme of mathematical inverse theory, statistical estimation, control theory, and non-parametric estimation methods.

Innovation vector or residual vector. The innovation vector is the difference between the observation vector and an observation of the ocean state, written as y − Hx in the notation of this article. Sometimes the background innovations, y − Hx_b, may be distinguished from the analysis innovations, y − Hx_a.

Objective analysis. The technique of objective analysis (also called optimal interpolation, statistical interpolation, and Gauss-Markov smoothing) was originally applied to create a set of consistently gridded fields from sparse observations (Bretherton et al. 1976). When a background field is present, the corrections applied to the background may be called analysis increments. The technique has a close relationship with multivariate smoothing splines (Wahba 1990; Bennett 1992).

Observing system. An observing system produces observations or measurements of some subset of variables characterizing the ocean. Observing systems are defined by a set of observation operators, also called measurement kernels, one per measurement, which mathematically represent the mapping from the oceanic state to a finite set of real values.

State variables. State variables are those fields or quantities which characterize the oceanic state to be estimated. More abstractly, state variables are elements in the domain of the observation operators.

State vector. A state vector is a finite-dimensional state variable. Even when the dynamics are represented by a set of partial differential equations, the computational implementation usually requires projection onto a finite-dimensional vector.
Software and WWW Resources

The Inverse Ocean Model (IOM) is a software toolkit designed to produce a custom variational data assimilation system from basic modeling components provided by the user (Bennett et al. 2008). The software features a graphical user interface (GUI) which is used to select assimilation or analysis algorithms and to monitor program execution. The IOM has been used successfully with parallel and serial codes, including structured and unstructured grid finite-difference models and finite-element models (Muccino et al. 2008). The website, http://iom.asu.edu, contains pedagogical material as well as software. Another software system, the Data Assimilation Research Testbed (DART), provides a framework for developing, testing, and distributing ensemble data assimilation methodologies (Anderson et al. 2009). The DART system uses algorithms which do not require adjoint codes, so it has been used by a relatively large number of researchers and educators. The Tangent linear and Adjoint Model Compiler (TAMC) was developed by Giering and Kaminski (1998) as a source-to-source Fortran translator for the generation of tangent-linear and adjoint codes needed in variational data assimilation and sensitivity studies. The company, FastOpt, maintains an active presence in this field, and is a good source of information on the latest developments and conferences (http://fastopt.com/). Another source of information and software is the ACTS-Adjoint Compiler project, which has created OpenAD, a tool for automatic differentiation of C and Fortran code (http://www.mcs.anl.gov/OpenAD/). A recent highlight of their work is the development of adjoint codes from models which use domain decomposition via the parallel MPI library (Utke et al. 2009). A brief search for "data assimilation" on the web will uncover many course materials and tutorials. One notable source is the European Centre for Medium-Range Weather Forecasts (ECMWF), which publishes online an excellent series of lecture notes on data assimilation and the use of satellite data. See http://www.ecmwf.int/newsevents/training/rcourse_notes/ for their Meteorological Training Course Lecture Notes.
References

Ambadan JT, Tang Y (2009) Sigma-point Kalman Filter data assimilation for strongly nonlinear systems. J Atmos Sci 66:261–285
Amodei L (1995) Solution approchee pour un probleme d'assimilation avec prise en compte l'erreur de modele. C R Acad Sci 321:1087–1094
Anderson J, Hoar T, Raeder K, Liu H, Collins N, Torn R, Avellano A (2009) The data assimilation research testbed: a community facility. Bull Am Meteorol Soc 90:1283–1296
Atlas R (1997) Atmospheric observations and experiments to assess their usefulness in data assimilation. J Meteorol Soc Japan 75:111–130
Australian Bureau of Meteorology (2009) Ocean analysis. http://www.bom.gov.au/oceanography/analysis.shtml
Baker NL, Daley R (2000) Observation and background adjoint sensitivity in the adaptive observation-targeting problem. Q J Roy Meteorol Soc 126:1431–1453
Balmaseda MA, Vidard A, Anderson DL (2007) The ECMWF System 3 Ocean Analysis System, ECMWF Technical Memorandum. Technical Report 7. European Center for Medium-Range Weather Forecasts
Bennett AF (1992) Inverse Methods in Physical Oceanography, 1st edn. Cambridge University Press, New York, p 346
Bennett AF (2002) Inverse Modeling of the Ocean and Atmosphere. Cambridge University Press, New York, p 234
Bennett AF, Budgell W (1987) Ocean data assimilation and the Kalman filter: spatial regularity. J Phys Oceanogr 17:1583–1601
Bennett AF, Thorburn MA (1992) The generalized inverse of a nonlinear quasigeostrophic ocean circulation model. J Phys Oceanogr 22:213–230
Bennett AF, Chua BS, Ngodock H, Harrison DE, McPhaden MJ (2006) Generalized inversion of the Gent-Cane model of the Tropical Pacific with Tropical Atmosphere-Ocean (TAO) data. J Mar Res 64:1–42
Bennett AF, Chua BS, Pflaum BL, Erwig M, Fu Z, Loft RD, Muccino JC (2008) The inverse ocean modeling system. I: implementation. J Atmos Oceanic Technol 25:1608–1622
Bretherton F, Davis R, Fandry C (1976) A technique for objective analysis and design of oceanographic experiments applied to MODE-73. Deep Sea Res 23:559–582
Broquet G, Edwards CA, Moore AM, Powell BS, Veneziani M, Doyle JD (2009) Application of 4D-variational data assimilation to the California Current system. Dyn Atmos Oceans 48:69–92
Cane M, Kaplan A, Miller R, Tang B, Hackert E, Busalacchi A (1996) Mapping Tropical Pacific sea level: data assimilation via a reduced state space Kalman filter. J Geophys Res 101:22599–22617
Chao Y, Li Z, Farrara J, Hung P (2009) Blending sea surface temperatures from multiple satellites and in situ observations for coastal oceans. J Atmos Oceanic Technol 26:1415–1426
Chapnik B, Desroziers G, Rabier F, Talagrand O (2006) Diagnosis and tuning of observational error in quasi-operational data assimilation setting. Q J Roy Meteorol Soc 132:543–565
Chua B, Bennett AF (2001) An inverse ocean modeling system. Ocean Modelling 3:137–165
Cohn SE, Da Silva A, Guo J, Sienkiewicz M, Lamich D (1998) Assessing the effects of data selection with the DAO physical-space statistical analysis system. Mon Wea Rev 126:2913–2926
Courtier P (1997) Dual formulation of four-dimensional assimilation. Q J Roy Meteorol Soc 123:2449–2461
Courtier P, Derber J, Errico R, Louis J, Vukicevic T (1993) Important literature on the use of adjoint, variational methods and the Kalman Filter in meteorology. Tellus 45A:342–357
Courtier P, Thepaut J, Hollingsworth A (1994) A strategy for operational implementation of 4DVar, using an incremental approach. Q J Roy Meteorol Soc 120:1367–1387
Cummings J, Bertino L, Brasseur P, Fukumori I, Kamachi M, Martin MJ, Mogensen K, Oke P, Testut CE, Verron J, Weaver A (2009) Ocean data assimilation systems for GODAE. Oceanography 22:96–109
Daescu DN, Navon IM (2008) A dual-weighted approach to order reduction in 4DVAR data assimilation. Mon Wea Rev 136:1026–1041
Daley R (1991) Atmospheric Data Analysis. Cambridge University Press, New York, p 457
De Mey P, Craig P, Kindle J, Ishikawa Y, Proctor R, Thompson K, Zhu J (2007) Towards the assessment and demonstration of the value of GODAE results for coastal and shelf seas models and forecasting systems, 2nd edn. GODAE white paper.
http://www.godae.org/modules/documents/documents/GODAE-CSSWG-paper-ed2.pdf. Accessed 15 March 2010
Dee DP, daSilva AM (1999) Maximum-likelihood estimation of forecast and observations error covariance parameters. Part I: methodology. Mon Wea Rev 127:1822–1834
Desroziers G, Ivanov S (2001) Diagnosis and adaptive tuning of observation-error parameters in a variational assimilation. Q J Roy Meteorol Soc 127:1433–1452
Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanography 22:81–95
Egbert GD, Bennett AF, Foreman M (1994) TOPEX/POSEIDON tides estimated using a global inverse model. J Geophys Res 99:24821–24852
Evensen G (1997) Advanced data assimilation for strongly nonlinear dynamics. Mon Wea Rev 125:1342–1354
Evensen G (2006) Data assimilation: the ensemble Kalman Filter. Springer, Berlin, p 280
Gelaro R, Zhu Y (2009) Examination of observation impacts derived from observing system experiments (OSEs) and adjoint models. Tellus 61A:179–193
Gelb A (ed) (1974) Applied Optimal Estimation. MIT Press, Cambridge
Ghil M (1989) Meteorological data assimilation for oceanographers. Part I: description and theoretical framework. Dyn Atmos Oceans 13:171–218
Giering R, Kaminski T (1998) Recipes for adjoint code construction. ACM Trans Math Software 24:437–474. http://autodiff.com
Girard D (1989) A fast Monte-Carlo cross-validation procedure for large least squares problems with noisy data. Numer Math 56:1–23
Golub G, Van Loan C (1989) Matrix Computations, 2nd edn. Johns Hopkins University Press, Baltimore, p 642
He R, Wilkin JL (2006) Tides on the Southeast New England shelf: a view from a hybrid data assimilative modeling approach. J Geophys Res 111:C08002
Heemink AW, Mouthaan EE, Roest MR, Vollebregt EA, Robaczewska KB, Verlaan M (2002) Inverse 3D shallow water flow modeling of the continental shelf. Cont Shelf Res 22:465–484
Hoffman RN, Ponte RM, Kostelich EJ, Blumberg A, Szunyogh I, Vinogradov SV, Henderson JM (2008) A simulation study using a local ensemble transform Kalman Filter for data assimilation in New York Harbor. J Atmos Oceanic Technol 25:1638–1656
Hofmann EE, Druon JN, Fennel K, Friedrichs M, Haidvogel D, Lee C, Mannino A, McClain C, Najjar R, Siewert J, O'Reilly J, Pollard D, Previdi M, Seitzinger S, Signorini S, Wilkin J (2008) Eastern U.S. continental shelf carbon budget: Integrating models, data assimilation, and analysis. Oceanography 21:86–104
Hollingsworth A, Lonnberg P (1986) The statistical structure of short-range forecast errors as determined from radiosonde data. Part I: the wind field. Tellus 38:111–136
Hutchinson MF (1989) A stochastic estimator of the trace of the influence matrix for Laplacian smoothing splines. Comm Statist Simulation Comput 18:1059–1076
Ide K, Courtier P, Ghil M, Lorenc AC (1997) Unified notation for data assimilation: operational, sequential and variational. J Meteorol Soc Japan 75:181–189
Jet Propulsion Laboratory (2009) JPL ECCO Ocean Data Assimilation. http://ecco.jpl.nasa.gov/external/index.php
Kalnay E (2003) Atmospheric Modeling, Data Assimilation, and Predictability. Cambridge University Press, New York, p 341
Kurapov AL, Allen JS, Egbert GD, Miller RN, Kosro PM, Levine MD, Boyd T, Barth JA (2005) Assimilation of moored velocity data in a model of coastal wind-driven circulation off Oregon: multivariate capabilities. J Geophys Res 110:C10S08
Lardner RW, Al-Rabeh AH, Gunay N (1993) Optimal estimation of parameters for a two-dimensional hydrodynamical model of the Arabian Gulf. J Geophys Res 98:18229–18242
Lermusiaux PF, Robinson AR (1999) Data assimilation via error subspace statistical estimation. Part I: theory and schemes. Mon Wea Rev 127:1385–1407
Li Z, Chao Y, McWilliams J, Ide K (2008) A three-dimensional variational data assimilation scheme for the regional ocean modeling system.
J Atmos Oceanic Technol 25:2074–2090
Logutov OG, Lermusiaux PF (2008) Inverse barotropic tidal estimation for regional ocean applications. Ocean Model 25:17–34
Lorenc A (1986) Analysis methods for numerical weather prediction. Q J Roy Meteorol Soc 112:1177–1194
Losch M, Wunsch C (2003) Bottom topography as a control variable in an ocean model. J Atmos Oceanic Technol 20:1685–1696
McIntosh P (1987) Systematic design of observational arrays. J Phys Oceanogr 17:885–902
Miller RN, Ghil M, Gauthiez F (1994) Advanced data assimilation in strongly nonlinear dynamical systems. J Atmos Sci 51:1037–1056
Moore AM, Arango HG, Di Lorenzo E, Cornuelle BD, Miller AJ, Neilson DJ (2004) A comprehensive ocean prediction and analysis system based on the tangent linear and adjoint of a regional ocean model. Ocean Model 7:227–258
Moore AM, Arango HG, DiLorenzo E, Miller AJ, Cornuelle BD (2009) An adjoint sensitivity analysis of the Southern California Current circulation and ecosystem. J Phys Oceanogr 39:702–720
Mourre B, De Mey P, Lyard F, Le Provost C (2004) Assimilation of sea level data over continental shelves: an ensemble method for the exploration of model errors due to uncertainties in bathymetry. Dyn Atmos Oceans 38:93–121
Muccino JC, Bennett AF, Hubele NF (2004) Significance testing for variational assimilation. Q J Roy Meteorol Soc 130:1815–1838
Muccino JC, Arango H, Bennett AB, Chua BS, Cornuelle B, DiLorenzo E, Egbert GD, Hao L, Levin J, Moore AM, Zaron ED (2008) The inverse ocean modeling system. II: applications. J Atmos Oceanic Technol 25:1623–1637
National Center for Environmental Prediction (2009) Global ocean data assimilation system (GODAS). http://www.cpc.ncep.noaa.gov/products/GODAS
Navon I, Legler D (1987) Conjugate-gradient methods for large-scale minimization in meteorology. Mon Wea Rev 115:1479–1502
Ngodock HE, Smith SR, Jacobs GA (2009) Cycling the representer method with nonlinear models. In: Park SK, Xu L (eds) Data assimilation for atmospheric, oceanic, and hydrologic applications. Springer, Berlin, pp 321–340
Oke PR, Sakov P (2008) Representativeness error of oceanic observations for data assimilation. J Atmos Oceanic Technol 25:1004–1017
Oke PR, Allen JS, Miller RN, Egbert GD, Kosro PM (2002) Assimilation of surface velocity data into a primitive equation coastal ocean model. J Geophys Res 107:3122
Paduan JD, Shulman I (2004) HF radar data assimilation in the Monterey Bay area. J Geophys Res 109. doi:10.1029/2003JC001949
Powell BS, Arango HG, Moore AM, DiLorenzo E, Milliff RF, Foley D (2008) 4DVAR data assimilation in the Intra-American Sea with the Regional Ocean Modeling System (ROMS). Ocean Model 23:130–145
Powell BS, Moore AM, Arango HG, DiLorenzo E, Milliff RF, Leben RR (2009) Near realtime ocean circulation assimilation and prediction in the Intra-American Sea with ROMS. Dyn Atmos Oceans 48:16–45
Purser RJ, Wu W-S, Parrish DF, Roberts NM (2003a) Numerical aspects of the application of recursive filters to variational statistical analysis. Part I: spatially homogeneous and isotropic Gaussian covariances. Mon Wea Rev 131:1524–1535
Purser RJ, Wu W-S, Parrish DF, Roberts NM (2003b) Numerical aspects of the application of recursive filters to variational statistical analysis. Part II: spatially inhomogeneous and anisotropic general covariances. Mon Wea Rev 131:1536–1548
Rabier F, Jarvinen H, Klinker E, Mahfouf JF, Simmons A (2000) The ECMWF operational implementation of four-dimensional variational assimilation. I: experimental results with simplified physics. Q J Roy Meteorol Soc 126:1143–1170
Richman JG, Miller RN, Spitz YH (2005) Error estimates for assimilation of satellite sea surface temperature data in ocean climate models. Geophys Res Lett 32:L18608
Ross S (2005) A First Course in Probability, 7th edn.
Prentice Hall, New York
Stewart LM, Dance SL, Nichols NK (2008) Correlated observation errors in data assimilation. Int J Numer Meth Fluids 56:1521–1527
Strong GM (2007) Udana. Forgotten Books, New York, pp 68–69
Szunyogh I, Kostelich EJ, Gyarmati G, Kalnay E, Hunt BR, Ott E, Satterfield E, Yorke JA (2008) A local ensemble transform Kalman filter data assimilation system for the NCEP global model. Tellus A 60:113–130
Talagrand O (1997) Assimilation of observations, an introduction. J Meteorol Soc Japan 75:191–209
Talagrand O (1999) A posterior verification of analysis and assimilation algorithms. Proceedings of a Workshop on Diagnosis of Data Assimilation Systems. ECMWF, Reading, UK
Talagrand O, Courtier P (1987) Variational assimilation of meteorological observations with the adjoint vorticity equation I, theory. Q J Roy Meteorol Soc 113:1311–1328
Todling R, Cohn SE (1994) Suboptimal schemes for atmospheric data assimilation based on the Kalman filter. Mon Wea Rev 122:2530–2557
University of Maryland (2009) Simple Ocean Data Assimilation (SODA). http://www.atmos.umd.edu/~ocean/
Utke J, Hascoet L, Heimbach P, Hill C, Hovland P, Naumann U (2009) Toward adjoinable MPI. Proceedings of the 10th IEEE International Workshop on Parallel and Distributed Scientific and Engineering Computing, PDSEC-09. http://doi.ieeecomputersociety.org/10.1109/IPDPS.2009.5161165
Wahba G (1990) Spline models for observational data. SIAM publications, Philadelphia, p 169
Wahba G, Johnson DR, Gao F, Gong J (1995) Adaptive tuning of numerical weather prediction models: randomized GCV in three- and four-dimensional data assimilation. Mon Wea Rev 123:3358–3369
Weaver A, Courtier P (2001) Correlation modelling on the sphere using a generalized diffusion equation. Q J Roy Meteorol Soc 127:1815–1846
Wunsch C (1996) The Ocean Circulation Inverse Problem. Cambridge University Press, New York
Zaron ED (2006) A comparison of data assimilation methods using a planetary geostrophic model. Mon Wea Rev 134:1316–1328
Zhang S, Harrison MJ, Rosati A, Wittenberg A (2007) System design and evaluation of coupled ensemble data assimilation for global oceanic climate studies. Mon Wea Rev 135:3541–3564
Zou X, Navon IM, Berger M, Phua KH, Schlick T, LeDimet FX (1993) Numerical experience with limited-memory, quasi-Newton methods for large-scale unconstrained nonlinear minimization. SIAM J Optimization 3:582–608
Chapter 14
Adjoint Data Assimilation Methods
Andrew M. Moore
Abstract  The use of adjoint methods in data assimilation is reviewed, and illustrative examples are presented.
14.1 Introduction

Adjoint operators are central to many operational data assimilation systems used for numerical weather prediction, and they are also gaining popularity in oceanography. In this chapter we review the use of adjoint methods for data assimilation. We begin in Sect. 14.2 with an exploration of the concept of the adjoint of a linear operator and the important properties that make it an indispensable tool for data assimilation. Familiar illustrative examples are used throughout to highlight the important ideas. The fundamental concepts underpinning 4-dimensional variational data assimilation (4D-Var) are reviewed in Sect. 14.3, and in Sect. 14.4 example 4D-Var calculations for the California Current System are presented using the Regional Ocean Modeling System (ROMS).
14.2 What Is an Adjoint Operator?

Adjoints exist only for linear operators. The concept of an adjoint operator is best illustrated by considering first the discrete form of linear operators and functions, namely matrices and vectors. The following is a brief exposé on adjoint operators, but an excellent in-depth description can be found in the classic text by Lanczos (1961).
14.2.1 Spaces

Any continuous linear operator in function space has a discrete analog in the form of a matrix. Similarly, any continuous function has a discrete analog in the form of a vector. With this in mind, consider the N × M rectangular matrix A. The matrix A operates on vectors u of length M and yields vectors w of length N, so that w = Au. We therefore say that A maps from a space of dimension M ("M-space") to a space of dimension N ("N-space"). The adjoint of the operator A can be identified with the matrix transpose, A^T. The formal connection between the matrix transpose and an adjoint operator will be made in Sect. 14.2.2, but for now identifying the adjoint with the matrix transpose will suffice. The adjoint A^T is an M × N matrix and operates on vectors v of length N to yield vectors z of length M, namely z = A^T v. So we say that the adjoint maps from N-space to M-space.

Suppose that we wish to solve the system of linear equations y = Ax given A and y. This represents a system of N equations for the M unknown elements of x. If N < M, the system is said to be underdetermined since there are fewer equations than unknowns. In this case we might ask whether a unique or meaningful solution exists for x. The answer is yes, and it is given by the so-called "natural solution," for which the adjoint operator plays a critical role. Suppose we search for a solution of the form x = A^T s. While the vector x resides in M-space, the vector s resides in N-space, so we are effectively restricting our search for solutions to N-space, the space in which the known vector y resides. We have now reduced the problem to solving y = AA^T s, which is well posed since AA^T is an N × N matrix that maps s to the N known elements of y. The solution x = A^T s is called the natural solution, and s is referred to as the generating function. As we shall see, generating functions are important players in some approaches to data assimilation.

Let us consider a familiar geophysical example in which the natural solution plays a critical role. The vertical component of relative vorticity of a fluid element is given by ζ = ∂v/∂x − ∂u/∂y, where (u,v) are the x and y components of velocity. Suppose we are given a field of values of ζ at discrete points (x,y), such as on the grid of a numerical model. This is the discrete analog of a function which we will denote by the vector ζ, where each element of ζ represents a grid point value of ζ. Given the field ζ in N-space, how do we find the corresponding velocity components (u,v) in M-space, where M = 2N in this example, and u and v are the vectors of grid point values of u and v respectively? This is an underdetermined linear system for which a natural solution exists of the form:

(u, v)^T = A^T s

where A = (−∂/∂y  ∂/∂x), and ζ = AA^T s = −(∂²/∂x² + ∂²/∂y²)s. If we identify s = −ψ we recover the familiar equation ζ = ∇²ψ relating vorticity ζ to stream function ψ that arises from the Helmholtz theorem for a horizontally non-divergent flow. This example reveals that the stream function is the generating function for vorticity for a horizontally non-divergent flow.
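For small discrete problems the natural solution is easy to compute. The following sketch, a hypothetical NumPy example that is not part of the original text, constructs the minimum-norm solution x = A^T s of an underdetermined system, with the generating function s obtained from the well-posed N × N system AA^T s = y:

```python
import numpy as np

# Underdetermined system: N equations, M unknowns (N < M).
rng = np.random.default_rng(1)
N, M = 5, 12
A = rng.standard_normal((N, M))   # maps M-space -> N-space
y = rng.standard_normal(N)

# Search for x in the "activated" subspace: x = A^T s, with s in N-space.
s = np.linalg.solve(A @ A.T, y)   # generating function: (A A^T) s = y
x_nat = A.T @ s                   # natural (minimum-norm) solution

print(np.allclose(A @ x_nat, y))  # True: x_nat satisfies the equations
# Any null-space component x0 (with A x0 = 0) could be added without
# changing A x, but it would increase the norm of the solution.
```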
Recall that the N × M matrix A has an operator equivalent A in function space, so for the case N < M the operator A acts only on part of the function space. We say that only some of the dimensions of the function space are activated by A. In the discrete case at most N of the possible M dimensions are activated by the matrix A. More generally, the activated dimensions of A correspond to the p < N eigenvectors of AA^T with non-zero eigenvalues λi. The remaining non-activated dimensions, with λi = 0 for p < i ≤ N, are referred to as the "null space." The M × N adjoint operator A^T identifies the activated part of the M-space and ignores the null space. The natural solution obtained from y = AA^T s therefore represents the solution that exists only in the activated dimensions of A. In the parlance of linear algebra, AA^T is the projection onto the subspace spanned by the range of A. In the theory of linear differential equations, the natural solution is also called the particular integral. Solutions that reside in the null space satisfy the equation Ax = 0 and in the theory of linear differential equations are referred to as the complementary function. The general solution of any linear differential equation or equivalent discrete linear system is the sum of the particular integral (natural solution) and the complementary function. In both function space and discrete space the adjoint operator identifies the space in which these two parts of the general solution reside. As we shall see later, we can use this important property of adjoint operators for data assimilation.

Before concluding this discussion of vector spaces, it is instructive to consider the N × M matrix A where now N > M. The corresponding linear system y = Ax may be over-determined or constrained since there are more equations than there are unknown elements of x. Recall that the adjoint A^T maps vectors from N-space to M-space where the solution for x resides, so it is tempting to solve the system A^T y = A^T A x. In this case A^T A is the projection onto the subspace spanned by A^T. It is easy to show that the solution of this system minimizes (Ax − y)^T (Ax − y) and is the familiar least squares solution for an over-determined system. Thus we see that the adjoint operator plays a critical role in identifying the least squares solution of over-determined systems as well.
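A corresponding sketch for the over-determined case, again hypothetical and not from the original text, shows that the normal equations A^T A x = A^T y obtained by applying the adjoint deliver the least squares solution:

```python
import numpy as np

# Over-determined system: N equations, M unknowns (N > M).
rng = np.random.default_rng(2)
N, M = 12, 5
A = rng.standard_normal((N, M))
y = rng.standard_normal(N)

# Applying the adjoint A^T maps the problem back to M-space:
x_ls = np.linalg.solve(A.T @ A, A.T @ y)   # normal equations

# Same answer as a library least-squares solver:
x_ref = np.linalg.lstsq(A, y, rcond=None)[0]
print(np.allclose(x_ls, x_ref))            # True
```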
14.2.2 Operator Adjoints

To complete the connection between operator adjoints and matrices, it is instructive to return to function space. In function space, we will denote a linear operator as A and the adjoint of the operator as A+. Consider the functions u and w such that w = Au, which is the continuous analog of the discrete case considered in Sect. 14.2.1. For any two functions v and w, there will in general exist an inner-product and an associated norm which we will denote by {v,w}. An adjoint operator is always associated with a particular inner-product and is defined by {v,Au} = {A+v,u}, often referred to as the Green's identity. The adjoint operators associated with different inner-products are in fact linearly related. To illustrate, suppose we let {v,w} represent the inner-product of the Euclidean norm, and define a different inner-product as
(v,w) = {v,Mw}, where for now M is a linear, self-adjoint (i.e. M+ = M), invertible operator. The adjoint of A with respect to the new inner-product will be denoted A† and is defined by the Green's identity (v,Au) = (A†v,u) = {M^-1A+Mv, Mu}, which shows that A† = M^-1A+M.

In the discrete case, the Green's identity is referred to as the bilinear identity, and the inner-product of function space is replaced by a dot-product, so that for the Euclidean norm we have {v,w} ≡ v^T w. For w = Au, the bilinear identity for the adjoint becomes v^T Au = (A^T v)^T u = u^T A^T v, showing that for the Euclidean norm the discrete equivalent of the adjoint operator A+ is the matrix transpose A^T. If A is an N × M matrix, v and u reside in N-space and M-space respectively. While the dot-product v^T w = v^T Au is evaluated in N-space it is unique since z = A^T v, and in M-space u^T z = u^T A^T v = v^T Au = v^T w.

Exercise 1: If A^T is the adjoint of the operator represented by the square matrix A with respect to the Euclidean norm v^T w, derive an expression for the adjoint operator A† with respect to the norm v^T M w, where M is a symmetric, invertible matrix, and show that A† = M^-1 A^T M.
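In discrete form these identities are easy to check numerically. The "dot-product test" below, a hypothetical NumPy sketch that is not part of the original text, verifies the bilinear identity for the Euclidean inner-product and the relation A† = M^-1 A^T M for a weighted inner-product (v,w) = v^T M w:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
A = rng.standard_normal((N, N))          # a linear operator on N-space
u = rng.standard_normal(N)
v = rng.standard_normal(N)

# Bilinear (Green's) identity for the Euclidean inner-product:
print(np.allclose(v @ (A @ u), (A.T @ v) @ u))      # True

# Weighted inner-product (v, w) = v^T M w with M symmetric positive definite:
B = rng.standard_normal((N, N))
M = B @ B.T + N * np.eye(N)
A_dag = np.linalg.inv(M) @ A.T @ M                  # adjoint w.r.t. the M-norm
lhs = v @ M @ (A @ u)                               # (v, A u)
rhs = (A_dag @ v) @ M @ u                           # (A† v, u)
print(np.allclose(lhs, rhs))                        # True
```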
14.2.3 An Illustrative Example

The ideas of Sects. 14.2.1 and 14.2.2 are best illustrated using a simple yet familiar geophysical example. Consider a rectangular, homogeneous, flat bottomed ocean of undisturbed depth H, in the form of a rotating channel that spans the Cartesian domain 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, that is periodic in x, and subject to zero normal flow boundary conditions on the circulation at y = 0 and y = 1.

14.2.3.1 The Linear Shallow Water Equations

We will consider first the case of linear waves in an ocean in which the circulation is described by the linear shallow water equations:

∂u/∂t − fv = −g ∂h/∂x    (14.1)

∂v/∂t + fu = −g ∂h/∂y    (14.2)

∂h/∂t + ∂(Hu)/∂x + ∂(Hv)/∂y = 0    (14.3)
where (u,v) are the components of velocity in the x and y directions, h is the sea surface displacement, f = f(y) is the Coriolis parameter, H is a constant undisturbed depth, and g is the acceleration due to gravity. The zero normal flow boundary conditions correspond to v = 0 at y = 0 and y = 1, while periodicity in x requires that u(0,y) = u(1,y), v(0,y) = v(1,y), and h(0,y) = h(1,y). Recall that the adjoint of
Eqs. (14.1)–(14.3) depends on the choice of an inner-product. A natural inner-product for the shallow water equations is that which yields the energy norm:

E = ∫₀¹∫₀¹ [ ½ H (u² + v²) + ½ g h² ] dx dy.    (14.4)

If we introduce the shorthand notation s = (u,v,h) then Eqs. (14.1)–(14.3) can be written as s_t + As = 0, where the subscript denotes differentiation with respect to time, and the operator A is given by:

    A = [    0         −f       g ∂/∂x
             f          0       g ∂/∂y
          H ∂/∂x     H ∂/∂y        0   ].    (14.5)

The adjoint of (14.1)–(14.3) is defined by the Green's identity, which for the inner-product associated with the energy norm can be written as:

∫₀¹∫₀¹ (s+)^T M A s dx dy = ∫₀¹∫₀¹ s^T M A+ s+ dx dy    (14.6)
where s+ = (u+, v+, h+) is a function in the space on which the adjoint A+ operates, and M = diag(H, H, g). To apply (14.6) to the shallow water equations, consider I = ∫₀¹∫₀¹ [H u+ × (14.1) + H v+ × (14.2) + g h+ × (14.3)] dx dy = 0. After integration by parts, it is easy to show that the adjoint of (14.1)–(14.3) is given by −∂s+/∂t + A+ s+ = 0, where the negative time derivative indicates that time is reversed. The adjoint operator A+ is given by:

    A+ = − [    0         −f       g ∂/∂x
                f          0       g ∂/∂y
             H ∂/∂x     H ∂/∂y        0   ]    (14.7)

where the adjoint variables are periodic in x and satisfy the boundary conditions v+ = 0 at y = 0 and y = 1. In addition, s and s+ must satisfy the condition:

∂/∂t ∫₀¹∫₀¹ [ H (u u+ + v v+) + g h h+ ] dx dy = 0    (14.8)

which is equivalent to time invariance of the inner-product {s+, Ms}. Comparing (14.5) and (14.7) shows that A+ = −A with respect to the energy norm. In addition, the adjoint equation −∂s+/∂t + A+ s+ = 0 also satisfies the zero normal flow and periodic boundary conditions that are identical to those imposed on (14.1)–(14.3). Since A+ = −A, the adjoint equation can also be written as −∂s+/∂t − As+ = 0.

Exercise 2: Using I = ∫₀¹∫₀¹ [H u+ × (14.1) + H v+ × (14.2) + g h+ × (14.3)] dx dy = 0, derive the adjoint shallow water operator given by (14.7), and show that as a consequence of I = 0, the adjoint equation −∂s+/∂t + A+ s+ = 0 must satisfy (a) zero normal
flow boundary conditions at y = 0 and y = 1, and (b) the condition given by Eq. (14.8).

Wave solutions of s_t + As = 0 given by (14.1)–(14.3) take the form of eastward and westward propagating inertia-gravity waves and Rossby waves (Gill 1982). Similarly, wave solutions exist for the adjoint equation −∂s+/∂t − As+ = 0 (recall A+ = −A) but with opposite phase and group velocities because of the reversal of time. For example, long wavelength Rossby waves that carry energy westward in the shallow water equations s_t + As = 0 will carry energy eastward in the adjoint equations −∂s+/∂t − As+ = 0. The adjoint equation can also be written as ∂s+/∂t + As+ = 0, showing that it is mathematically equivalent to Eqs. (14.1)–(14.3). Therefore, for the same initial conditions the solutions s = (u,v,h) and s+ = (u+,v+,h+) will be identical, in which case Eq. (14.8) is an expression of energy conservation.
14.2.3.2 The Linear Shallow Water Equations in the Presence of a Mean Circulation

Consider now the case of linear waves in the same periodic rectangular ocean, which is now in motion with a mean geostrophic circulation ū = −(g/f) ∂h̄/∂y. The linearized shallow water equations in this case are:

∂u/∂t + ū ∂u/∂x + v ∂ū/∂y − fv = −g ∂h/∂x    (14.9)

∂v/∂t + ū ∂v/∂x + fu = −g ∂h/∂y    (14.10)

∂h/∂t + ∂(H̃u)/∂x + ∂(H̃v)/∂y = 0    (14.11)

where H̃ = H + h̄, and h̄ is the sea surface displacement associated with the geostrophic circulation. Using the same compact form as before, we can express (14.9)–(14.11) as s_t + As = 0, where the operator A is now given by:

    A = [   ū ∂/∂x     −f + ∂ū/∂y        g ∂/∂x
               f           ū ∂/∂x         g ∂/∂y
            H̃ ∂/∂x    H̃ ∂/∂y + ∂H̃/∂y       0    ].    (14.12)

Applying the Green's identity and using the energy norm, the adjoint equation is of the form −∂s+/∂t + A+ s+ = 0, where now:

    A+ = − [   ū ∂/∂x          −f           g ∂/∂x
             f − ∂ū/∂y        ū ∂/∂x         g ∂/∂y
              H̃ ∂/∂x     H̃ ∂/∂y + ∂H̃/∂y       0    ].    (14.13)
As before, Eq. (14.8) is a necessary condition, and the adjoint variables are periodic in x and satisfy the boundary condition v+ = 0 at y = 0 and y = 1. In this case, the y-gradient of the mean circulation ū can act as a source of energy for linear waves, and is the familiar source of barotropic instability if the waves can undergo sustained exponential growth. In this case, solutions of s_t + As = 0 and ∂s+/∂t − A+ s+ = 0 with the same initial conditions will no longer be identical, and energy is no longer conserved. Sustained exponential growth will occur if the potential vorticity gradient ∂f/∂y − ∂²ū/∂y² changes sign anywhere within the channel (Pedlosky 1987).

Exercise 3: Using I = ∫₀¹∫₀¹ [H̃ u+ × (14.9) + H̃ v+ × (14.10) + g h+ × (14.11)] dx dy = 0, derive the adjoint shallow water operator given by (14.13), and show that as a consequence of I = 0, the adjoint equation −∂s+/∂t + A+ s+ = 0 must satisfy (a) zero normal flow boundary conditions at y = 0 and y = 1, and (b) the condition given by Eq. (14.8).
14.3 Variational Data Assimilation

14.3.1 Notation

Before proceeding to describe the important role of adjoint operators in variational data assimilation methods, it is necessary to introduce some notation. Ocean models solve the discrete equations of motion on grids in both space and time. There is a wide range of ocean models available (see the chapters by Barnier and Chassignet), and many use different kinds of grid configurations and coordinate systems (e.g., staggered grids, terrain following vertical coordinates, isopycnal vertical coordinates, etc.). Nonetheless, all models solve for a standard set of prognostic variables, typically temperature, salinity, velocity, and free surface displacement, and can be expressed in a generic symbolic form using the concept of a state-vector. To this end we introduce a state-vector x(ti) which represents a vector of all grid point values of the prognostic state variables in space at a given time ti. The ocean state x(ti) will depend on the state at some earlier time x(ti−1) and upon the ocean surface forcing f(ti) and boundary conditions b(ti) over the time interval [ti−1, ti]. The discrete equations of motion will in general be nonlinear, and can be represented by the discrete nonlinear operator M. Thus the time evolution of the ocean state by an ocean model can be expressed in a convenient and compact form as:
x(ti) = M(ti, ti−1)(x(ti−1), f(ti), b(ti))
(14.14)
where M(ti, ti−1) denotes a forward integration of the nonlinear ocean model from time ti−1 to ti, and for convenience f(ti) and b(ti) denote the forcing and boundary conditions over the entire interval [ti−1, ti]. This notation is fairly standard in numerical weather prediction and ocean modeling (Ide et al. 1997).
14.3.2 The Incremental Formulation

As discussed in the chapters by Zaron and Brasseur, the aim of data assimilation is to construct an estimate of the ocean circulation by combining prior information from an ocean model with observations. The solution of an ocean model over the interval t = [t0,tN] is uniquely determined by the initial conditions, x(t0), the surface forcing, f(t), and the boundary conditions, b(t), collectively referred to as control variables. Therefore, prior estimates of all control variables are required, and will be denoted xb(t0), fb(t) and bb(t) respectively. Our hypothesis about the uncertainty associated with each prior is embodied in the prior error covariance matrices for the initial conditions, B, the surface forcing, Bf, and boundary conditions, Bb. For convenience, we will denote by D the block diagonal covariance matrix comprised of B, Bf, and Bb, namely D = diag(B, Bf, Bb). Similarly, we will denote by y the vector comprised of all observations in the interval t = [t0,tN], with the associated observation error covariance matrix R. The goal of data assimilation is then to identify z = (x^T(t0), f^T(t0), f^T(t1), …, f^T(tN), b^T(t0), b^T(t1), …, b^T(tN))^T, the so-called control vector, that maximizes the conditional probability p(z|y) ∝ exp(−JNL), where:
JNL(z) = ½ z^T D^-1 z + ½ (φ − y)^T R^-1 (φ − y)    (14.15)
and φ is the vector of the model equivalent of the observations at the observation times and locations. Each element of φ is of the form Hj(x(tj)), where Hj is the operator that transforms or interpolates the state-vector x(tj) to the observation points at time tj. The scalar JNL is called the penalty function or cost function. Since the model M and the observation operators H are in general nonlinear, JNL is a non-quadratic form, so it may not be convex and there may be multiple values of z that minimize JNL. Therefore, it is common to linearize M and H by considering small increments δz to the prior zb, so that z = zb + δz (Courtier et al. 1994), where zb = (xb^T(t0), fb^T(t0), fb^T(t1), …, bb^T(t0), bb^T(t1), …)^T is the vector of priors. The assumption underlying this approximation is that zb does not lie too far from the true state of the ocean, in which case the goal of data assimilation becomes one of finding δz = (δx^T(t0), δf^T(t0), δf^T(t1), …, δb^T(t0), δb^T(t1), …)^T, the vector of control variable increments, that minimizes the linearized form of (14.15), namely:
J(δz) = ½ δz^T D^-1 δz + ½ (Gδz − d)^T R^-1 (Gδz − d)    (14.16)
where the matrix G is the operator that maps the control variable increments to the observation points. The vector d = y − φb is called the innovation vector, where φb is the vector (Hj(xb(tj))). The increment δza that minimizes J corresponds to the maximum value of p(δz|y), and is often called a maximum likelihood estimate. The maximum likelihood ocean state estimate is then given by the analysis or so-called posterior za = zb + δza. In the event that the prior hypotheses embodied in D and R are correct, then the theoretical minimum value of the cost/penalty function is Jmin = Nobs/2, half the total number of observations (Bennett 2002).
The prior circulation estimate xb(t) during the interval t = [t0,tN] is assumed to be a solution of the model Eq. (14.14) forced by the prior fb(t) and subject to the prior boundary conditions bb(t). Under the assumption that the prior is already a good estimate of the circulation, the increments δz will be small compared to zb, in which case a good approximation for x(t) will be the first-order Taylor expansion of (14.14), namely:
δx(ti) = M(ti, ti−1) δu(ti−1)
(14.17)
where M(ti, ti−1) represents the linearization of M in (14.14) about the time evolving prior xb(t), and δu(ti−1) = (δx^T(ti−1), δf^T(ti), δb^T(ti))^T. Equation (14.17) is referred to as the tangent linear model, since solutions δx(ti) are locally tangent to the solution xb(ti) of (14.14). The operator G in (14.16) is a convolution in time of the tangent linear model M and H (the linearization of the observation operator H), and represents solutions of (14.17) evaluated or mapped to the observation points. The increment δza corresponding to the most likely state estimate satisfies the condition ∂J/∂δz = 0, and is given by δza = Kd, where K is called the gain matrix and is given by:

K = (D^-1 + G^T R^-1 G)^-1 G^T R^-1.    (14.18)

Equivalently, the gain matrix can also be written as:

K = D G^T (G D G^T + R)^-1.    (14.19)
In both (14.18) and (14.19), evaluation of the gain matrix involves a matrix inverse. In (14.18), the matrix to be inverted is (D^-1 + G^T R^-1 G), which has the dimension of δz; this will generally be greater than the number of model grid points and may be very large and a challenge to invert. The dimension of δz is Nm = (Nx + Nf + Nb), where Nx is the dimension of δx, and Nf and Nb are the dimensions of δf and δb, respectively, multiplied by the number of model time steps N in the interval t = [t0,tN]. In addition, the expression in (14.18) in parentheses involves D^-1, which may also be difficult to evaluate. Eq. (14.18) is often referred to as the primal form of the gain matrix. Alternatively, the matrix to be inverted in (14.19) is (G D G^T + R), which has a dimension equal to that of y, the number of observations, Nobs. In general, Nobs ≪ Nm, so (14.19) may be more convenient to use than (14.18). Equation (14.19) is often referred to as the dual form of the gain matrix. In practice, both (14.18) and (14.19) are used for ocean data assimilation, and in either case the most likely state of the ocean circulation is given by za = zb + Kd.

Exercise 4: Prove that za = zb + Kd when ∂J/∂δz = 0, and that K is given by (14.18).

Exercise 5: Using the identity (A + B)^-1 = A^-1(A^-1 + B^-1)^-1 B^-1, show that K can also be expressed in the dual form (14.19).

Regardless of whether the primal or dual form of K is used, the matrix inverse in (14.18) and (14.19) is never explicitly evaluated. Instead, an equivalent system of
linear equations is solved, and G^T, the adjoint of the tangent linear model sampled at the observation points G, plays a crucial role in this process. The adjoint of the tangent linear model can be expressed symbolically as:
δu+(ti−1) = M^T(ti−1, ti) δx+(ti)
(14.20)
where the reversed order of the time arguments of M^T compared to M in (14.17) indicates that the integration is backwards in time, as in the shallow water examples of Sect. 14.2.3. The matrix G^T is a time convolution of the adjoint model M^T and H^T, the adjoint of the linearized observation operator. Since identification of the most likely increment Kd is equivalent to identifying the condition ∂J/∂δz = 0, the resulting data assimilation methods are collectively referred to as 4-dimensional variational (4D-Var) data assimilation, where the four dimensions are space and time. As in Sect. 14.2.1, there are two fundamental spaces, the primal space of dimension Nm and the dual space of dimension Nobs. The tangent linear operator G maps a vector from primal space to dual space, while the adjoint of the tangent linear operator G^T maps a vector from dual space to primal space.
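For small toy problems the two forms of the gain matrix can be compared directly. The sketch below, a hypothetical NumPy example that is not part of the original text, builds random D, R and G matrices and confirms that the primal form (14.18) and the dual form (14.19) of K yield the same increment Kd:

```python
import numpy as np

rng = np.random.default_rng(4)
Nm, Nobs = 40, 7                      # control-vector and observation dimensions
G = rng.standard_normal((Nobs, Nm))   # tangent linear model sampled at the obs
d = rng.standard_normal(Nobs)         # innovation vector

# Symmetric positive-definite prior and observation error covariances:
Ld = rng.standard_normal((Nm, Nm)); D = Ld @ Ld.T + Nm * np.eye(Nm)
Lr = rng.standard_normal((Nobs, Nobs)); R = Lr @ Lr.T + Nobs * np.eye(Nobs)

# Primal form (14.18): K d = (D^-1 + G^T R^-1 G)^-1 G^T R^-1 d
Dinv, Rinv = np.linalg.inv(D), np.linalg.inv(R)
dz_primal = np.linalg.solve(Dinv + G.T @ Rinv @ G, G.T @ (Rinv @ d))

# Dual form (14.19): K d = D G^T (G D G^T + R)^-1 d
w = np.linalg.solve(G @ D @ G.T + R, d)    # generating function in dual space
dz_dual = D @ (G.T @ w)

print(np.allclose(dz_primal, dz_dual))     # True: both forms give the same Kd
```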
14.3.3 Primal Space 4D-Var

It is important to realize that while (14.18) and (14.19) are written in terms of matrix products, the matrices involved are never explicitly computed, and all matrix manipulations are performed using models, including models for D and R. This leads to very useful iterative algorithms that can be used to identify the minimum of J(δz) or JNL(z). With this in mind, consider the derivative of J(δz) in (14.16) with respect to δz:

∂J/∂δz = D^-1 δz + G^T R^-1 (Gδz − d).    (14.21)

Equation (14.21) shows that the gradient of the cost/penalty function with respect to δz can be evaluated by (1) running the tangent linear model subject to δz to evaluate Gδz, the tangent linear model solution sampled at the observation points, (2) running the adjoint of the tangent linear model, G^T, forced by R^-1(Gδz − d), the weighted difference between Gδz and the innovation vector d, and (3) adding D^-1δz to the result of (2). Therefore, the cost/penalty function (14.16) can be minimized iteratively as follows:

1. Choose an initial starting value of δz.
2. Run the nonlinear model (14.14) with the prior initial conditions, xb(t0), prior forcing, fb(t), and prior boundary conditions, bb(t), and compute the prior circulation estimate xb(t) for the interval t = [t0,tN].
3. Run the tangent linear model (14.17) linearized about xb(t) from step 2 for the interval t = [t0,tN], compute J(δz) from (14.16), and compute and save R^-1(Gδz − d).
4. Run the adjoint of the tangent linear model (14.20) backwards in time, linearized about xb(t) from step 2 and forced by R^-1(Gδz − d) from step 3, for the interval t = [tN,t0].
5. Add to the adjoint solution from step 4 the vector D^-1δz to yield ∂J/∂δz according to (14.21).
6. Using the cost function gradient ∂J/∂δz from step 5, use a conjugate gradient method to identify a new δz that will reduce the value of J resulting from a subsequent run of the tangent linear model as in step 3.
7. Using the new δz from step 6, repeat steps 3–6 until the minimum of J has been identified.

At the minimum of J, the gradient ∂J/∂δz = 0, and the iterative procedure described by steps 2–7 is equivalent to evaluating Kd using the primal form of the gain matrix in (14.18).
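In practice only the actions of G, G^T, D^-1 and R^-1 on vectors are required, never the matrices themselves. For the quadratic cost (14.16), the conjugate gradient iteration of steps 3–7 amounts to solving (D^-1 + G^T R^-1 G) δz = G^T R^-1 d using only those operator actions. The following minimal sketch is hypothetical NumPy/SciPy code, not from the original text; the small explicit matrices stand in for model integrations:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(5)
Nm, Nobs = 40, 7
G = rng.standard_normal((Nobs, Nm))       # stands in for TLM + obs operator
d = rng.standard_normal(Nobs)             # innovation vector
D = np.eye(Nm)                            # prior error covariance (identity here)
R = 0.1 * np.eye(Nobs)                    # observation error covariance

Dinv, Rinv = np.linalg.inv(D), np.linalg.inv(R)

def hessian_times(dz):
    """Action of (D^-1 + G^T R^-1 G) on dz: one TLM-like product G dz,
    one adjoint-like product G^T (R^-1 G dz), plus D^-1 dz."""
    return Dinv @ dz + G.T @ (Rinv @ (G @ dz))

# Conjugate gradients need only the operator action, never the matrix itself.
H = LinearOperator((Nm, Nm), matvec=hessian_times)
dz_a, info = cg(H, G.T @ (Rinv @ d))

print(info == 0)   # 0 means the conjugate gradient iteration converged
```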
14.3.4 Dual Space 4D-Var

In the dual formulation of the optimal increment Kd, the matrix in parentheses in (14.19) is inverted by introducing an intermediate variable w = (G D G^T + R)^-1 d, where Kd = D G^T w. In practice, w is identified by solving the linear system (G D G^T + R) w = d using an iterative conjugate gradient method to minimize the function I = ½ w^T (G D G^T + R) w − w^T d, and w plays the role of a generating function as discussed in Sect. 14.2.1. The increment Kd is then identified according to D G^T w. The following iteration algorithm is typical of those commonly used:

1. Choose an initial starting value of w.
2. Run the nonlinear model (14.14) with the prior initial conditions, xb(t0), prior forcing, fb(t), and prior boundary conditions, bb(t), and compute the prior circulation estimate xb(t) for the interval t = [t0,tN].
3. Run the adjoint of the tangent linear model (14.20) backwards in time, linearized about xb(t) from step 2 and forced by w, for the interval t = [tN,t0] to yield G^T w.
4. Apply the prior covariance D to the adjoint model solution from step 3 at t = t0 to yield D G^T w.
5. Run the tangent linear model (14.17) linearized about xb(t) from step 2 for the interval t = [t0,tN], using the result of step 4 as the initial condition, to yield G D G^T w.
6. Add Rw to the result of step 5, and evaluate the gradient ∂I/∂w = (G D G^T + R) w − d.
7. Using the gradient ∂I/∂w from step 6, use a conjugate gradient method to identify a new w that will reduce the value of I resulting from a subsequent re-evaluation of steps 3–5.
8. Using the new w from step 7, repeat steps 3–7 until the minimum of I has been identified.
9. Having identified the w = wa that minimizes I, evaluate the increment δza = Kd = D G^T wa by repeating steps 3 and 4 using wa.
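The dual iteration works entirely in observation space, whose dimension is Nobs; each application of the dual-space operator consists of an adjoint run, the prior covariance, a tangent linear run, and the addition of Rw, exactly as in steps 3–6. A compact sketch, again hypothetical NumPy/SciPy code and not from the original text, is:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(6)
Nm, Nobs = 40, 7
G = rng.standard_normal((Nobs, Nm))     # stands in for TLM + obs operator
d = rng.standard_normal(Nobs)           # innovation vector
D = np.eye(Nm)                          # prior error covariance
R = 0.1 * np.eye(Nobs)                  # observation error covariance

def dual_operator(w):
    """Action of (G D G^T + R) on w: adjoint-like product G^T w, apply D,
    tangent-linear-like product G(...), then add R w."""
    return G @ (D @ (G.T @ w)) + R @ w

A = LinearOperator((Nobs, Nobs), matvec=dual_operator)
w_a, info = cg(A, d)                    # minimizes I = 0.5 w^T(GDG^T + R)w - w^T d

dz_a = D @ (G.T @ w_a)                  # increment Kd = D G^T w_a (steps 3-4, 9)
print(info == 0)
```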
Fig. 14.1 A schematic illustrating the action of the adjoint, prior error covariance, and tangent linear operators on a δ function located at the site of a single observation in a steady zonal shear flow represented by the blue arrows. a The initial location of the δ function in dual space. b The action of the adjoint operator G^T, which propagates the δ function backwards in time "upstream" and maps it into primal space. c The prior error covariance matrix smoothes G^T δ in primal space. d The final action of the tangent linear operator G propagates the smoothed field from step (c) forward in time and maps the field back to dual space, indicated by the red open circle located at the observation point
The connection between primal and dual space, and the role that G and G^T play in transforming between one space and the other, is probably best illustrated by considering the operations represented by steps 3, 4 and 5 when applied to a single observation. Consider a δ-function at the time and location of the single observation, in which case steps 3–5 yield G D G^T δ. The sequence of operations that lead to this result is illustrated schematically in Fig. 14.1 for the case of a mean geostrophic flow in the form of a jet in the zonal channel considered in Sect. 14.2.3.2. In this example, advection is the dominant dynamical process, although there will be some wave propagation as well, as illustrated in Fig. 14.1b. Hence the actions of G^T and G primarily advect information upstream and downstream respectively.
14.3.5 Computation of za

Having identified the increment control vector δza using either the primal or dual form of 4D-Var, it then remains to compute the most likely circulation estimate xa(t) = xb(t) + δxa(t) over the interval t = [t0,tN]. Two approaches are generally used: (1) using the non-linear model (14.14) to advance the circulation x(t0) = xb(t0) + δxa(t0) forward in time using fb(t) + δfa(t) and bb(t) + δba(t), or (2) using the tangent linear model (14.17) forced by δfa(t) and subject to δba(t) to advance the increments
δx(t) in time. Approach (1) is generally used in primal space applications of 4D-Var (Courtier et al. 1994), while both (1) and (2) are used in dual space formulations (Da Silva et al. 1995; Egbert et al. 1994). To the author's knowledge there are no primal formulations that use (2). Of course, in the case of a linear model, (1) and (2) are equivalent. In the dual formulation, each element of the innovation vector d is assumed to be a linear combination of the elements of the state-vector increment δx, according to the tangent linear operator G, and it is the generating function w of Sect. 14.3.4 that identifies the activated part of primal space into which d maps. All linear functions of δx are known as the dual of δx, and possess the important property that the tangent linear model equivalent of each element dj of d can be expressed as rj^T δx, the so-called Riesz representation theorem. The time dependent vectors rj are called representer functions, and approach (2) in the dual formulation is equivalent to expressing the increment δxa(t) as Sc, where S is an Nm × Nobs matrix, and each column of S is a representer function r. The vector c is comprised of the weights assigned to each of the Nobs representers (Bennett 2002). In the schematic of Fig. 14.1d, the pattern of colored contours is a representer function, and the open red circle is G D G^T δ, the representer function sampled at the observation point.
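The representer for a single observation can be visualized with a toy model. The sketch below, hypothetical NumPy code that is not part of the original text, uses simple periodic advection as the tangent linear model: the adjoint carries a δ-function at the observation point backwards (upstream), the prior covariance smooths it, and the tangent linear model carries the result forward again, mimicking the sequence in Fig. 14.1:

```python
import numpy as np

n, steps = 100, 25                       # grid points and time steps
M1 = np.roll(np.eye(n), 1, axis=0)       # one step of periodic advection (shift by one cell)
M = np.linalg.matrix_power(M1, steps)    # tangent linear model over the window

# Gaussian prior error covariance with a 5-grid-point length scale:
i = np.arange(n)
dist = np.minimum(np.abs(i[:, None] - i[None, :]), n - np.abs(i[:, None] - i[None, :]))
D = np.exp(-0.5 * (dist / 5.0) ** 2)

jobs = 60                                # index of the single observation (final time)
delta = np.zeros(n); delta[jobs] = 1.0   # delta-function in dual space (H^T applied)

r = M @ (D @ (M.T @ delta))   # representer field at the final time (M D M^T H^T delta)
print(r.argmax())             # peaks at the observation point; r[jobs] equals G D G^T delta
```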
14.3.6 Strong Constraint Versus Weak Constraint

In the formulations of 4D-Var presented so far, it has been implicitly assumed that the most likely circulation estimate xa(t) is an exact solution of the nonlinear model Eq. (14.14). This is tantamount to assuming that the model is perfect and free of errors, and the increments δza(t) that minimize the cost/penalty function J in (14.16) are said to be subject to the "strong constraint" imposed by model dynamics (Sasaki 1970). Of course, all models possess errors and uncertainties, and to account for these it is necessary to augment the control vector of increments δz so that:

δz = (δx^T(t0), δf^T(t0), δf^T(t1), …, δb^T(t0), δb^T(t1), …, η^T(t0), η^T(t1), …)^T    (14.22)

where η(t) represents the corrections for model error at each grid point and time step. The prior for the model error is assumed to be 0, with an associated error covariance matrix Q. In the presence of model error, the development of 4D-Var in primal and dual space of Sects. 14.3.1–14.3.5 is unchanged, except that now the prior error covariance matrix is given by the block diagonal matrix D = diag(B, Bf, Bb, Q). The most likely circulation estimate xa(t) in this case is no longer an exact solution of the non-linear model Eq. (14.14), and the increments δza(t) that minimize the cost/penalty function J(δz) in (14.16) are said to be subject to the "weak constraint" imposed by model dynamics (Sasaki 1970). Under the weak constraint, the dimension, Nm, of primal space increases by Nx × N, and in general the weak constraint 4D-Var problem becomes intractable. However, the dimension of dual space is unchanged by the imposition of the weak constraint, so weak constraint 4D-Var is often performed using the dual formulation.
14.3.7 Inner- and Outer-Loops

Following the incremental formulation of 4D-Var described in Sect. 14.3.2, the most likely circulation is that which minimizes the cost function J(δz) given by (14.16). However, the assumption underlying (14.16) is that the prior is close to the true circulation. This of course is a big assumption, and in all likelihood will not always be true, if ever. Therefore, it is preferable to identify the most likely circulation estimate that minimizes instead the cost function JNL(z) in (14.15). In practice, JNL(z) is a non-quadratic function of z because the state-vector is a solution of the nonlinear model (14.14). As a result, JNL(z) may possess multiple minima, and a global minimum corresponding to the most likely circulation may be difficult to identify. However, a common technique for identifying the minima of (14.15) is to solve a sequence of linear minimizations of the form (14.16), where each member of the sequence is referred to as an "outer-loop." During each outer-loop, J(δz) given by (14.16) is minimized using the iterative algorithms of Sect. 14.3.3 or Sect. 14.3.4, and each iteration is called an "inner-loop." During the first outer-loop, the tangent linear model (14.17) and adjoint model (14.20) are linearized about the prior circulation estimate xb(t) over the interval t = [t0,tN]. At the end of the first outer-loop, the circulation estimate is updated using approach (1) or (2) of Sect. 14.3.5, to yield the state vector, x1(t) = xb(t) + δx1(t), forcing, f1(t) = fb(t) + δf1(t), boundary conditions, b1(t) = bb(t) + δb1(t), and in the weak constraint case the corrections for model error, η1(t), where the subscript "1" refers to the first outer-loop. During the second outer-loop, the tangent linear and adjoint models are linearized about x1(t). Repeating the sequence of outer-loops, it is easy to see that during outer-loop n, the tangent linear and adjoint models are linearized about xn−1(t), where xn−1(t) is the updated circulation estimate forced by fn−1(t), and subject to bn−1(t) and ηn−1(t). It is important to note, however, that the innovation vector never changes, and d = y − (Hj(xb(tj))) is always computed using the prior circulation estimate xb(tj). The method used to compute the circulation estimate xn(t) at the end of each outer-loop varies. The most common approach is to use the non-linear model, as in approach (1) of Sect. 14.3.5, to advance xn(t0) = xb(t0) + Σ_{i=1}^{n} δxi(t0) forward in time. However, in the dual formulation in which the method of representers is used, approach (2) is modified, and the finite-amplitude tangent linear model is used to advance xn(t0) forward in time. The finite-amplitude tangent linear model can be expressed symbolically as:
xn(ti) = M(ti, ti−1)(xn−1(ti−1), fn−1(ti), bn−1(ti)) + Mn−1(ti, ti−1)(gn(ti−1) − gn−1(ti−1))    (14.23)
where gn(ti) = (xn^T(ti), fn^T(ti), bn^T(ti))^T, and Mn−1 denotes the tangent linear model linearized about xn−1(t). During the first outer-loop, the solution of (14.23) reduces to the sum of the prior, xb(t), and the tangent linear model solution (14.17). The representer approach to 4D-Var in dual space, using the algorithm of Sect. 14.3.4 for the inner-loops and updating the circulation estimates in the outer-loops
using (14.23), is equivalent to minimizing JNL(z) in (14.15) by solving the non-linear Euler-Lagrange equations. Full details of this approach are beyond the scope of this article but can be found in Bennett (2002).
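A complete, if highly simplified, illustration of the outer/inner loop idea is given below as hypothetical NumPy code that is not part of the original text. The control variable is the initial condition of a scalar nonlinear model, and each outer loop re-linearizes the model about the latest trajectory while an exact inner step minimizes the quadratic cost (14.16). For simplicity the innovation here is recomputed about the current estimate at each outer loop, which is a common variant; the representer-based approach described above instead keeps d fixed and uses the finite-amplitude tangent linear model (14.23).

```python
import numpy as np

# Toy nonlinear model: x_{k+1} = x_k + dt*(-x_k**3); the control is x(t0).
dt, nsteps = 0.1, 20
obs_times = [5, 10, 15, 20]

def model(x0):
    x = [x0]
    for _ in range(nsteps):
        x.append(x[-1] + dt * (-x[-1] ** 3))
    return np.array(x)

def tangent_factors(traj):
    # dx_{k+1}/dx_k along the trajectory (the tangent linear model)
    return 1.0 + dt * (-3.0 * traj[:-1] ** 2)

x_true, x_b = 1.0, 0.7          # truth and prior initial condition
sig_b, sig_o = 0.3, 0.05        # prior and observation error standard deviations
y = model(x_true)[obs_times]    # perfect observations of the truth

x0 = x_b
for outer in range(5):                                   # outer loops
    traj = model(x0)                                     # nonlinear trajectory
    m = tangent_factors(traj)
    G = np.array([np.prod(m[:k]) for k in obs_times])    # TLM mapped to the obs
    d = y - traj[obs_times]                              # innovation about current estimate
    # Inner step (scalar Gauss-Newton): minimize the quadratic cost (14.16)
    hess = 1.0 / sig_b**2 + np.sum(G**2) / sig_o**2
    rhs = (x_b - x0) / sig_b**2 + np.sum(G * d) / sig_o**2   # minus the cost gradient
    x0 = x0 + rhs / hess                                     # Gauss-Newton update

print(x_b, round(x0, 3), x_true)   # the analysis lies between the prior and the truth,
                                   # drawn strongly toward the observations
```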
14.4 Examples of 4D-Var for the California Current

We will present here some illustrative examples of 4D-Var, using both the primal and dual formulations, applied to the California Current circulation using the Regional Ocean Modeling System (ROMS).
14.4.1 The Regional Ocean Modeling System (ROMS)

ROMS is a primitive equation ocean model that has gained considerable popularity in recent years because of the great flexibility that it affords for modeling different regions of the world ocean. ROMS uses a curvilinear orthogonal coordinate system in the horizontal, and terrain-following coordinates in the vertical, both of which allow for increased resolution in regions where it is most needed (e.g., in regions of complex topography and bathymetry, in shallow water, and near the ocean surface). ROMS is a hydrostatic model, and employs a wide range of user-controlled options for the numerics and physical parameterizations, as well as a range of options for prescribing open boundary conditions. A detailed description of ROMS is beyond the scope of this article, and the reader is referred to Shchepetkin and McWilliams (2005) and Haidvogel et al. (2000) for more information. ROMS is a community ocean model and is freely available from http://www.myroms.org.
14.4.2 ROMS 4D-Var

There are many practical aspects of 4D-Var that will not be discussed here, but which nonetheless are very important. The main features of the ROMS 4D-Var system are listed below, along with references that provide more information. In addition, the ROMS 4D-Var system is described in detail by Moore et al. (2011a, b, c). The main features and attributes of ROMS 4D-Var can be summarized as follows:

1. Incremental formulation (Courtier et al. 1994).
2. Primal and dual formulations (Courtier 1997).
3. The primal formulation is referred to as Incremental 4D-Var, hereafter I4D-Var.
4. Dual formulations following the Physical-space Statistical Analysis System of Da Silva et al. (1995), hereafter 4D-PSAS, and the indirect representer method of Egbert et al. (1994), hereafter R4D-Var.
5. Strong constraint formulation for both primal and dual formulations.
6. Weak constraint for dual formulations only.
7. Inner-loops use a preconditioned Lanczos formulation of the conjugate gradient method (Golub and van Loan 1989; Lorenc 2003; Fisher and Courtier 1995; Tshimanga et al. 2008).
8. Prior covariances D are modeled using a pseudo-heat diffusion equation approach (Derber and Bouttier 1999; Weaver and Courtier 2001), and a multivariate balance operator (Weaver et al. 2005); a minimal illustration of the diffusion idea is sketched after this list.
9. MPI parallel architecture.
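Item 8 refers to the technique of modeling correlation operators by integrating a pseudo-diffusion equation, so that applying the covariance to a vector never requires storing a full matrix. A minimal one-dimensional illustration, hypothetical NumPy code that is not from the original text and that ignores the normalization and boundary details used in practice, is:

```python
import numpy as np

n, nsteps, kappa = 200, 50, 1.0        # grid size, diffusion steps, diffusivity

def correlation_operator(v):
    """Apply a quasi-Gaussian correlation to v by explicit pseudo-diffusion."""
    w = v.copy()
    for _ in range(nsteps):
        # one explicit diffusion step with periodic boundaries (coefficient 0.25 is stable)
        w = w + 0.25 * kappa * (np.roll(w, 1) - 2.0 * w + np.roll(w, -1))
    return w

delta = np.zeros(n); delta[n // 2] = 1.0
row = correlation_operator(delta)        # one column of the implied covariance
print(row.argmax() == n // 2, row.sum()) # bell-shaped response centred on the impulse
```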
14.4.3 The California Current System

The California Current System (CCS) is a prototype eastern boundary current regime that is dominated by mesoscale eddies and characterized by a pronounced seasonal cycle of coastal upwelling and primary productivity. A comprehensive review of the CCS can be found in Hickey (1998). ROMS has been configured for the CCS (hereafter referred to as ROMS-CCS) and spans the domain 116°W–134°W, 30°N–48°N shown in Fig. 14.2.
Fig. 14.2  The ROMS-CCS domain and bathymetry in meters on a grid with 10 km horizontal resolution
Several configurations of ROMS-CCS exist, with horizontal resolutions ranging from 3 to 30 km and with 30–42 levels in the vertical. ROMS-CCS is forced with near-surface air data from the Coupled Ocean Atmosphere Mesoscale Prediction System (COAMPS) of Doyle et al. (2009), which are converted to surface wind stress and surface fluxes of heat and freshwater using the bulk flux formulations of Fairall et al. (1996a, b) and Liu et al. (1979). The model domain has open boundaries at the northern, western, and southern edges, and at these boundaries the temperature, salinity, and velocity fields are prescribed using data from the Estimating the Circulation and Climate of the Ocean (ECCO) project (Wunsch and Heimbach 2007), which is a global ocean data assimilation product at 1° resolution. The radiation conditions of Chapman (1985) and Flather (1976) are also imposed at the open boundaries on the free surface and vertically integrated velocity, respectively. ROMS-CCS produces very good simulations of the CCS circulation, as documented by Veneziani et al. (2009).
14.4.4 ROMS-CCS 4D-Var Configuration

ROMS-CCS has been used extensively for 4D-Var, as described by Broquet et al. (2009a, b, 2011) and Moore et al. (2011b, c). Some example 4D-Var calculations using ROMS-CCS are presented here to illustrate the ideas and concepts introduced in Sects. 14.2 and 14.3. Recall that for data assimilation we require prior estimates for the elements of the control vector z, namely for the initial conditions, xb(t0), the surface forcing, fb(t), and the boundary conditions, bb(t), for the interval t = [t0, tN]. In the case of weak constraint 4D-Var, the prior for the corrections for model error is the null vector 0(t). ROMS-CCS 4D-Var is usually run sequentially using data assimilation windows that span time intervals tN − t0 of typically 4–14 days. Each data assimilation window is referred to as a "cycle," and the initial condition prior, xb(t0), is the best circulation estimate from the end of the previous cycle. The priors for the surface forcing, fb(t), are the surface fluxes derived from the COAMPS air data, and the priors for the boundary conditions, bb(t), are the open boundary data from ECCO. In addition to the prior fields, prior error covariance matrices are also required, namely B for the initial conditions, Bf for the surface forcing, Bb for the boundary conditions, and Q for the model errors in the case of weak constraint 4D-Var. In general, the prior error covariance matrices vary in time, but in practice they are assumed to be time invariant. Following Weaver et al. (2005), each of the prior error covariance matrices is factorized into a block diagonal, univariate correlation matrix, a diagonal matrix of standard deviations, and a multivariate dynamical balance operator. The univariate correlation matrix is modeled as the solution of a pseudo-heat diffusion equation (Weaver and Courtier 2001). This is an involved procedure, and the prescription of each prior error covariance matrix is beyond the scope of this presentation. However, a full description for ROMS-CCS can be found in Broquet et al. (2009). For the present purpose, it suffices to say that the prior error covariance matrices can be prescribed for each of the prior components of the control vector zb.
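The factorization of the prior covariances described above can be illustrated with a one-dimensional Python sketch of the diffusion-based correlation operator of Weaver and Courtier (2001), omitting the multivariate balance operator. The grid, length scale, standard deviations and normalization below are illustrative assumptions, not the ROMS-CCS settings.

```python
import numpy as np

n = 100                          # 1-D grid points
dx = 10.0e3                      # grid spacing (m)
L = 30.0e3                       # target correlation length scale (m)
nsteps = 20                      # pseudo-time diffusion steps
kappa = L**2 / (2.0 * nsteps)    # diffusivity chosen so the kernel width is ~L

def apply_diffusion(v):
    """Apply the (unnormalized) diffusion operator that models the correlation matrix."""
    w = v.copy()
    alpha = kappa / dx**2        # 0.225 here: the explicit scheme remains stable
    for _ in range(nsteps):
        w[1:-1] = w[1:-1] + alpha * (w[2:] - 2.0 * w[1:-1] + w[:-2])
    return w

# Normalization factors so that the implied correlation matrix has unit diagonal
norm = np.array([apply_diffusion(e)[i] for i, e in enumerate(np.eye(n))])
sigma = 0.5 * np.ones(n)         # prior error standard deviations (illustrative)

def apply_B(v):
    """Apply B = S C S, with C the normalized diffusion operator (univariate part only)."""
    w = sigma * v / np.sqrt(norm)
    w = apply_diffusion(w)
    return sigma * w / np.sqrt(norm)

impulse = np.zeros(n); impulse[n // 2] = 1.0
print("peak of one covariance column:", apply_B(impulse).max())
```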
Observations from various platforms are assimilated into ROMS-CCS, including satellite-derived sea surface temperature (SST) and sea surface height (SSH), and sub-surface hydrographic measurements of temperature, T, and salinity, S, collected from shipboard CTDs, XBTs, and drifting Argo profiling floats. The data used were extracted from the rigorously quality controlled global ocean data archive of Ingleby and Huddleston (2007). The observation error covariance matrix, R, is assumed to be time invariant and diagonal (i.e., spatially and temporally uncorrelated observation errors), and the error variances are instrument and variable dependent. The elements of R reflect several sources of error: instrument error, interpolation errors introduced by the observation operator H of Sect. 14.3.2, and errors of representativeness. The largest source of error is typically the error of representativeness, which is a measure of the uncertainty associated with the ability of a single ocean observation to describe the circulation in a single ocean model grid cell. The following observation error standard deviations were used in ROMS-CCS: ~2 cm for SSH; ~0.4°C for SST; ~0.1°C for T; ~0.01 for S.

14.4.4.1 Primal Versus Dual 4D-Var

The first example contrasts the performance of the primal and dual formulations of ROMS 4D-Var in the CCS; results are shown from the ROMS-CCS configuration with a horizontal resolution of 30 km and 30 levels in the vertical. The cost function from two strong constraint calculations using 1 outer-loop and 75 inner-loops is shown in Fig. 14.3 for a 4 day assimilation window spanning the period 1–4 July 2000, during which time there were ~1 × 10^4 observations available, most in the form of satellite data. In one calculation the primal formulation I4D-Var was used, while in the second calculation the dual formulation R4D-Var was employed. Figure 14.3 shows that in both cases the cost function decreases with an increasing number of inner-loop iterations. In the case of I4D-Var, J(z) decreases monotonically and asymptotes to a near constant value after about 40 inner-loops. R4D-Var, on the other hand, exhibits very different behavior in which J(z) undergoes quite large fluctuations until, after about 60 inner-loops, it asymptotes to a similar near constant value as that reached by I4D-Var. The difference in behavior of J(z) in the primal and dual formulations of 4D-Var is associated with the different approach employed to identify the cost function minimum. In the primal case, J(z) is minimized directly as described in Sect. 14.3.3, while in the dual formulation J(z) is minimized indirectly using a generating function w according to Sect. 14.3.4. El Akkraoui and Gauthier (2009) have explored the different behavior of the primal and dual formulations and suggest some effective remedies for increasing the rate of convergence of J(z) in the dual case. Nonetheless, despite the remarkable difference between the primal and dual formulations in the rate of convergence of J(z) to its asymptotic minimum value,
Fig. 14.3  Log10(J(z)) (from (14.16)) versus the number of inner-loops for three 4D-Var assimilation calculations for the period 1–4 July 2000 for the case of 1 outer-loop. The three experiments shown are from a strong constraint calculation in primal space (I4D-Var, red curve), a strong constraint calculation in dual space (R4D-Var, blue curve), and a weak constraint calculation in dual space (R4D-Var, black curve). The dashed curve shows the theoretical minimum value of the cost function, Jmin, that will be reached only if all of the prior hypotheses embodied in D are correct
Fig. 14.3 indicates that, as expected from the equivalence of (14.18) and (14.19), both calculations yield the same solution. A comparison of the most likely CCS circulation estimates from 4D-Var in both cases (not shown) confirms that the primal and dual forms ultimately yield the same circulation after 75 inner-loops. In both cases, however, the minimum value reached by J(z) is larger than Jmin, indicating that either the prior hypotheses D and R are incorrect, or that the global minimum value of JNL(z) has not been reached. The influence of different combinations of the number of outer-loops and the number of inner-loops on the final value of JNL(z) in (14.15) that is reached at the end of the 1–4 July 2000 strong constraint I4D-Var cycle is shown in Table 14.1. Table 14.1 reveals that updating the circulation estimate about which the tangent linear and adjoint models are linearized is clearly beneficial for reducing the value of JNL(z) for the same computational effort (i.e., when the total combined number of outer-loops and inner-loops is the same). In addition, increasing the number of outer-loops yields a value of JNL(z) that is closer to Jmin.
Table 14.1  Values of JNL(z) from (14.15) that are reached at the end of a 4 day strong constraint I4D-Var data assimilation experiment spanning the period 1–4 July 2000 for different combinations of the number of outer-loops, n, and number of inner-loops, m. In each case, the total number of iterations, n × m, is the same. The value of Jmin in each case is 0.52 × 10^4

n × m     1 × 100       2 × 50        3 × 33        4 × 25        10 × 10
JNL(z)    1.18 × 10^4   0.93 × 10^4   0.87 × 10^4   0.71 × 10^4   1.56 × 10^4
Fig. 14.4  Time series of the root mean square (rms) difference between the model and observations for SST and SSH for the period Jan 1999–Dec 2004. The blue curve shows the rms differences averaged over 14 day intervals from the model forced only by the prior forcing and subject to the prior boundary conditions. The red curve shows results from a case where primal I4D-Var was applied sequentially over 14 day assimilation windows, and the rms differences are the average over each 14 day window. The control vector was comprised of only the initial conditions in this case. The green curve shows the rms difference associated with the prior circulation estimate xb(t), and is also a 14 day average. Since the prior circulation is the result of a model run started from the most likely circulation estimate at the end of each I4D-Var window, it can also be thought of as a "forecast" of the circulation for the subsequent 14 day period
An example of the overall performance of 4D-Var when applied sequentially is shown in Fig. 14.4, which shows the root mean square (rms) difference between ROMS-CCS and the observations for SST and SSH. The results from three calculations are shown, corresponding to: (1) no data assimilation, when ROMS-CCS is forced only by the prior forcing fb(t) and subject to the prior boundary conditions bb(t); (2) primal I4D-Var applied sequentially using 14 day assimilation windows; and (3) the prior circulation estimates xb(t), which arise from runs of the model initialized from the most likely circulation estimate at the end of the previous assimilation cycle. The period considered in each case is 1 Jan 1999–31 Dec 2004. Figure 14.4 shows that I4D-Var has a positive impact on the agreement of the model
with the observations during each data assimilation cycle, and on the agreement of the prior circulation estimate (the forecast) with the observations.

14.4.4.2 The Control Vector Influence

In the strong constraint examples of 4D-Var presented in Fig. 14.3, the control vector z was comprised of the model initial conditions, x(t0), the surface forcing, f(t), and the boundary conditions, b(t), for the interval t = [t0, tN]. However, it is instructive to explore the relative impact of each component of z on the cost function. To this end, Fig. 14.5 shows the results of a sequence of 4D-Var calculations in which the control vector was successively augmented with different control variables.
Fig. 14.5  The cost function J(z) in (14.16) after 1 outer-loop and 75 inner-loops from several dual R4D-Var experiments for the 4 day period 1–4 July 2000. Expt. A: no data assimilation; Expt. B: z comprised of the initial condition increments x(t0) only; Expt. C: z comprised of only x(t0) and the surface wind stress components of f(t); Expt. D: z comprised of only x(t0) and the surface wind stress and heat flux components of f(t); Expt. E: z comprised of x(t0) and all components of f(t); Expt. F: z comprised of x(t0), f(t), and b(t); Expt. G: same as F except using the weak constraint as described in Sect. 14.4.4.3. The dashed line indicates the theoretical minimum value of the cost function, Jmin
Figure 14.5 reveals that as the control vector increment z is augmented with more components of the surface forcing and boundary conditions, the minimum value reached by the cost function J(z) progressively decreases. This is because the length of z increases at the same time, meaning that there are more degrees of freedom available to 4D-Var with which to fit the observations. It is important to realize, however, that the 4D-Var process is not linear, so if the order of the calculations in Fig. 14.5 is changed, the same progression of changes in J(z) will not necessarily be obtained. Nonetheless, Fig. 14.5 does suggest that accounting for the errors and uncertainties in the initial conditions has by far the largest impact on the efficacy of the circulation estimate measured in terms of J(z). Significant further reductions in J(z) are afforded when uncertainties in the surface forcing are also accounted for. However, Fig. 14.5 suggests that uncertainties in the boundary conditions have the least impact on J(z).

14.4.4.3 Weak Constraint 4D-Var

Model errors are a significant source of uncertainty in model-derived circulation estimates, and it is important to account for these uncertainties during 4D-Var. However, quantifying and identifying the sources of model error is one of the greatest challenges in ocean data assimilation. In ROMS-CCS, we have made some progress towards identifying some of the most significant impacts of model error, but our understanding is far from complete. A dominant feature of the CCS circulation is the occurrence of coastal upwelling in the spring and summer along much of the California, Oregon and Washington coasts, which is driven by equatorward alongshore winds. ROMS-CCS simulates the seasonal cycle of upwelling very well (Veneziani et al. 2009), although in the absence of data assimilation, comparisons of the model with observations indicate that the model SST is biased compared to the observed temperatures during the peak of the upwelling season. Independent investigations of the quality of the surface wind forcing by Doyle et al. (2009) have revealed that the COAMPS priors, fb(t), used to drive ROMS-CCS agree well with wind observations from satellite scatterometers and ocean buoys. Therefore, it seems likely that the model bias in ROMS-CCS SST is associated with model errors rather than errors in the surface forcing. Further evidence for this hypothesis is provided by the work of Broquet et al. (2009b, 2011) using strong constraint 4D-Var. They found that data assimilation leads to a reduction in strength of the upwelling-favorable alongshore surface winds during spring and summer, and a general degradation of the wind prior when compared to satellite-derived surface wind observations. An important conclusion of their work, therefore, is that in the absence of corrections for model error, data assimilation may yield undesirable and non-physical corrections to potentially all elements of the control vector in an attempt to minimize the cost function. Based on the findings of Broquet et al. (2009b, 2011), attempts are currently underway to account for model error in ROMS-CCS during 4D-Var using the weak constraint. The impact of imposing a weak constraint during R4D-Var on the cost function J(z) is also shown in Fig. 14.3. Based on the known bias of the ROMS-CCS SST in spring and summer, it was assumed that the model error affecting SST
is present mainly in the model temperature equation, and confined to within a short distance of the coast. To account for such errors during 4D-Var, following (14.14) we let $x^t(t_i) = M(t_i, t_{i-1})(x^t(t_{i-1}), f^t(t_i), b^t(t_i)) + \varepsilon_q$ at each model time step, where $x^t$, $f^t$, and $b^t$ are the true state, forcing, and boundary conditions, and $\varepsilon_q$ represents the model error. A model error covariance matrix of the form $Q = E(\varepsilon_q \varepsilon_q^T) = WB$ was assumed, where B is the initial condition prior error covariance matrix and W is a diagonal rescaling matrix. All elements of W are zero except those corresponding to ocean temperature grid points within 300 km of the North American coast. The non-zero elements of W are of the form 0.05 × (1 − d/300), where 0.05 is a variance scaling factor and d is the distance from the coast in km. This choice of scaling factor corresponds to a standard deviation for the prior model error $\varepsilon_q$ in temperature that is ~22% of that of the error in the initial condition prior. Figure 14.3 shows that the weak constraint R4D-Var cost function J(z) asymptotes to a lower value compared to the strong constraint case, and is closer to the theoretical minimum value Jmin. Additional experiments have revealed that the minimum value of J(z) reached during weak constraint R4D-Var decreases as the variance scaling factor is increased. However, for scaling factors larger than 0.05 the resulting circulation estimates possess features that are very unphysical. Nonetheless, these preliminary attempts at accounting for uncertainties due to model error during 4D-Var are encouraging. Figure 14.5 also shows the impact of the weak constraint on J(z) in relation to the sequence of experiments described in Sect. 14.4.4.2, where the control vector is successively augmented. Experiment G is the same as that shown in Fig. 14.3, but only the minimum value reached by J(z) is shown. Figure 14.5 indicates that augmenting the increment control vector z with η(t) leads to a further reduction in J(z), and J(z) is closer still to Jmin.
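A minimal sketch of how the diagonal rescaling matrix W might be constructed is given below. The idealized grid and distance-to-coast field are assumptions for illustration; only the 0.05 scaling factor and the 300 km taper follow the text.

```python
import numpy as np

ny, nx = 50, 40                                  # illustrative grid dimensions
x_km = np.arange(nx) * 10.0                      # column positions (km)
dist_coast = np.tile(x_km[::-1], (ny, 1))        # distance from an idealized eastern coast

scale = 0.05                                     # variance scaling factor from the text
w = np.where(dist_coast <= 300.0,
             scale * (1.0 - dist_coast / 300.0),  # linear taper to zero at 300 km
             0.0)

# W is diagonal, so Q = W B applied to a temperature vector is just an
# element-wise rescaling of B v by the weights w (zero away from the coast).
print("weights range:", w.min(), w.max())
```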
14.4.5 4D-Var Diagnostics

Having successfully computed the most likely circulation estimate using 4D-Var, there are a number of diagnostic calculations that can be performed using the 4D-Var output that are of considerable interest. Two examples are presented here that provide information about the accuracy of the resulting circulation in the form of the posterior error covariance, and information about the impact of each individual observation on different physical aspects of the circulation. The calculations presented below are greatly facilitated by the Lanczos formulation of the conjugate gradient algorithm that is used to minimize J(z) in the ROMS 4D-Var algorithms (Fisher and Courtier 1995). Specifically, the primal form of the gain matrix (14.18) can be expressed as $K = V_p T_p^{-1} V_p^T G^T R^{-1}$, where each column of the matrix $V_p$ is a primal Lanczos vector (Golub and van Loan 1989), and $T_p$ is a tridiagonal matrix. Each inner-loop yields one additional member of the Lanczos sequence, which together form an orthonormal basis, and after m inner-loops the matrix $V_p$ has dimension $N_m \times m$. Similarly, the dual form of the gain matrix (14.19) can be expressed as $K = D G^T V_d T_d^{-1} V_d^T$, where $V_d$ is a matrix of dual Lanczos vectors of dimension $N_{obs} \times m$.
14.4.5.1 Posterior Error

If zt represents the control vector describing the true state of the ocean, then the error covariance matrix of the posterior estimate za = zb + δza is given by Ea = E((za − zt)(za − zt)^T), where E is the expectation operator. It is easy to show that in the case where the observation errors and prior errors are uncorrelated, the posterior error covariance matrix can be expressed as:
\[
E_a = (I - KG)\, D\, (I - KG)^T + K R K^T \tag{14.24}
\]
where K is the gain matrix of Sect. 14.3.2.

Exercise 6: Using the definitions Ea = E((za − zt)(za − zt)^T) and D = E((zb − zt)(zb − zt)^T) and the dual form of K in (14.19), derive Eq. (14.24) for the posterior error covariance matrix, assuming uncorrelated prior errors and observation errors.

Figure 14.6 shows an example of the reduction in the uncertainty in SST and subsurface temperature as a result of data assimilation using R4D-Var. In this case, ROMS-CCS with 10 km horizontal resolution and 42 vertical levels was used. The diagonal elements of D and Ea represent the prior and posterior error variance, respectively, of each control variable. Therefore, the diagonal of the difference matrix ∆ = (Ea − D) represents the change in error variance of the prior due to 4D-Var. Negative values of the diagonal of ∆ represent posterior grid point variables that are more certain than the prior, while zero values indicate that the posterior and prior are equally uncertain. Figure 14.6 shows the diagonal elements of ∆ corresponding to SST and 75 m temperature, and indicates that the posterior upper ocean temperature is more certain than that of the prior over large areas of the ROMS-CCS domain, particularly in the region of high eddy kinetic energy along the California central coast identified in satellite observations by Kelly et al. (1998).
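The posterior-error diagnostic can be illustrated with a small random toy problem: form the gain K, evaluate (14.24), and inspect the diagonal of Ea − D. The matrices below are illustrative assumptions and bear no relation to the ROMS-CCS configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_obs = 20, 8

D = np.diag(0.5 + 0.1 * rng.random(n))           # prior error covariance
R = 0.05 * np.eye(n_obs)                          # observation error covariance
G = rng.standard_normal((n_obs, n))               # linearized observation operator

K = D @ G.T @ np.linalg.inv(G @ D @ G.T + R)      # gain matrix (dual form, 14.19)

I = np.eye(n)
Ea = (I - K @ G) @ D @ (I - K @ G).T + K @ R @ K.T   # posterior covariance (14.24)

delta = np.diag(Ea - D)                           # change in error variance
print("variance reduced at all grid points:", bool(np.all(delta <= 1e-12)))
```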
Fig. 14.6  Posterior variance minus prior variance for SST (left) and 75 m temperature (right) for a typical R4D-Var calculation in ROMS-CCS with 10 km resolution
However, a degree of caution must be exercised when interpreting Ea using the reduced rank approximation of K based on the Lanczos vectors described above. Ea in (14.24) describes the expected posterior error covariance only in the case where the priors D and R are correct, and when 4D-Var is run to complete convergence. In general, neither of these conditions will be satisfied, and since the number of inner-loops m is typically much smaller than Nobs, the Lanczos vectors Vd may span only a small portion of observation space. This results in an overestimate of the posterior error variances, as demonstrated by Moore et al. (2011b).

14.4.5.2 Observation Impacts

It is of considerable interest to know which observations, or which types of observations, exert the greatest influence on particular physical aspects of the most likely circulation estimate derived from 4D-Var. For example, Fig. 14.7 shows a vertical section of the time average of the alongshore velocity over the upper 500 m of the water column along 37°N for the period 1 Jan 1999–31 Dec 2004 from two ROMS-CCS calculations: one in which no data were assimilated, and one in which data were assimilated sequentially every 14 days using strong constraint I4D-Var. In the latter case, the increment control vector is comprised only of x(t0), as described in Broquet et al. (2009a). Clearly the assimilation of observations has a significant impact on the structure of the CCS at this latitude. In order to quantify the impact of each individual observation on the circulation, consider the transport at time t across the section shown in Fig. 14.7 for the posterior circulation of a typical 4D-Var cycle, namely Ia(t) = h^T xa(t), where h is a vector with zero elements everywhere except for those elements that correspond to velocity grid points that contribute to the transport normal to the section. The non-zero elements take the form of the transformation coefficients required to rotate the total velocity into the alongshore direction, and the appropriate area elements for each grid cell in the vertical. However, recall that the circulation estimate xa(t) = xb(t) + δxa(t), in which case Ia(t) = h^T xb(t) + h^T δxa(t) = Ib(t) + h^T δxa(t). Therefore, the difference in transport ∆I(t) = Ia(t) − Ib(t) between the posterior and the prior circulation estimates is given by ∆I(t) = h^T δxa(t). Recall also that to first-order δxa(t) = M(t, t0)Kd, where M(t, t0) is the tangent linear model. The difference in transport between the posterior and the prior can then be expressed as:
\[
\Delta I(t) = h^T M(t, t_0) K d = d^T K^T M^T(t_0, t)\, h \tag{14.25}
\]
where M^T(t0, t) is the adjoint model. For each data assimilation cycle, (14.25) can therefore be used to compute the contribution of each observation, represented by the individual elements of d, to the change ∆I(t) in the prior transport associated with 4D-Var.
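A minimal sketch of the observation-impact calculation (14.25) for a random toy problem is given below. In practice M^T(t0, t)h is obtained from a single adjoint model integration; here small explicit matrices stand in for the tangent linear and adjoint operators, and all values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_obs = 30, 12

D = 0.4 * np.eye(n)                                  # prior error covariance
R = 0.1 * np.eye(n_obs)
G = rng.standard_normal((n_obs, n))                  # linearized obs operator H M
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # stand-in tangent linear model

K = D @ G.T @ np.linalg.inv(G @ D @ G.T + R)         # gain matrix
h = np.zeros(n); h[:5] = 1.0                         # selects "transport" grid points
d = rng.standard_normal(n_obs)                       # innovation vector

impact_per_obs = d * (K.T @ (M.T @ h))               # element i = contribution of obs i
delta_I = impact_per_obs.sum()                       # equals h^T M K d
print("total transport change:", delta_I)
print("check against h^T M K d:", h @ (M @ (K @ d)))
```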
Fig. 14.7  Summer time average (July–September) vertical sections of the alongshore component of velocity from ROMS-CCS during the period 1 Jan 2000–31 Dec 2004 for the case a where there is no data assimilation, and b where data are assimilated sequentially every 14 days using strong constraint I4D-Var. (From Broquet et al. 2009a)
Figure 14.8 shows an example calculation from a 7 day strong constraint R4D-Var calculation for the period 5–11 April 2003. Figure 14.8b indicates that the change in the prior estimate of the transport along 37°N on 11 April as a result of assimilating ~1.5 × 10^4 observations is ~0.75 Sv. A comparison of Fig. 14.8a, b, however, reveals that even though satellite observations account for about 94% of the total number of observations, about 63% of the change in prior transport is associated with the subsurface observations, which account for only 6% of Nobs. In this case, then, the subsurface observations exert a considerable influence on the change in the circulation along 37°N despite the large number of satellite observations.
Fig. 14.8  Panel a shows a histogram of the total number of observations (Nobs) and the number of observations from different platforms during the period 5–11 April 2003. NSST and NSSH represent the number of satellite measurements of SST and SSH respectively, while NT and NS represent the number of subsurface temperature and salinity observations from various sources (e.g., the CalCOFI and GLOBEC/LTOP repeat sample arrays along the California and Oregon coasts respectively, Argo profiling floats, and miscellaneous XBTs). Panel b shows the difference ∆I in transport between the posterior and prior circulation estimates (labeled "Total") on 11 April 2003, and the contribution to ∆I of all the observations from the different observation platforms. The red box highlights all of the subsurface observations and their contribution to ∆I
The impact of each individual observation can be computed for any aspect of the circulation that can be expressed as a differentiable function of the model state-vector x. Other examples for ROMS-CCS are described by Moore et al. (2011c).
14.5 Summary

In this chapter, the important ideas and concepts underpinning the use of adjoint methods for variational data assimilation are summarized and, where possible, illustrative pedagogic examples and examples from a practical, ROMS-based, 4D-Var system are presented. In the interests of brevity, there are, however, many
important practical and technical details that we have shamelessly glossed over here or ignored, but interested readers can pursue them further by consulting the references provided herein. This presentation is also very heavily biased towards the ROMS 4D-Var system, but it should be noted that comprehensive 4D-Var systems have been developed, or are currently under development, for various other ocean models. Notable examples that are well documented in the scientific literature include efforts in France and Europe (Weaver et al. 2003) and in the U.S. (Stammer et al. 2002).
References Bennett AF (2002) Inverse modeling of the ocean and atmosphere. Cambridge University Press, Cambridge Broquet G, Edwards CA, Moore AM, Powell BS, Veneziani M, Doyle JD (2009a) Application of 4D-variational data assimilation to the California current system. Dyn Atmos Oceans 48:69–91 Broquet G, Moore AM, Arango HG, Edwards CA, Powell BS (2009b) Ocean state and surface forcing correction using the ROMS-IS4DVAR data assimilation system. Mercator Ocean Q Newsl 34:5–13 Broquet G, Moore AM, Arango HG, Edwards CA (2011) Corrections to ocean surface forcing in the California current system using 4D-variational data assimilation. Ocean Model 36:116–132 Chapman DC (1985) Numerical treatment of cross-shelf open boundaries in a barotropic coastal ocean model. J Phys Oceanogr 15:1060–1075 Courtier P (1997) Dual formulation of four-dimensional variational assimilation. Q J R Meteorol Soc 123:2449–2461 Courtier P, Thépaut J-N, Hollingsworth A (1994) A strategy for operational implementation of 4DVar using an incremental approach. Q J R Meteorol Soc 120:1367–1388 Da Silva A, Pfaendtner J, Guo J, Sienkiewicz M, Cohn S (1995) Assessing the effects of data selection with DAO’s physical-space statistical analysis system. Proceedings of the second international WMO symposium on assimilation of observations in meteorology and oceanography, WMO.TD 651, Tokyo, 13–17 March 1995, pp€273–278 Derber J, Bouttier F (1999) A reformulation of the background error covariance in the ECMWF global data assimilation system. Tellus 51A:195–221 Doyle JD, Jiang Q, Chao Y, Farrara J (2009) High-resolution atmospheric modeling over the Monterey Bay during AOSN II. Deep Sea Res Part II Top Stud Oceanogr 56:87–99 Egbert GD, Bennett AF, Foreman MCG (1994) TOPEX/POSEIDON tides estimated using a global inverse method. J Geophys Res 99:24821–24852 El Akkraoui A, Gauthier P (2009) Convergence properties of the primal and dual forms of variational data assimilation. Q J R Meoteorol Soc 136:107–115 Fairall CW, Bradley EF, Godfrey JS, Wick GA, Ebson JB, Young GS (1996a) Cool-skin and warm layer effects on the sea surface temperature. J Geophys Res 101:1295–1308 Fairall CW, Bradley EF, Rogers DP, Ebson JB, Young GS (1996b) Bulk parameterization of air-sea fluxes for tropical ocean global atmosphere coupled-ocean atmosphere response experiment. J Geophys Res 101:3747–3764 Fisher M, Courtier P (1995) Estimating the covariance matrices of analysis and forecast error in variational data assimilation. ECMWF Technical Memoranda 220 Flather RA (1976) A tidal model of the northwest European continental shelf. Memoires Soc R Sci Liege 6(10):141–164 Gill AE (1982) Atmosphere-ocean dynamics. Academic Press, San Diego Golub GH, Van Loan CF (1989) Matrix computations. Johns Hopkins University Press, Baltimore
Haidvogel DB, Arango HG, Hedstrom K, Beckmann A, Malanotte-Rizzoli P, Shchepetkin AF (2000) Model evaluation experiments in the north Atlantic basin: simulations in nonlinear terrain-following coordinates. Dyn Atmos Oceans 32:239–281 Hickey BM (1998) Coastal oceanography of western north America from the tip of Baja, California to Vancouver island. The Sea 11:345–393 Ide K, Courtier P, Ghil M, Lorenc AC (1997) Unified notation for data assimilation: operational, sequential and variational. J Meteorol Soc Jpn 75:181–189 Ingleby B, Huddleston M (2007) Quality control of ocean temperature and salinity profiles--historical and real-time data. J Mar Syst 65:158–175 Kelly KA, Beardsley RC, Limeburner R, Brink KH, Paduan JD, Chereskin TK (1998) Variability of the near-surface eddy kinetic energy in California current based on altimetric, drifter, and moored current data. J Geophys Res 103:13067–13083 Lanczos C (1961) Linear differential operators. Van Nostrand, New York Liu WT, Katsaros KB, Businger JA (1979) Bulk parameterization of the air-sea exchange of heat and water vapor including the molecular constraints at the interface. J Atmos Sci 36:1722–1735 Lorenc AC (2003) Modelling of error covariances by 4D-Var data assimilation. Q J R Meteorol Soc 129:3167–3182 Moore AM, Arango HG, Broquet G, Powell BS, Weaver AT, Zavala-Garay J (2011a) The regional ocean modeling system (ROMS) 4-dimensional variational data assimilation systems. Part I: formulation and system overview. Progress in Oceanography (submitted) Moore AM, Arango HG, Broquet G, Edwards C, Veneziani M, Powell B, Foley D, Doyle J, Costa D, Robinson P (2011b) The regional ocean modeling system (ROMS) 4-dimensional variational data assimilation systems. Part II: performance and application to the California current system. Progress in Oceanography (submitted) Moore AM, Arango HG, Broquet G, Edwards C, Veneziani M, Powell B, Foley D, Doyle J, Costa D, Robinson P (2011c) The regional ocean modeling system (ROMS) 4-dimensional variational data assimilation systems. Part III: observation impact and observation sensitivity in the California current system. Progress in Oceanography (submitted) Pedlosky J (1987) Geophysical fluid dynamics. Springer, New York Sasaki Y (1970) Some basic formulations in numerical variational analysis. Mon Weather Rev 98:875–883 Shchepetkin AF, McWilliams JC (2005) The regional oceanic modeling system (ROMS): a split explicit, free-surface, topography-following-coordinate oceanic model. Ocean Model 9:347–404 Stammer D, Wunsch C, Giering R, Eckert C, Heimbach P, Marotzke J, Adcroft A, Hill CN, Marshall J (2002) The global ocean circulation during 1992–1997 estimated from ocean observations and a general circulation model. J Geophys Res. doi:10.1029/2001JC000888 Tshimanga J, Gratton S, Weaver AT, Sartenaer A (2008) Limited-memory preconditioners with application to incremental variational data assimilation. Q J R Meteorol Soc 134:751–769 Veneziani M, Edwards CA, Doyle JD, Foley D (2009) A central California coastal ocean modeling study: 1. Forward model and the influence of realistic versus climatological forcing. J Geophys Res. doi:10.1029/2008JC004774 Weaver AT, Courtier P (2001) Correlation modelling on the sphere using a generalized diffusion equation. Q J R Meteorol Soc 127:1815–1846 Weaver AT, Vialard J, Anderson DLT (2003) Three- and four-dimensional variational assimilation with a general circulation model of the tropical Pacific Ocean. Part I: formulation, internal diagnostics and consistency checks. 
Mon Weather Rev 131:1360–1378 Weaver AT, Deltel C, Machu E, Ricci S, Daget N (2005) A multivariate balance operator for variational ocean data assimilation. Q J R Meteorol Soc 131:3605–3625 Wunsch C, Heimbach P (2007) Practical global ocean state estimation. Physica D 230:197–208
Chapter 15
Ensemble-Based Data Assimilation Methods
An Overview of Recent Developments for Computationally Efficient Applications in Operational Oceanography

Pierre Brasseur

Abstract  Ensemble-based methods have become very popular for data assimilation in numerical models of oceanic or atmospheric flows. Unlike the deterministic Extended Kalman Filter, which explicitly describes the evolution of the best estimate of the system state and the associated error covariance, ensemble filters rely on the stochastic integration of an ensemble of model trajectories that are intermittently updated according to data, using the forecast error covariance represented by the ensemble spread. In this chapter, we present an overview of recent developments of ensemble-based assimilation methods that were motivated by the need for cost-effective algorithms in operational oceanography. We finally discuss a number of standing issues related to temporal assimilation strategies.
15.1 Introduction

Over the past 15 years, ensemble-based methods have become very popular for data assimilation in numerical models of geophysical flows and have matured to the point that observations are now operationally assimilated into ocean circulation models to produce a variety of ocean state estimations in real time and delayed mode (Cummings et al. 2009). Among these methods, the Ensemble Kalman Filter (EnKF, Evensen 1994) is probably the most famous stochastic estimation algorithm, which historically was introduced in oceanography to overcome some of the problems encountered with the deterministic Kalman filter extended to non-linear models. The EnKF was further developed and applied to data assimilation into atmospheric models (e.g., Houtekamer and Mitchell 2001), hydrological models (e.g., Reichle et al. 2002) and even geological reservoir applications (e.g., Chen and Zhang 2006). Considering the intrinsic properties shared by geophysical fluids, such as chaotic temporal evolution and finite predictability (Brasseur et al. 1996), ensemble-based
methods are in essence well designed to address estimation problems with ocean or atmospheric systems involving non-linear mesoscale dynamics. For such systems, ensemble model integrations can be used to compute the probability distribution functions (pdfs) of predictions based on imperfect models and uncertain initial and/or boundary conditions. The development of ensemble methods for data assimilation in oceanography has been motivated by several additional factors, such as:
• the need to reduce the computational complexity of the conventional Kalman filter for applications to numerical problems involving very large dimensions;
• the flexibility of computer implementations and operations with models based on numerical codes in perpetual evolution;
• the possibility to conduct sensitivity experiments at very low cost using algorithmic simplifications, e.g. for testing parameterizations of the different assimilation steps;
• requirements for error estimates on the solutions, which can be easily computed as a by-product of ensemble algorithms.
Many review papers, textbooks (e.g. Evensen 2007) and application papers dealing with ensemble filters have been published since the seminal work by Evensen (1994). The fundamentals of the Kalman filter and the derivation of low-rank implementations have been presented in a previously published book chapter (Brasseur 2006) and will not be repeated here. In the present chapter, we present an overview of recent developments that were stimulated by the necessity for more cost-effective methods in the context of operational oceanography. We will exemplify the utility of explicit integration of the ensemble statistics, for instance to cope with non-Gaussian error distributions, smoother estimation solutions and assimilation of asynchronous observations. The basic concepts of ensemble filtering are briefly recalled in the next section. In Sect. 15.3, we present several approaches that have been proposed in the literature to generate and propagate the error statistics in time. Different formulations of the observational update are then discussed in Sect. 15.4. Issues related to temporal assimilation strategies are addressed in Sect. 15.5. In the conclusion of the chapter, we finally discuss the implications of ensemble techniques for model development strategies in operational oceanography systems.
15.2 Ensemble Data Assimilation Methods Derived From the Kalman Filter

The Kalman Filter (Kalman 1960) provides the basic framework for sequential assimilation methods based on the least squares estimation principle. The Kalman Filter (KF) is a statistical recursive algorithm designed for systems with linear dynamics, in which prior information (i.e., the numerical model prediction) is merged with information from the actual system (i.e., observations) to produce a corrected, posterior
Fig. 15.1  Conceptual filtering process in sequential assimilation. Three forecast-update cycles are shown, with data assimilated at i+1, i+2 and i+3. The vertical bars represent the model forecast and analysis errors (in red and blue) and the observation errors (in green)
system estimate. An extended version of the KF has been developed for non-linear models, known as the Extended Kalman Filter (EKF, Jazwinski 1970). The implementation of the KF follows a sequence of forecast-update cycles that involve two main steps: the forecast step for transitioning the model state and the associated error covariance between two successive times i and i+1, and the observational update for correcting the forecast using observations available at time i+1 (Fig. 15.1). A somewhat heuristic derivation of the KF and EKF equations is presented by Brasseur (2006). During the forecast step, the uncertainty of the system estimate is expected to grow due to initial errors and imperfect model dynamics, while the uncertainty is reduced whenever measurements are assimilated. As only data from the past influence the best estimate at a given time, the assimilation process belongs to the class of filtering methods. In spite of its conceptual simplicity, the application of the EKF to non-linear ocean circulation models is often impossible, even for problems of modest size. One of the main issues is related to the explicit computation of the forecast error covariance in the baseline algorithm, which can only be achieved at the price of n model integrations (where n designates the dimension of the state vector of the discretized system). Since n is typically ~10^7–10^9 in operational applications, a brute-force implementation is simply impossible and alternative formulations must be sought. In his initial work on the Ensemble Kalman Filter, Evensen (1994) showed that Monte Carlo methods could be used as an alternative to the approximate error covariance evolution equation used in the EKF to compute forecast error estimates at a significantly lower computational cost. Unlike the deterministic EKF, which explicitly describes the evolution of the best estimate of the system state and the associated error covariance, the EnKF relies on the stochastic integration of an ensemble of model states followed by observational updates using the forecast error covariance implicitly represented by the ensemble spread. The size of the ensemble (denoted m hereafter), and thus the CPU requirements to run the EnKF, depends on the actual shape of the probability distributions that need to be sampled, but the
literature suggests that an ensemble of size 50–100 is often adequate for real ocean systems. The accuracy of the state estimates as a function of ensemble size remains, however, an important research question that will be discussed further in the following sections.
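For reference, a minimal Python sketch of one forecast-update cycle of the linear Kalman filter of Fig. 15.1 is given below, with illustrative matrices; it is not drawn from any operational system.

```python
import numpy as np

n, p = 4, 2

M = 0.95 * np.eye(n)                  # linear model transition matrix
Q = 0.01 * np.eye(n)                  # model error covariance
H = np.eye(p, n)                      # observation operator
R = 0.05 * np.eye(p)                  # observation error covariance

x = np.zeros(n)                       # analysis at time i
P = 0.5 * np.eye(n)                   # analysis error covariance at time i
y = np.array([0.3, -0.2])             # observations at time i+1

# Forecast step: uncertainty grows with model error
x_f = M @ x
P_f = M @ P @ M.T + Q

# Observational update at time i+1: uncertainty shrinks
K = P_f @ H.T @ np.linalg.inv(H @ P_f @ H.T + R)
x_a = x_f + K @ (y - H @ x_f)
P_a = (np.eye(n) - K @ H) @ P_f

print("forecast vs analysis trace:", np.trace(P_f), np.trace(P_a))
```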
15.3 Ensemble Generation and Forecast

Several categories of ensemble-based assimilation techniques can be identified, which essentially differ in the strategy adopted to generate the initial ensemble members describing the uncertainty of the estimated state, and to propagate this uncertainty over the assimilation time window. In the native formulation of the EnKF, the ensemble members are generated as purely random samples of the prior probability distribution of the system state, while in other schemes, such as the Singular Evolutive Extended Kalman (SEEK) filter introduced by Pham et al. (1998), the uncertainty is described in terms of "well chosen" perturbations of a given reference trajectory. The basic principle of the SEEK filter is to make corrections only in the directions for which the error is amplified or not sufficiently attenuated by the system dynamics. Instead of ensembles of model realizations, the SEEK filter thus considers perturbation ensembles that span and track the scales and processes where the dominant errors occur. This motivates the representation of uncertainty using Empirical Orthogonal Functions (EOFs) of the system variability (which are often approximated by EOFs of the model variability) to characterize and predict the largest uncertainties. As illustrated in detail by Nerger et al. (2005), an EOF-based approach generally requires a smaller ensemble size for the same performance compared to strategies based on purely random sampling. Based on this work and the reformulated SEEK filter of Pham (2001), a revised sampling strategy for the EnKF was proposed by Evensen (2004). The SEEK approach is closely related to the concept of Error Sub-space Statistical Estimation (ESSE) introduced by Lermusiaux and Robinson (1999). In applications of ESSE methodologies, the error modes are obtained by a singular value decomposition of the error covariance matrix, which can be specified by means of analytical functions. Other methods have been proposed that utilize singular, Lyapunov or breeding vectors of the transition matrix (e.g., Miller and Erhet 2002; Hamill et al. 2003). The leading Lyapunov vectors are computed by applying the linear tangent model to perturbations of the non-linear model trajectory, whereas the bred vectors are a generalization of the Lyapunov vectors computed with the non-linear model. Figure 15.2 illustrates schematically the convergence of an ensemble of initial perturbations toward the leading Lyapunov vectors.
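A minimal sketch of EOF-based ensemble generation in the spirit of the SEEK filter is given below: the leading EOFs of a set of toy model states are used to perturb a reference state. The snapshot archive, the number of retained modes and the scaling of the perturbations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, n_snapshots, r = 200, 60, 10       # state size, archive size, retained modes

snapshots = rng.standard_normal((n, n_snapshots)).cumsum(axis=0)  # toy "model states"
mean_state = snapshots.mean(axis=1, keepdims=True)
anomalies = (snapshots - mean_state) / np.sqrt(n_snapshots - 1)

# EOFs = left singular vectors of the anomaly matrix
U, s, _ = np.linalg.svd(anomalies, full_matrices=False)
eofs, variances = U[:, :r], s[:r] ** 2

# Build m ensemble members as reference state + random combinations of leading EOFs
m = 20
x_ref = mean_state[:, 0]
coeffs = rng.standard_normal((r, m)) * np.sqrt(variances)[:, None]
ensemble = x_ref[:, None] + eofs @ coeffs
print("ensemble shape:", ensemble.shape)
```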
Fig. 15.2  Schematic representation of the evolution of an ensemble of initially random perturbations converging toward the leading Lyapunov vectors. (Redrawn from Kalnay 2003)
A common denominator of the EnKF, the SEEK, the ESSE and similar methods is the rank-deficient property of the error covariance matrix associated with the ensemble spread since, in practice, the size of the ensemble is much smaller than the dimension of the system space. The concept of reduced rank was first introduced in the KF framework by Todling and Cohn (1994). The reformulation of the analysis and forecast equations of the KF in the presence of rank-deficient error covariance matrices is described in Brasseur (2006). In the EKF, the time evolution of the error statistics over the assimilation window is computed using the tangent linear model to update the error covariance matrix. The same approach was proposed in the native formulation of the SEEK filter (Pham et al. 1998). In order to better capture non-linear evolutions of the error field, a finite difference solution of the forecast error equation can be substituted, as proposed by Brasseur et al. (1999), avoiding in this way the development and implementation of a tangent linear operator in the filter. A similar strategy is used to evolve the subspace in the ESSE methodology (Lermusiaux 2001) as well as in the SEIK variant of the SEEK filter (Pham 2001): in these schemes, a central forecast and an ensemble of stochastic ocean model integrations are carried out starting from perturbed states. This technique is eventually very close to the EnKF, in which the integration of all ensemble members is performed using the nonlinear model without any obligation to identify a central forecast. The computational resources needed to evolve the error statistics with these methods are proportional to the rank r of the error subspace, or alternatively to the size m of the ensemble. In either case, this requires at least an equivalent of several tens of model integrations, which are not always affordable in operational ocean systems. It is therefore natural to fall back on simplified versions of the assimilation schemes where the error statistics are not explicitly evolved using the model dynamics: examples are the Ensemble Optimal Interpolation (EnOI) presented in Evensen (2003), or the SEEK filter with fixed error modes (Brasseur and Verron 2006). An operational implementation of EnOI in the BlueLINK operational forecasting system is described by Oke et al. (2008), whereas in the Mercator system the assimilation scheme is a SEEK filter with a stationary EOF basis (Brasseur et al. 2005).
These methods still allow the computation of multivariate analyses and are numerically extremely efficient, but a larger ensemble may be required to ensure that it spans a large enough space to properly capture the relevant analysis increments. Further, the sub-optimal analysis solutions are provided without consistent error estimates. We will identify in the next sections additional disadvantages of sequential assimilation methods based on stationary error statistics.
15.4 Observational Update

In this section, an overview is given of different approaches to performing the observational update, or analysis step, of the system's state. The Kalman filter updates the prediction with new measurements using a weighted combination of the model forecast and measurement values. The computation of the weights relies on an optimality principle that involves the covariance matrices of the forecast and observation errors. In the so-called "stochastic" implementations of the EnKF, the repetitive update of all forecast members is achieved using perturbed observations, to avoid the systematic underestimation of the analysis covariance that occurs when the same data and the same gain are used in the ensemble of analysis equations (Burgers et al. 1998). The perturbations are drawn from a distribution with zero mean and covariance equal to the measurement error covariance matrix. An alternative technique is the so-called "deterministic" or "square root" analysis scheme, which consists of a single analysis based on the ensemble mean, and where the update of the perturbations is obtained from the square root of the Kalman filter analysis error covariance (Verlaan and Heemink 1997; Tippett et al. 2003). When the error covariance matrices are in square root form, the inverse operations required to compute the analysis increments are performed in the reduced space rather than in the observation space. The Kalman filter analysis scheme is thus transformed to become linear in the number of observations y (instead of being originally proportional to the cube of y), on the condition that the inverse of the observation error covariance matrix is available. This condition is a severe limitation to the use of square root algorithms in operational settings, which often leads to assuming uncorrelated observation errors for the sake of numerical efficiency. In a recent paper, Brankart et al. (2009) show that the linearity of the square root algorithm in y can be preserved for a very broad class of non-diagonal observation error covariance matrices. This can be achieved by augmenting the observation vector with discrete measurement gradients. The proposed technique is shown to be beneficial with regard to the quality of the observational updates and the accuracy of the associated error estimates (Fig. 15.3). It can also be combined with adaptive techniques (e.g., Brankart et al. 2010a) for tuning key parameters of the prescribed error statistics. Based on these results, the use of a diagonal observation error in the square root formulation of the analysis step should no longer be an obstacle for operational implementations with huge observation sets to assimilate.
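A minimal sketch of the stochastic EnKF observational update with perturbed observations (Burgers et al. 1998) is given below, for a toy ensemble and observation network that are purely illustrative: the gain is built from the ensemble anomalies, and every member is updated with its own perturbed observation vector.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, p = 50, 30, 10                  # state size, ensemble size, number of obs

ensemble = rng.standard_normal((n, m)).cumsum(axis=0)    # toy forecast ensemble
obs_idx = np.arange(0, n, n // p)                        # observe every (n/p)-th point
H = np.zeros((p, n)); H[np.arange(p), obs_idx] = 1.0
R = 0.2 * np.eye(p)
y = H @ ensemble.mean(axis=1) + 0.3                      # synthetic observations

# Ensemble-based forecast error covariance in observation space
X = (ensemble - ensemble.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
HX = H @ X
K = X @ HX.T @ np.linalg.inv(HX @ HX.T + R)              # ensemble Kalman gain

# Update every member with its own perturbed observation vector
y_perturbed = y[:, None] + rng.multivariate_normal(np.zeros(p), R, size=m).T
analysis = ensemble + K @ (y_perturbed - H @ ensemble)
print("mean spread before/after:", ensemble.std(axis=1).mean(), analysis.std(axis=1).mean())
```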
Fig. 15.3  Results of observational updates performed in a 1/4° NEMO simulation of the North Brazil Current system. The first line shows a snapshot of the circulation on December 14th: sea-surface height (SSH) in meters (left column), the SSH gradient in meters per grid point (middle), and the sea-surface velocity in m s−1 (right column). The error standard deviation as measured by the ensemble of differences with respect to the true state is shown in the second line, while the same quantities as estimated by the square-root scheme with a parameterization of correlated observation errors are shown in the third line. The bottom line shows the results obtained when the observation error is parameterized as a diagonal matrix (i.e. neglecting correlated observation errors), which significantly differ from the previous error estimates. (Redrawn from Brankart et al. (2009))
In typical operational forecasting applications, the dimension of the reduced space (e.g., implied by the ensemble size m) is much smaller than the dimension of the state vector n and also smaller than the number of positive Lyapunov exponents of the system. Hence, not all unstable modes of the dynamical system are controllable, and this may lead to unreliable forecasts. A related issue is the lack of accurate representation of weakly correlated variables at long distance when using ensembles with only ~100 members. Different localization techniques have been introduced to overcome this problem, such as the application of a Schur product to modify the ensemble covariance using local support correlation functions (e.g., Houtekamer and Mitchell 2001), the computation of a local analysis at each grid point using only nearby measurements (Evensen 2003), or the approximate local error parameterization
Fig. 15.4  Representers for one observation (identified by the blue dot) of sea-surface height in a relatively quiet region of an idealised mid-latitude, double-gyre model: (top left panel) using a 5000-member ensemble covariance without localization, (top central panel) using a 200-member ensemble covariance without localization, and (top right panel) using a 20-member ensemble covariance with localization. The bottom panels show the corresponding error standard deviation estimated by the square root filter (from Brankart et al. 2010b)
proposed by Brankart et al. (2010b), which preserves the computational complexity of square root algorithms. The localization process can be interpreted as a means of increasing the rank of the error covariance matrix without increasing the number of members in the ensemble (i.e. the cost of the forecast error computation). In the example of Fig. 15.4, it is shown that the local parameterization proposed by Brankart et al. (2010b) can efficiently remove the spurious covariances associated with remote observations and improve the accuracy of the analysis error estimated by the filter. This topic is still the subject of active research aimed at improving the efficiency of ensemble-based assimilation techniques in realistic oceanographic applications.
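A minimal sketch of Schur-product localization is given below. The compactly supported taper used here is a simple cosine stand-in chosen for brevity, not the Gaspari-Cohn function commonly used in practice, and the 1-D grid, ensemble and length scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 80, 15                                   # grid points, ensemble members
x = np.arange(n, dtype=float)                   # 1-D grid coordinate

ensemble = rng.standard_normal((n, m)).cumsum(axis=0)
X = (ensemble - ensemble.mean(axis=1, keepdims=True)) / np.sqrt(m - 1)
P_raw = X @ X.T                                 # rank-deficient ensemble covariance

def taper(dist, L):
    """Compactly supported cosine taper that vanishes beyond 2L (a simple
    stand-in for the Gaspari-Cohn correlation function)."""
    r = np.clip(dist / (2.0 * L), 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * r))

dist = np.abs(x[:, None] - x[None, :])
C_loc = taper(dist, L=10.0)

P_localized = P_raw * C_loc                     # Schur product removes remote noise
print("far-field covariance, raw vs localized:",
      abs(P_raw[0, -1]), abs(P_localized[0, -1]))
```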
Until now, the default assumption of KF-based algorithms has been that the background error pdfs are normal distributions. This is a convenient choice because normal pdfs are fully determined by only two parameters (the mean and the standard deviation), and remain Gaussian after linear operations. In addition, the least squares solution obtained by the linear update (as discussed above) corresponds to the maximum likelihood estimate if the errors have a normal distribution. In many applications, however, the Gaussian assumption is a very crude approximation of the actual error distributions, and a more general framework compliant with the concept of non-linear analysis is required. A simple example is the estimation of tracer concentrations, which are positive-definite quantities and therefore cannot be treated as Gaussian variables (as positive-definiteness is not preserved in a linear analysis scheme). An adaptation of the EnKF to account for non-Gaussian errors was first proposed by Bertino et al. (2003), who introduced the concept of anamorphosis to transform the set of original state variables in the physical space into modified variables that are hopefully more suitable for linear updates. This concept has been further explored and applied to assimilate synthetic data in a coupled physical-ecosystem model of the Arctic Ocean (Simon and Bertino 2009). An even more general transformation method has been proposed recently by Béal et al. (2010), still in the context of data assimilation into coupled physical-biogeochemical models. The underlying idea is to take full benefit of the ensemble forecast statistics and compute transformation functions locally by mapping the ensemble percentiles of the distributions of each state variable onto the Gaussian percentiles. The results of idealized experiments indicate that this anamorphosis method can significantly improve the estimation accuracy with respect to classical computations based on the Gaussian assumption, opening new prospects e.g. for assimilation into coupled models (physics-biology or ocean-ice-atmosphere). A key aspect of the anamorphosis approach is that it does not induce any significant extra cost compared to the linear analysis scheme. However, the full benefit of adaptive anamorphosis as proposed by Béal et al. (2010) is obtained when the error statistics are explicitly propagated using the model dynamics.
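A minimal sketch of the percentile-mapping idea is given below for a single skewed, positive variable: the empirical ensemble percentiles are mapped onto Gaussian percentiles and back by interpolation. The lognormal ensemble and the percentile grid are illustrative assumptions, the tails are simply clamped to the outermost percentiles, and the linear analysis step itself is omitted.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
ensemble = rng.lognormal(mean=0.0, sigma=0.7, size=200)   # skewed, positive "tracer"

# Forward transform: empirical quantiles -> standard Gaussian quantiles
probs = np.linspace(0.01, 0.99, 99)
emp_q = np.quantile(ensemble, probs)            # empirical percentiles
gauss_q = norm.ppf(probs)                       # Gaussian percentiles

def anamorphosis(v):
    return np.interp(v, emp_q, gauss_q)

def inverse_anamorphosis(w):
    return np.interp(w, gauss_q, emp_q)

transformed = anamorphosis(ensemble)            # approximately N(0, 1)
# ... a linear (Gaussian) analysis would be applied to `transformed` here ...
back = inverse_anamorphosis(transformed)

skew = lambda v: float(((v - v.mean())**3).mean() / v.std()**3)
print("skewness before/after transform:", skew(ensemble), skew(transformed))
print("positivity preserved after back-transform:", bool(np.all(back > 0)))
```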
15.5 Temporal Strategies

In the conceptual assimilation problem described by Fig. 15.1, two major simplifications have been considered: (i) the observations are available at discrete time intervals, and (ii) the analysis is performed at the exact time of the measurements. In real-world oceanographic and atmospheric problems, the situation is quite different since the flow of observations can be considered as almost continuous in time (as, for instance, the sampling of along-track altimeter data). It would not be appropriate to interrupt the model forecast every time a new piece of data becomes available, because very frequent model updates based on too few data would be too expensive and detrimental to the numerical time integration of the model. In practice, assimilation windows in operational systems are 3–7 days for mesoscale ocean current predictions, and 10–30 days for the initialization of coupled ocean-atmosphere seasonal prediction systems. Hence, intermittent assimilation methods necessarily involve approximations. For example, the FGAT (First Guess at Appropriate Time) method initially introduced in meteorology can be used to evaluate the innovation vector more correctly: instead of computing the difference between the time-distributed data set and the model forecast at the analysis time, the innovation is evaluated "on the fly" by accumulating the differences between each piece of observation and the corresponding element of the model forecast at the measurement time. This approach has been tested with 3D-VAR assimilation systems (Weaver et al. 2003).
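The FGAT idea can be sketched in a few lines; the toy trajectory, observation times and values below are illustrative assumptions, not an operational configuration.

```python
# Hedged sketch of FGAT: innovations are evaluated against the first-guess trajectory
# at each observation's own time, while the correction is still computed once per window.
import numpy as np

t_model = np.arange(0.0, 7.0, 0.25)          # model output times within a 7-day window (days)
x_fg = np.sin(0.5 * t_model)                 # first-guess trajectory of one observed quantity

t_obs = np.array([0.8, 2.3, 4.1, 6.6])       # asynoptic observation times
y_obs = np.array([0.45, 0.95, 0.90, 0.05])   # corresponding measurements

def obs_operator(x):
    return x                                 # identity H for this scalar example

# FGAT ("on the fly") innovations: compare each datum with the first guess at its own time
x_fg_at_obs = np.interp(t_obs, t_model, x_fg)
fgat_innovations = y_obs - obs_operator(x_fg_at_obs)

# Naive intermittent choice: compare all data with the single analysis-time state
naive_innovations = y_obs - obs_operator(x_fg[-1])

print(fgat_innovations)
print(naive_innovations)
```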
A rigorous way of taking into account the temporal distribution of the data is offered by "4D" assimilation methods. 4D-VAR or ensemble methods indeed have the capacity to assimilate asynoptic data at their exact observation time, within an assimilation window. In the Ensemble Kalman Smoother (EnKS) introduced by Evensen and van Leeuwen (2000), it is possible to assimilate non-synoptic measurements by exploiting the time correlations in the ensemble: the EnKF solution is used as the first guess for the analysis, which is propagated backward in time by using the ensemble covariances. This so-called 4D-EnKF formulation was further discussed by Hunt et al. (2004), and more recently revisited by Sakov et al. (2010). In line with these works, Cosme et al. (2010) have developed a reduced-rank, square-root smoother derived from the SEEK formulation. The CPU requirements of the EnKF, the EnKS, the SEEK filter and the SEEK smoother are similar when m = r. Compared to 4D-VAR, however, no backward integrations in time and no adjoint operators are needed. The storage requirements of smoothers may nevertheless become huge for long time intervals with many analysis times, since the ensemble trajectory has to be stored at all observation instants.
A second consequence of intermittency is the discontinuity of the forecast/analysis estimates, which is recognized as a major drawback of both variational and sequential assimilation methods that require repeated assimilation cycles. Two related problems, shocks to the model and data rejection, arise with intermittent corrections. Observations assimilated into models may introduce transient waves excited by the impulsive insertion. These waves are often the result of imperfections in the corrected state associated with physically unbalanced error covariances. In order to incorporate analysis increments in a more gradual manner, an algorithm based on Incremental Analysis Updates (IAU) was proposed by Bloom et al. (1996), which combines aspects of intermittent and continuous assimilation schemes. Using the classical KF equations, the IAU algorithm first computes the analysis correction; this correction is then distributed (uniformly or not) over the assimilation window and inserted gradually into the model evolution (Ourmières et al. 2006). The state obtained at the end of the assimilation window can be used as initial conditions for the next assimilation cycle, leading to time-continuous filtered trajectories (a schematic illustration is given at the end of this section). The IAU temporal strategy can be complemented by the FGAT scheme, which computes the innovation "on the fly". More rigorous techniques that combine localization and the processing of observations arriving continuously in time are the subject of new developments compliant with large numerical systems (e.g., Bergemann and Reich 2010).
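As announced above, the following toy example contrasts an abrupt intermittent insertion with an IAU-type gradual insertion of the same analysis increment. It is a minimal sketch: the one-variable decay model, step sizes and increment value are invented for illustration.

```python
# Hedged sketch of the Incremental Analysis Update (IAU) of Bloom et al. (1996):
# the analysis increment is added as a constant extra tendency spread over the window
# instead of being inserted at once, which avoids an initial shock.
import numpy as np

def model_step(x, dt):
    return x - 1.0e-6 * x * dt          # placeholder "ocean model": slow exponential decay

dt = 3600.0                              # one-hour time step (seconds)
n_steps = 24 * 7                         # a 7-day assimilation window
x0 = 10.0                                # state at the start of the window
increment = 1.5                          # analysis correction delivered by the KF update

# Intermittent insertion: the full increment is added at the first time step (abrupt jump)
x_intermittent = x0 + increment
for _ in range(n_steps):
    x_intermittent = model_step(x_intermittent, dt)

# IAU: increment / n_steps is added at every time step (gradual, shock-free insertion)
x_iau = x0
for _ in range(n_steps):
    x_iau = model_step(x_iau, dt) + increment / n_steps

print(x_intermittent, x_iau)             # both end states have absorbed the correction
```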
15.6 Conclusions

Outstanding advances have been accomplished since the first applications of ensemble-based methods to assimilate data into oceanic or atmospheric models. A broad variety of reduced-rank Kalman filters exists today, developed with the aim of reducing the computational complexity of the native algorithms while making possible the assimilation of complex and heterogeneous data sets into non-linear models.
In this chapter, we have shown that ensemble-based methods are becoming very competitive with respect to 4D-VAR, while their flexibility for implementation into numerical codes that are in perpetual evolution remains a major asset. In addition, ensemble methods provide an elegant and powerful statistical methodology to quantify uncertainty, as requested by users of operational oceanography products. However, most operational systems in place today are still based on sub-optimal estimation methods (e.g. EnOI) that do not explicitly propagate the error statistics. The transition toward 4D methods is challenging in a context where the increase of operational model resolution in space and time is strongly encouraged by user requirements, scientific arguments (e.g. the role of submesoscale processes) and the refined resolution of the data sets available today. For applications such as the production of multi-decadal reanalyses, the requirement for dynamical consistency over periods of time larger than the predictability time scales of the simulated flow remains an issue at the conceptual level. This is especially true for eddy-resolving ocean models, for which 4D "weak-constraint" assimilation methods (i.e. assuming that the model equations are not strictly verified) represent the relevant framework to properly reconcile imperfect models with imperfect data.
Acknowledgements  I thank the organizers of this GODAE Summer School on Operational Oceanography held in Perth, Australia, for inviting me to give these lectures in such a wonderful place. This work has been partly supported by the MyOcean project of the European Commission under Grant Agreement FP7-SPACE-2007-1-CT-218812-MYOCEAN.
References

Béal D, Brasseur P, Brankart J-M, Ourmières Y, Verron J (2010) Characterization of mixing errors in a coupled physical biogeochemical model of the North Atlantic: implications for nonlinear estimation using Gaussian anamorphosis. Ocean Sci 6:247–262
Bergemann K, Reich S (2010) A localization technique for ensemble Kalman filters. Quart J R Meteor Soc 136:701–707
Bertino L, Evensen G, Wackernagel H (2003) Sequential data assimilation techniques in oceanography. Int Stat Rev 71:223–241
Bloom SC, Takacs LL, Da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Wea Rev 124:1256–1271
Brankart J-M, Ubelmann C, Testut C-E, Cosme E, Brasseur P, Verron J (2009) Efficient parameterization of the observation error covariance matrix for square root or ensemble Kalman filters: application to ocean altimetry. Mon Wea Rev 137:1908–1927. doi:10.1175/2008MWR2693.1
Brankart J-M, Cosme E, Testut C-E, Brasseur P, Verron J (2010a) Efficient adaptive error parameterizations for square root or ensemble Kalman filters: application to the control of ocean mesoscale signals. Mon Wea Rev 138:932–950. doi:10.1175/2009MWR3085.1
Brankart J-M, Cosme E, Testut C-E, Brasseur P, Verron J (2010b) Efficient local error parameterizations for square root or ensemble Kalman filters: application to a basin-scale ocean turbulent flow. Mon Wea Rev (in revision)
Brasseur P (2006) Ocean data assimilation using sequential methods based on the Kalman filter. In: Chassignet E, Verron J (eds) Ocean weather forecasting: an integrated view of oceanography. Springer, Netherlands, pp 271–316
Brasseur P, Verron J (2006) The SEEK filter method for data assimilation in oceanography: a synthesis. Ocean Dyn 56:650–661. doi:10.1007/s10236-006-0080-3
Brasseur P, Blayo E, Verron J (1996) Predictability experiments in the North Atlantic Ocean: outcome of a QG model with assimilation of TOPEX/Poseidon altimeter data. J Geophys Res 101(C6):14161–14174
Brasseur P, Ballabrera J, Verron J (1999) Assimilation of altimetric observations in a primitive equation model of the Gulf Stream using a singular evolutive extended Kalman filter. J Mar Syst 22(4):269–294
Brasseur P, Bahurel P, Bertino L, Birol F, Brankart J-M, Ferry N, Losa S, Rémy E, Schröter J, Skachko S, Testut C-E, Tranchant B, van Leeuwen PJ, Verron J (2005) Data assimilation for marine monitoring and prediction: the MERCATOR operational assimilation systems and the MERSEA developments. Quart J R Meteor Soc 131:3561–3582
Burgers G, van Leeuwen P, Evensen G (1998) Analysis scheme in the ensemble Kalman filter. Mon Wea Rev 126:1719–1724
Chen Y, Zhang D (2006) Data assimilation for transient flow in geologic formations via ensemble Kalman filter. Adv Water Resour 29(8):1107–1122
Cosme E, Brankart J-M, Verron J, Brasseur P, Krysta M (2010) Implementation of a reduced-rank, square root smoother for high-resolution ocean data assimilation. Ocean Model 33:87–100. doi:10.1016/j.ocemod.2009.12.004
Cummings J, Bertino L, Brasseur P, Fukumori I, Kamachi M, Martin M, Morgensen K, Oke P, Testut CE, Verron J, Weaver A (2009) Ocean data assimilation systems for GODAE. Oceanography 22(3):96–109
Evensen G (1994) Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J Geophys Res 99(C5):10143–10162
Evensen G (2003) The ensemble Kalman filter: theoretical formulation and practical implementation. Ocean Dyn 53:343–367
Evensen G (2004) Sampling strategies and square root analysis schemes for the EnKF. Ocean Dyn 54:539–560
Evensen G (2007) Data assimilation, the ensemble Kalman filter. Springer, New York, p 279
Evensen G, van Leeuwen PJ (2000) An ensemble Kalman smoother for non-linear dynamics. Mon Wea Rev 128:1852–1867
Hamill TM, Snyder C, Whitaker JS (2003) Ensemble forecasts and the properties of flow-dependent analysis-error covariance singular vectors. Mon Wea Rev 131:1741–1758
Houtekamer PL, Mitchell HL (2001) A sequential ensemble Kalman filter for atmospheric data assimilation. Mon Wea Rev 129:123–137
Hunt B, Kalnay E, Kostelich E, Ott E, Patil DJ, Sauer T, Szunyogh I, Yorke JA, Zimin AV (2004) Four dimensional ensemble Kalman filtering. Tellus 56A:273–277
Jazwinski AH (1970) Stochastic processes and filtering theory. Academic Press, San Diego
Kalman RE (1960) A new approach to linear filtering and prediction problems. J Basic Eng 82:35–45
Kalnay E (2003) Atmospheric modeling, data assimilation and predictability. Cambridge University Press, Cambridge, p 341
Lermusiaux PFJ (2001) Evolving the subspace of the three dimensional ocean variability: Massachusetts Bay. J Mar Syst 29:385–422
Lermusiaux PFJ, Robinson AR (1999) Data assimilation via error subspace statistical estimation, Part I: theory and schemes. Mon Wea Rev 127(7):1385–1407
Miller RN, Ehret L (2002) Ensemble generation for models of multimodal systems. Mon Wea Rev 130:2313–2333
Nerger L, Hiller W, Schröter J (2005) A comparison of error subspace Kalman filters. Tellus 57A:715–735
Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink ocean data assimilation system (BODAS). Ocean Model 21:46–70
Ourmières Y, Brankart JM, Berline L, Brasseur P, Verron J (2006) Incremental analysis update implementation into a sequential ocean data assimilation system. J Atmos Ocean Technol 23(12):1729–1744
Pham DT (2001) Stochastic methods for sequential data assimilation in strongly non-linear systems. Mon Wea Rev 129:1194–1207
Pham DT, Verron J, Roubaud MC (1998) A singular evolutive extended Kalman filter for data assimilation in oceanography. J Mar Syst 16:323–340
Reichle RH, McLaughlin DB, Entekhabi D (2002) Hydrologic data assimilation with the ensemble Kalman filter. Mon Wea Rev 130:103–114
Sakov P, Evensen G, Bertino L (2010) Asynchronous data assimilation with the EnKF. Tellus 66A:24–29
Simon E, Bertino L (2009) Application of the Gaussian anamorphosis to assimilation in a 3-D coupled physical-ecosystem model of the North Atlantic with the EnKF: a twin experiment. Ocean Sci 5:495–510
Tippett MK, Anderson JL, Bishop CH, Hamill TM, Whitaker JS (2003) Ensemble square root filters. Mon Wea Rev 131:1485–1490
Todling R, Cohn SE (1994) Suboptimal schemes for atmospheric data assimilation based on the Kalman filter. Mon Wea Rev 122:2530–2557
Verlaan M, Heemink AW (1997) Tidal flow forecasting using reduced-rank square root filter. Stoch Hydrol Hydraul 11:349–368
Weaver A, Vialard J, Anderson DLT (2003) Three- and four-dimensional variational assimilation with a general circulation model of the tropical Pacific Ocean. Part I: formulation, internal diagnostics, and consistency checks. Mon Wea Rev 131:1360–1378
Part VI
Systems
Chapter 16
Overview Global Operational Oceanography Systems
Eric Dombrowsky
Abstract  Several systems that routinely compute ocean forecasts have been developed within the Global Ocean Data Assimilation Experiment (GODAE). They are used to deliver operational services. The cornerstones of these systems are: (1) high quality input observations obtained from space and in situ, available shortly after the measurement is made, (2) state-of-the-art realistic numerical model configurations, and (3) efficient assimilation systems that combine the observations and the model physics. To be operated routinely, these components have to be integrated into a whole system that enables routine operations and service delivery in real time. We present here these functions, highlighting their specific characteristics.
16.1 Introduction

Operational Oceanography (hereafter called OO) is a concept that has been largely pushed forward by GODAE (Smith and Lefebvre 1997). Operational is a term widely used in different communities, but its meaning and the understanding of that meaning vary significantly among them. This is why a precise definition was given by GODAE to avoid confusion and diverse interpretations. Following the GODAE Strategic Plan (IGST 2000), Operational applies "whenever the processing is done in a routine and regular way, with a pre-determined systematic approach and constant monitoring of performance". Following this definition, two important ingredients are needed: (1) the production has to be regular and systematic, and the service schedule has to be pre-determined so that the users know precisely which services they will get, when and how, and (2) the performances, either scientific (product quality) or technical, are constantly monitored to ensure the quality of the service.
E. Dombrowsky () Mercator Océan, Parc Technologique du Canal, 8–10 Rue Hermes, 31520 Ramonville Saint Agne, France e-mail:
[email protected] A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_16, © Springer Science+Business Media B.V. 2011
Fig. 16.1 The three pillars of operational oceanography real-time systems: (1) space observations, (2) in situ measurements, and (3) ocean circulation models coupled with data assimilation systems to provide ocean forecast services
The real-time OO systems are based on three pillars, as illustrated in Fig. 16.1: (1) remote sensing observing systems, (2) in situ observing systems, and (3) assimilative ocean general circulation models (OGCMs) that combine these observations and issue a forecast from which services are delivered to users. This paper concentrates on the systems developed to deliver real-time OO services, including forecasts, for the ocean physics only. Biogeochemical services and reanalysis will not be considered in this paper. After this brief introduction, Sect. 16.2 presents a brief overview of the GODAE operational oceanography systems. Section 16.3 presents the key functional elements of operational systems, which transform input data into services delivered to the users through a processing chain. This list of key elements and their characteristics came out of the analysis of the existing systems developed within GODAE. Section 16.4 then presents and discusses some important non-functional aspects of OO systems.
16.2 Overview of GODAE OceanView Operational Oceanography Systems

Several OO systems have been developed in the last decade. A detailed presentation of various aspects of these systems can be found in the papers published in the special issue of the Oceanography magazine (Vol. 22, No. 3, Sept 2009) dedicated to GODAE. In particular, Dombrowsky et al. (2009) give an overview of the GODAE real-time systems status as of 2008. Table 16.1 below gives an update as of March 2010
Table 16.1 The main characteristics of the 13 GODAE real-time systems as of January 2010: BLUElink> (Australia), C-NOOFS (Canada), ECCO-JPL (US), FOAM (UK), HYCOM/NCODA (US), MERCATOR (France), MFS (Italy), MOVE/MRI (Japan), NCOM and NLOM (US), NMEFC (China), RTOFS (US) and TOPAZ (Norway). The first column gives the name of the system (alphabetical order), the second column the name of the OGCM used, column 3 the coverage (global and/or regional), column 4 the horizontal resolution of the OGCM configurations used, column 5 the main characteristics of the vertical resolution (type, number of levels/layers), and the last column the type of data assimilation system used. Systems with global coverage are highlighted in red and systems with eddy-resolving capabilities (threshold 1/10°) in blue. Note that three systems now provide eddy-resolving forecast services for the global ocean.

BLUElink>: MOM 4; Global (1°) + Regional (1/10°); 47 z-levels; Ensemble OI
C-NOOFS: NEMO; Canadian Atlantic (1/4°); 50 z-levels; None
ECCO-JPL: MIT OGCM; Global (1° × 0.3°); 46 z-levels; Kalman Filter + Smoother
FOAM: NEMO; Global (1/4°) + Regional (1/12°); 50 z-levels; Analysis correction
HYCOM/NCODA: HYCOM; Global (1/12°); 32 hybrid; Multivariate OI
MERCATOR: NEMO; Global (1/4° + 1/12°) + Regional (1/12°); 50 z-levels; SEEK Filter
MFS: NEMO; Mediterranean (1/16°); 71 z-levels; 3D-VAR
Move/MRI: MRI.com; Global (1°) + Regional (1/2° + 1/10°); 50 z-levels; 3D-VAR
NCOM: POM; Global (1/8°); 42 hybrid; 2D-OI + 3D-VAR
NLOM: NLOM; Global (1/32°); 6 + 1 layers; 2D-OI
NMEFC: Lap/Cas; Tropical Pacific (2° × 1°); 14 z-levels; 3D-VAR
RTOFS: HYCOM; North Atlantic (4–18 km); 26 hybrid; 3D-VAR
TOPAZ: HYCOM; North Atlantic + Arctic (11–16 km); 22 hybrid; EnKF (100 members)
of the main characteristics of the systems presented in this paper, whose development continues within the initiative following on from the work done by the International GODAE Steering Team (IGST): GODAE OceanView (Le Traon et al. 2010). We see in this table that 13 systems operated by the OceanView partners provide forecast services in real time, in several countries around the world:

• In the USA, several systems are operated by several agencies:
− ECCO, operated at JPL
− the HYCOM/NCODA, NCOM and NLOM systems, operated by the US Navy at NAVOCEANO
− the RTOFS system, operated at NOAA
• In Europe, several systems are operated in different countries:
− FOAM, operated by the Met Office in the UK
− MERCATOR, operated by Mercator Océan in France
− MFS, operated by INGV in Italy
− TOPAZ, operated by NERSC and Met.no in Norway
• In Australia, with BLUElink> operated by the Bureau of Meteorology
• In Canada, with C-NOOFS operated by Fisheries and Oceans Canada (DFO)
• In China, with the NMEFC system operated by the National Marine Environmental Forecasting Center (NMEFC)
• In Japan, with MOVE/MRI operated by the Japan Meteorological Agency (JMA).

The reader can refer to the Dombrowsky et al. (2009) paper for more details and references to the modeling and assimilation tools used. Only a few systems have developed a global eddy-resolving capacity: the US Navy and Mercator Océan operate the eddy-resolving global systems. This is mainly due to computer resource limitations (discussed below). In order to reach eddy-resolving horizontal resolution, several centres have developed and operate regional high-resolution configurations in their region of interest, which are embedded in larger-scale configurations. For example, MOVE/MRI has implemented a full suite of systems downscaling from a global configuration with 1° horizontal resolution to a 1/2° North Pacific configuration, and then to a 1/10° configuration in their local region of interest, i.e. the Kuroshio region. In the USA, NOAA and the Navy are now concentrating on the development of common tools based on HYCOM configurations, which will progressively replace the existing systems (e.g. NLOM and NCOM).
In Europe, within GMES (the Global Monitoring for Environment and Security European initiative, see Ryder 2007), the OceanView partners (associated with other non-OceanView partners) are currently developing an integrated capacity to provide marine core services to users. These developments occur within the 3-year MyOcean project (Bahurel et al. 2010), which is funded by the European Commission. MyOcean was kicked off early in 2009 and aims at building, on existing assets including those of GODAE, the basis of such marine core services in Europe.
In addition to this existing capacity, other countries, among which are Brazil and India, are already operating such systems or have planned projects to implement them. They are not listed here because, even if they may join OceanView soon, they are not yet part of it. One of the goals of OceanView is to include these emerging operational oceanography players as soon as their case is made. The minimum requirements to be part of OceanView are (1) multi-year national support to operational oceanography development within the country, with a strong scientific base, and (2) willingness to join OceanView.
16.3 The Key Functions of OO Systems

16.3.1 Observations, Model and Data Assimilation

16.3.1.1 Quality Input Data in Near Real-Time

To run OO systems, one needs input observations and atmospheric forcing whose characteristics are: high quality and high availability in real time.
This is one of the most outstanding challenges of OO, because without these inputs no performance (such as predictive skill) can be expected from OO systems. Ocean observations are used for two main purposes:

• for assimilation, to maintain the simulated ocean state as close as possible to the real ocean state as it is observed,
• for validation of the products.

The observations considered in the existing OO systems are:

• remote sensing: e.g. surface elevation, surface temperature, ocean color, ice concentration and drift;
• in situ: e.g. profiles of temperature and salinity from ARGO profilers, XBTs, CTDs, drifting buoys and moored stations.

For these observations, there is a need to get data that are quality controlled (QC), processed as soon as possible after the measurement is made with state-of-the-art algorithms (to achieve a quality comparable to the one that could be obtained in delayed mode), and disseminated without delay to the OO centres. Dedicated processing centres were set up during GODAE to handle most of the observations listed above. This is the result of a strong international organisation and effort that need to be pursued in the long run.
The data used are generally point measurements. Some systems also use gridded products. This is the case, for instance, when the assimilation system used is not advanced enough to handle point measurements; off-line gridding procedures are then considered useful. However, the goal is to develop observation operators that allow handling data that are as close as possible to the measurements, in order to take into account most of the space/time information contained in the observations.
Observations are sometimes bad, for example when there is a problem with the sensor itself or with the real-time processing. In that case, the data must be rejected by the QC procedure, because any bad data entering the system can create, if not rejected, spurious features in the analysed fields; the recovery after such bad data have been incorporated may last for a long time before their influence vanishes, and may eventually never be fully achieved. On the other hand, as many good data as possible have to enter the system, because the observations are crucial for system performance and they are too sparse (the ocean and its variability are largely under-sampled by the existing observing systems). Any loss of data is detrimental to the service quality (there is almost no redundancy in the observation systems). This is one of the major challenges OO has to face: reject all bad data, but reject as few good data as possible. This is the role of input data QC, as described in Cummings 2010 (reference in this book).
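As a simple illustration of this trade-off (not the QC procedure of any GODAE centre; the thresholds, error values and data below are invented), a background check of the kind commonly used in practice rejects observations whose departure from the first guess is implausibly large:

```python
# Hedged sketch of a background (first-guess) quality-control check.
import numpy as np

def background_check(obs, first_guess, sigma_obs, sigma_bkg, k=4.0):
    """Accept observations whose innovation |y - Hx_b| is within k times the expected spread."""
    expected_spread = np.sqrt(sigma_obs**2 + sigma_bkg**2)
    innovation = obs - first_guess
    return np.abs(innovation) <= k * expected_spread

sst_obs = np.array([18.2, 18.9, 25.4, 17.8])   # third value: a gross sensor error
sst_fg = np.array([18.0, 18.5, 18.3, 18.1])    # model first guess at the observation points
flags = background_check(sst_obs, sst_fg, sigma_obs=0.5, sigma_bkg=0.8)
print(flags)                                    # [ True  True False  True]
```

Choosing the rejection factor k too small throws away good data in energetic regions, while choosing it too large lets gross errors contaminate the analysis; this is the practical expression of the dilemma described above.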
While some GODAE centres have developed their own capacity for observation acquisition and processing, such as the Altimeter Data Fusion Center (ADFC) for the altimeter data of the US Navy systems, others rely on dedicated centres that deal specifically with these observation issues, such as the Physical Oceanography Distributed Active Archive Center (PO.DAAC) or the Data Unification and Altimeter Combination System (DUACS) for the altimeters, CORIOLIS for the in situ observations, or the Global High Resolution Sea Surface Temperature (GHRSST) project for the dissemination of sea surface temperature observations.
As far as the real-time flow of observations to OO centres is concerned, there is still room for improvement by further reducing the delay between measurement time and the availability of observations for assimilation, which today ranges from several hours to a few days, depending on the observation type.

16.3.1.2 Forcing Fields

In addition to observations, atmospheric forcing fields are required for the near past, the present and the future (forecast). They come from the outputs of Numerical Weather Prediction (NWP) services, either as atmospheric variables near the sea surface that are used to interactively compute the heat, momentum and/or freshwater fluxes through bulk formulae (a schematic example is sketched at the end of this subsection), or directly as estimates of these fluxes. In the latter case, the estimates made by the NWP services generally assume a steady ocean, which differs from the real one and from the one simulated by the OO system. This may create spurious effects such as the accumulation of biases in the upper ocean (no retro-action of the ocean on the atmosphere for uncoupled systems). For the near past, remote sensing data, among which are satellite observations of wind and surface temperature, can also be used to compute the forcing terms. For more details about the input data, see Ravichandran 2010; Le Traon 2010 and Josey 2010 (references in this book).
There are several options for using the atmospheric forcing, among which are:

• include or not the high frequencies (analytical daily cycle; use of hourly, 3-hourly, 6-hourly or daily fields),
• merge the forcing with observations for the hindcast,
• extend the ocean forecast range beyond the atmospheric forecast range by reverting towards climatological forcing, using different approaches (e.g. Mercator Océan and the US Navy),
• use an in-house atmospheric model (e.g. NOGAPS for the US Navy systems),
• run coupled systems (e.g. the NCEP RTOFS system for tropical cyclones).

Some OceanView OO systems are run by the met agencies themselves and use their own NWP products. This is the case, for example, for the Met Office (UK), NOAA/NCEP (USA), MRI/JMA (Japan), BoM (Australia) and EC (Canada). Other systems rely on external NWP systems, such as Mercator Océan (France), MFS (Italy) and TOPAZ (Norway) using ECMWF products, and NMEFC (China) using NCEP products.
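The sketch below illustrates the bulk-formula idea mentioned in Sect. 16.3.1.2 for the wind stress only. The constant drag coefficient is an assumption made for clarity; operational systems use stability-dependent parameterisations and also compute heat and freshwater fluxes.

```python
# Hedged example of a bulk computation of wind stress: tau = rho_air * C_d * |U10| * U10.
import numpy as np

RHO_AIR = 1.22      # air density (kg m-3)
C_D = 1.3e-3        # neutral drag coefficient, assumed constant for this illustration

def wind_stress(u10, v10):
    """Zonal and meridional wind stress (N m-2) from 10-m wind components (m s-1)."""
    speed = np.hypot(u10, v10)
    taux = RHO_AIR * C_D * speed * u10
    tauy = RHO_AIR * C_D * speed * v10
    return taux, tauy

taux, tauy = wind_stress(np.array([8.0, -3.0]), np.array([2.0, 6.0]))
print(taux, tauy)
```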
16.3.1.3 Model Configurations

Model configurations are needed to perform forecasts, i.e. estimates of the ocean state and its evolution in the future. The target of this function is to provide forecast estimates which are better than climatology and better than persistence (using the nowcast estimate and assuming no temporal evolution to predict the future).
To be able to issue a forecast (and even a nowcast, since observations are available with some delay in real time), an ocean numerical dynamical model is needed. In the past, some systems were developed based on simplified models (for example quasi-geostrophic ones), but nowadays, thanks to the increase of computing power and the emergence of communities developing state-of-the-art codes, all systems use numerical OGCMs solving the primitive equations. One key point here is that an OGCM code should be used and developed by a large community to be suitable for OO, preferably including scientists from the academic research community. For instance, the NEMO (Nucleus for European Modelling of the Ocean) code tends to be largely adopted in Europe by OO centres, thanks to a shared effort to maintain this code in the scientific and operational community. Similarly, HYCOM (HYbrid Coordinate Ocean Model) is now adopted by the major OO centres in the USA, benefiting from the investments of a large community.
These OGCMs differ in several aspects of their implementation, among which are the vertical coordinate systems, mixing schemes, boundary layer, turbulence closure, free surface and advection schemes. However, there is a commonality of needs:

• These codes have to be implemented in realistic configurations, which implies having good bathymetry datasets (and, for the global systems, grids solving the North Pole singularity, such as the tripolar grids illustrated in Fig. 16.2).
• They have to be efficient in terms of computing and numerical schemes (e.g. implementing explicit parallelisation through the Message Passing Interface (MPI) and domain decomposition techniques), because of the need to reduce the elapsed time between the reference date of the last data entering the system and the time at which the service is delivered to the user (timeliness).

The horizontal resolution of these configurations may be eddy-resolving, eddy-permitting or low resolution, depending on both the needs and the computing capacity. It is generally admitted that to resolve the eddies (eddy-resolving systems), the horizontal resolution should be at least 1/10°. These configurations are either basin-scale or global, depending again on the needs and the capacity of the centres. The vertical resolution is generally enhanced near the surface, with O(50) levels. Details of the models used in OO systems can be found in Barnier (2010), Chassignet (2010) and Hurlburt (2010) (references in the same book).
Short-range (a few days) synoptic forecasts are generally deterministic, with a single model run starting from the initial conditions obtained with the assimilation scheme (analysis), and forced by a synoptic atmospheric forecast. The forecast range is mostly limited by the atmospheric forecast range (availability of synoptic forcing). However, forecast skill has been demonstrated for the ocean beyond the predictability scales of the atmosphere: e.g. the US Navy systems (Smedstad et al. 2003, up to 20–30 days), reverting to climatological forcing beyond the synoptic forecast range. In addition, some studies have shown that using non-deterministic atmospheric forecasts can improve predictability in the ocean (Drillet et al. 2009).
Fig. 16.2 Example of the grid used by Mercator and FOAM to solve the North Pole singularity. The grid is a tripolar ORCA grid (Madec and Imbard 1996). It is a classical Mercator grid up to a given latitude in the northern hemisphere; it then smoothly evolves toward a dipolar grid, with the singularities placed on the continents. For the sake of visibility, only a sub-sample of the gridlines actually used is shown here. A similar tripolar grid is used for the US global HYCOM configurations
However, most ocean short-term deterministic forecasts are issued using deterministic atmospheric forcing. For longer timescales (from months to decadal climate runs), the OGCM is generally coupled with an atmospheric GCM, and possibly with other Earth system component models such as hydrology and ice, to ensure the interactions between all these components.
Other methods, such as the ensemble (statistical) forecasts used in the atmosphere, or the merging of several forecasts produced by several systems through super-ensemble techniques (Krishnamurti et al. 1999), are not much developed yet in ocean forecasting centres. They will probably develop in the near future as a complement to synoptic deterministic forecasts.
16.3.1.4╅Efficient Assimilation Techniques In OO, the assimilation system is designed to fulfil two goals: (1) to provide the best initial conditions for the deterministic forecast, and (2) to provide the model trajectory that best fits the past observations, in order to have the best information possible about the state of the ocean in the past, eventually up to real-time. Consequently, the assimilation schemes merge the observations and the previous model forecast (also called the background state), taking into account their relative error characteristics through an analysis procedure which provide the best set of ocean variables on the model space necessary to initialize the model to run the forecast. Note that these two objectives are not necessarily compatible, and that the best model trajectory will not necessarily provide the best initial conditions for a model forecast. The first objective applies for the operational forecasting activities, while the second relates more to the reanalysis activity. Most analysis schemes are based on the Best Linear Unbiased Estimate (BLUE) theory such as Optimal Interpolation (OI), Kalman Filters (KF), variational algorithms (VAR) and their variants, such as the sequential smoother which take into account future observations in the analysis. These assimilation schemes are designed to compute the best set of weights to apply in a weighted average of the background state (previous forecast) and the observations, taking into account their error characteristics. All these schemes work as a succession of forecast/analysis cycles, creating a discontinuous trajectory, with a typical saw tooth shape at the update cycle interval. To avoid the corresponding shocks to the model trajectory, the Incremental Analysis Update (IAU, see Bloom et€ al. 1996) has been introduced in several operational assimilation systems. It basically consists of adding small part of the increment at each time step for a given period (typically from a few hours to a few days) as a forcing term, instead of adding the full increment once at a given time step. This leads to a smooth trajectory, limiting the transients after the analysis shock. The counterpart is that this trajectory is no longer following the governing equations of the fluid, during the IAU period. The 4D-VAR schemes are different since their goal is to get the best continuous model trajectory that fits the observations while keeping the physical balance: no statistical increment is added during model integration They consist of optimizing the trajectory changing some of the degrees of freedom of the system (such as initial conditions, atmospheric forcing, etc.), using convex optimisation (quadratic cost function minimisation). They require the usage of the adjoint model to reduce the minimisation cost (computation of the gradient of the cost function with only one forward and one backward integration of the model and its adjoint). There is currently no implementation of such a 4D-VAR method in OO forecasting centres. For more details about the assimilation theory and practical aspects, see Zaron 2010; Brasseur 2010 and Moore 2010 (references in the same book). There is an obvious competition between model resolution, and assimilation scheme complexity. One key aspect for OO of this function is to have a good balance between the computational cost of the model and the assimilation scheme.
There is an obvious competition between model resolution and assimilation scheme complexity. One key aspect of this function for OO is to achieve a good balance between the computational cost of the model and that of the assimilation scheme. For example, applying an O(100)-member Ensemble Kalman Filter will multiply the model computational cost by O(100), while doubling the horizontal resolution would multiply the cost by O(10) only. Since model resolution is still a key limiting factor in terms of forecast skill for many applications, most systems are based on assimilation schemes whose computational cost is of the same order of magnitude as that of the model alone. For example, the US Navy and Mercator Océan started with the development of regional high-resolution models with fairly simple assimilation systems, until the computer power allowed the implementation of more sophisticated assimilation methods and global models. The TOPAZ system in Norway is an exception among the GODAE systems: it has implemented an advanced (costly) assimilation scheme (the Ensemble Kalman Filter) since the beginning, starting with a fairly modest horizontal resolution model configuration and increasing it following the evolution of computing power.
16.3.2 Product Generation and Quality Monitoring

Raw model products are generally files containing the ocean variables on the model computational grid. The computational grids are not necessarily adequate for most applications: they can be staggered (Arakawa grids), rotated, stretched, irregular, and may vary with time (such as the isopycnal vertical coordinates in HYCOM). Consequently, post-processing of the raw model output is generally needed to create numerical products that are more convenient for the users. This function can include remapping onto a standard grid, tiling, averaging, applying file name and format conventions, and transforming model variables into user-oriented products. This function is important to deliver the services to users. However, probably because the targeted users and the services offered are specific to each OceanView centre, there is a wide variety of implementations of this function in the OceanView systems.
We have seen above that the functions (model, assimilation) that create the raw products used to deliver services are at the leading edge of current scientific knowledge. In addition, the quality of the products depends on the quantity and quality of the input data. These two reasons are sufficient to require the implementation of systematic product quality control and monitoring. There are mainly two timescales to consider:

• short-loop validation: model output and products need to be checked before they are disseminated to the users;
• long-loop validation: the quality of the products can vary slowly; for example, biases in the water masses can appear after several months, or seasonal biases can develop.

The first category includes automated quality control, such as comparison to predefined thresholds, and software-assisted human expertise based for instance on automated graphics, indicator and metric generation, and interactive viewing.
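A minimal sketch of such an automated short-loop check is given below; it is not the procedure of any particular centre, and the diagnostics, thresholds and reference value are invented for illustration.

```python
# Hedged sketch of a "short loop" automated product check before dissemination.
import numpy as np

def short_loop_check(sst_field, valid_range=(-2.0, 35.0),
                     reference_global_mean=18.0, max_mean_drift=0.5):
    """Return (ok, report) for one SST product field (degrees Celsius)."""
    report = {
        "min_in_range": float(np.nanmin(sst_field)) >= valid_range[0],
        "max_in_range": float(np.nanmax(sst_field)) <= valid_range[1],
        "mean_drift_ok": abs(float(np.nanmean(sst_field)) - reference_global_mean) <= max_mean_drift,
    }
    return all(report.values()), report

# A synthetic SST field standing in for the latest forecast product
sst = 18.0 + 0.5 * np.random.default_rng(1).standard_normal((180, 360))
ok, report = short_loop_check(sst)
print(ok, report)       # products failing the check would be held back for expert inspection
```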
For example, Mercator Océan operators routinely follow a control procedure to check the products every time the forecast is performed. These controls are based on images and diagnostics automatically generated by the system, which the operators compare to templates and thresholds provided in an operational validation manual. This allows them to check whether the routine production conforms to what can be expected from the system. In order to define the reference, the scientists in charge of the system development perform a long (at least a full year, preferably multi-year) hindcast assimilation integration with an exact copy of the real-time system. This reference hindcast simulation is used to calibrate the different validation thresholds, and to define the reference which is provided to the operators in the operational validation manual for the real-time routine validation. Then, once the system is operated routinely, it takes less than 1 h (wall clock) on average to validate all the global and regional forecast products. In case of doubt, the operators can further investigate the quality of the products by visualizing the 4D fields with interactive quick-look viewing software (which also allows comparison with the observations and the climatology, inspection of the forcing fields, zooming into regions and at depth, etc.). In the ultimate case where they detect an anomaly that they cannot solve with predefined procedures, they call the specialists (scientists) to further investigate the problem in order to solve it and return to normal operations as soon as possible.
The second category of quality control is based on regular studies, looking at regions or specific phenomena depending on what is happening in the real ocean, or on opportunities (for example a scientific campaign in one region). These studies can include comparisons with other systems (as has been done within GODAE, see Hernandez et al. 2009). In any case, these quality control studies have to be as exhaustive as possible (all regions, all seasons and all phenomena). This is extremely costly in terms of human resources, and can only be afforded if there is a strong involvement of the users and the scientific community.
16.4 Non-Functional Aspects

16.4.1 Operational Resources

Systems have to be operated on a routine basis with constant and systematic monitoring of performance, as stated in the definition of Operational given in the introduction. This means that operational resources are necessary to ensure service continuity. This concerns the IT resources (computing, storage and network) but also the people. Staff dedicated to these tasks have to be involved, with backups in case a person cannot come to work for various reasons. Such activity cannot be done with R&D-type staff only and is not compatible with a "best effort" approach in the long run. In addition, the best R&D system may not be suitable for operational purposes. For example, a system that gives very good results most of the time but does not always converge will not enable the delivery of a continuous service.
To implement an OO system, one has to define the performance target not only in terms of product quality (scientific performance) but also in terms of service availability. Operational activities include the continuous measurement of the effective performance relative to the target, and the measurement of the progress (at least non-regression) made at each system upgrade.
16.4.2 Research and Development

The gap between what we can achieve today (technology and science push) and the users' needs (user pull) is still large for some applications, and in order to better match the users' needs in terms of accuracy and performance, the OO systems have to be based on ingredients that are at the leading edge of research, such as the state-of-the-art models and assimilation techniques described above. Fortunately, this gap can be (and is continuously being) reduced, thanks to advances made in our knowledge and tools. This can be done if there are sufficient research efforts associated with the OO development. This is why most OceanView groups have a strong internal R&D effort to support their operational systems.
16.4.3 User Involvement

The OO systems are built to deliver services to users, and their feedback on the service quality is very important to feed the virtuous loop of continuous service improvement. All the successful OO systems developed within GODAE have a strong user base. For instance, navies, as supporting users, have played a major role in the development of OO during GODAE in several nations such as the US, Australia, Canada, the UK and France. However, obtaining this feedback has to be organised, through a real engagement of the users in the OO development.
16.4.4 High Performance Computing Facility

Running systems involving eddy-resolving basin-scale model configurations with decent assimilation schemes requires high performance computing facilities. For example, the 1/12° Mercator Océan global model has 3059 × 4322 × 50 grid points, i.e. O(10^9) grid points. A corresponding 3D array (for one variable, e.g. temperature) represents 5.3 GB in computer memory. The total memory size of the corresponding system (assimilation and model) is of the order of 1 TB. This system is run weekly with a 2-week hindcast and 2 analyses, it has a backward IAU (which means the model is run twice per assimilation cycle), and a 7-day
forecast is then issued. It runs on 4 nodes (64 processors) of the Météo-France NEC SX-9 machine. The model alone takes 1 h (wall clock) per week (the time step for this non-tidal model is 480 s), and each analysis takes 0.75 h (wall clock). This means a total of about 9 h (2 analyses, 35 model days and the computation of diagnostics) elapsed every week to deliver the forecast.
Doubling the resolution, to better match the requirements of the users for which 1/12° is barely sufficient to resolve the physics they are interested in, would mean multiplying the size (memory) by a factor of 4 and the need for CPU time by a factor of 8 on the same computer. This illustrates the fact that computing resources are still a key limiting factor for the development of global high-resolution operational oceanography systems, and that high performance computing facilities are required to be able to run such systems. Fortunately, the available computing power increases regularly, following an empirical law originally introduced by Moore in 1965 (known as Moore's law), which states that the available computing capacity doubles roughly every 18 months.
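The sizing figures quoted above can be reproduced with a few lines of arithmetic; the 8-byte (double precision) storage per value is an assumption made for the estimate, and actual systems may store single precision, so treat the numbers as order-of-magnitude values.

```python
# Worked version of the memory and scaling estimates discussed above.
nx, ny, nz = 3059, 4322, 50              # Mercator Ocean global 1/12 degree grid
bytes_per_value = 8                      # assumed double precision storage

grid_points = nx * ny * nz
one_field_gb = grid_points * bytes_per_value / 1e9
print(grid_points, round(one_field_gb, 1))   # ~6.6e8 points, ~5.3 GB per 3D variable

# Doubling the horizontal resolution: 2 x 2 more grid points (memory x 4); with the
# time step halved for numerical stability, the CPU cost grows by about 2 x 2 x 2 = 8.
print("memory factor:", 2 * 2, " cpu factor:", 2 * 2 * 2)
```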
16.4.5 Storage, Dissemination Capacity and Service Delivery

The OO systems generate a huge amount of data that have to be physically stored to provide the services. For example, the volume generated by the Mercator Océan global 1/12° system is 0.5 TB per week, and the total volume that will be archived in 2010 by Mercator Océan corresponds to 60 TB. Producing and archiving these data would be useless without enabling fast and easy access for the users. This concerns the data produced regularly in real time as well as past data, which have to be archived in order to (1) deliver services on past data (a common user request), (2) assess the performance of system upgrades in the long run, and (3) compute climatological datasets (another common user request).
The problem is not only a question of storage capacity, but also a question of efficient access to the stored data. This means that there is a need for efficient systems enabling the users to access the information they require directly and in a timely manner. Such big datasets cannot be delivered to everyone via a simple "ftp-like" download, especially for those who do not have a strong IT capacity. To make these data useful, efficient dissemination systems need to be implemented, based on (1) large live data repositories with fast access and large-bandwidth internet capacity, and (2) functions to help the user browse the archives (catalogue, inventory, documentation and metadata), preview the data, and extract what they need (efficient subsetting, aggregating, extracting and remapping tools). Fortunately, technologies for this (such as OPeNDAP/THREDDS and LAS, associated with big data sharing systems) exist, are efficient, and are developing rapidly, following the pace of the development of the OO systems.
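To indicate what such access typically looks like in practice, the sketch below subsets a remote dataset over OPeNDAP with the xarray library; the URL and variable names are placeholders, not a real service endpoint.

```python
# Hedged illustration of server-side style subsetting through an OPeNDAP/THREDDS endpoint.
import xarray as xr

OPENDAP_URL = "https://example.org/thredds/dodsC/global_analysis/latest"  # placeholder URL

ds = xr.open_dataset(OPENDAP_URL)            # lazy open: no bulk download of the archive
subset = (
    ds["temperature"]                        # hypothetical variable name
    .isel(depth=0)                           # surface level only
    .sel(longitude=slice(100.0, 160.0),      # a regional window
         latitude=slice(-50.0, 0.0))
)
subset.to_netcdf("regional_surface_temperature.nc")   # only the subset is transferred
```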
In addition to software tools, a human organisation has to be put in place: the Service Desk (or Help Desk), to help the users with the data, to register requests and to make sure that the service is delivered. This means that a phone number, an e-mail address and the name of the person to call have to be provided to the users. This also means that a specific organisation, with dedicated people committed to these tasks, has to be set up at the centres which disseminate the products. This is also an operational function.
16.5 Conclusions

We have seen here that Operational Oceanography systems exist. They have been developed in several countries thanks to international coordination for the development of the ocean observing system, and thanks to coordinated actions within GODAE. Today, 13 systems, of which 3 are global eddy-resolving systems, are routinely operated in operational centres and provide routine, high quality operational services to users. Other systems will emerge and join this international endeavour in the near future within GODAE OceanView.
All these developments would not have been possible without the international effort to put the ocean observing system in place, including altimetry and ARGO autonomous in situ measurements, and the delivery in near real time of high quality data from these observing systems. All the operational systems developed to deliver real-time ocean forecast services are based on state-of-the-art ocean models and assimilation techniques that are at the leading edge of the existing knowledge in these disciplines. This means that OO could not be further developed without strong R&D efforts.
Running forecast systems in an operational context implies a strict engineering approach to design and implement these systems. We have presented the major characteristics of the main scientific functions (e.g. model, assimilation) that are currently implemented, and more specifically the computational efficiency requirements that apply to these functions in order to achieve a good trade-off between scientific performance and timeliness of the service delivery, which is very important in the OO context but probably less so for academic research or reanalysis purposes.
References

Bahurel P, Adragna F, Bell MJ, Jacq F, Johannessen JA, Le Traon PY, Pinardi N, She J (2010) Ocean monitoring and forecasting core services, the European MyOcean example. Proceedings of OceanObs'09 conference
Bloom SC, Takacs LL, Da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Wea Rev 124:1256–1271
Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanography 22(3):80–95
Drillet Y, Garric G, Le Vaillant X, Benkiran M (2009) The dependance of medium range northern Atlantic Ocean predictability on atmospheric forecasts. J Oper Oceanogr 2(2):43–55
Hernandez F, Bertino L, Brassington G, Chassignet E, Cummings J, Davidson F, Drévillon M, Garric G, Kamachi M, Lellouche J-M, Mahdon R, Martin MJ, Ratsimandresy A, Regnier C (2009) Intercomparison studies within GODAE. Oceanography 22(3):128–143
IGST (International GODAE Steering Team) (2000) The global ocean data assimilation experiment strategic plan. GODAE report no 6, Dec 2000
Krishnamurti TN, Kishtawal CM, LaRow TE, Bachiochi DR, Zhang Z, Williford CE, Gadgil S, Surendran S (1999) Improved weather and seasonal climate forecasts from multimodel superensemble. Science 285(5433):1548–1550. doi:10.1126/science.285.5433.1548
Le Traon PY, Bell M, Dombrowsky E, Schiller A, Wilmer-Becker K (2010) GODAE OceanView: from an experiment towards a long-term ocean analysis and forecasting international program. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2, ESA Publication WPP-306, Venice, Italy, 21–25 Sept 2009
Madec G, Imbard M (1996) A global ocean mesh to overcome the North Pole singularity. Clim Dyn 12:381–388
Moore GE (1965) Cramming more components onto integrated circuits. Electron Mag 38(8):114–117
Ryder P (2007) GMES fast track marine core service, strategic implementation plan. Report from the GMES marine core service implementation group to the European Commission GMES Bureau
Smedstad OM, Hurlburt HE, Metzger EJ, Rhodes RC, Shriver JF, Wallcraft AJ, Kara AB (2003) An operational eddy-resolving 1/16° global ocean nowcast/forecast system. J Mar Sys 40–41:341–361
Smith N, Lefebvre M (1997) The Global Ocean Data Assimilation Experiment (GODAE). Paper presented at Monitoring the Oceans in the 2000s: an integrated approach, Biarritz, France, 15–17 Oct 1997
Other Lecture Notes (in the same volume): Cummings on QC; Ravichandran, Le Traon and Josey on input data; Barnier, Chassignet and Hurlburt on models; Zaron, Brasseur and Moore on data assimilation
Chapter 17
Overview of Regional and Coastal Systems
Jiang Zhu
Abstract  During the GODAE period, a number of coastal and regional systems for short-range ocean forecasts in the Asia-Oceania region have been developed. This paper first provides an overview of these operational forecast systems and some pre-operational systems developed by Australia, China, Denmark, India, Japan and Korea in terms of model domain, resolution, models, data inputs and data assimilation schemes. These systems cover some key ocean areas in Asia-Oceania. The services, products, users and feedback provided by these systems are then shown briefly. Some operational ocean analyses and forecasts support both data products and online graphical public services. Some systems, such as the Bluelink ocean forecasting system of Australia, have proved skilful in forecasting coastally trapped waves, coastal upwelling, the offshore ocean state and the boundary currents of the Australian coast. As evidence of the utility of these regional systems, some highlighted examples are also given. For example, some Japanese systems successfully predicted the Kuroshio large meander in 2004; an operational system provided services for the 2008 Olympic sailing events; and a Japan Sea/East Sea forecasting system has been used for the successful reproduction and prediction of large numbers of giant jellyfish in the Japan Sea/East Sea. All these systems are strongly connected with the GODAE products. The Argo and GHRSST datasets are essential inputs for the initialization of these forecast systems. The Indian Ocean is relatively less covered by these regional systems. However, the GOOS-CLIVAR effort in establishing the Indian Ocean Observing System will improve the situation, and some of its progress is highlighted. From the practice of testing and applying these regional and coastal systems, some scientific problems can be explored and important lessons can be learnt. In these lecture notes, as an example, we discuss the SST predictability and forecast error growth in the China marginal seas. Based on a series of 7-day hindcast experiments over a one-year period (2006), with the initial conditions provided by assimilating SST and altimetry data, the root mean square errors (RMSEs) of the
J. Zhu () Institute of Atmospheric Physics, Chinese Academy of Sciences, Qi jia huo zi, De sheng men wai Street, Beijing 100029, People's Republic of China e-mail:
[email protected] A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_17, © Springer Science+Business Media B.V. 2011
hindcast SST are less than 0.7°C in the shallow Bohai and Yellow Seas (BYS) for forecast lead times up to seven days. In the East China Sea (ECS), where the energetic Kuroshio passes, the RMSEs of SST reach 0.9°C as the lead time increases to 7 days. In the South China Sea (SCS), the 7-day hindcasts have a mean RMSE of less than 0.6°C. The hindcast skill also shows some seasonal dependence: in the SCS it is found to be higher in the summer season and lower in winter. Analysis of the results shows that the hindcast RMSEs are strongly associated with strong SST fronts and surface current jets, which can introduce hindcast errors through horizontal advection. Some problems for further improvement are identified.
17.1 Introduction

Coastal ocean nowcast/forecast systems are of increasing interest to a wide range of society. For instance, improved forecasts of future states in target areas are required for marine management and rescue, pollution control, and the mitigation of damage from coastal flooding and harmful algal blooms, in addition to the traditional requirements of shipping navigation and fisheries management. The 2002 GODAE Development and Implementation Plan states: "Climate and seasonal forecasting, navy applications, marine safety, fisheries, the offshore industry and management of shelf/coastal areas are among the expected beneficiaries of GODAE." The usefulness of GODAE systems for coastal and shelf-sea forecasting will therefore be one of the measures of the success of the project. During the GODAE period, various coastal and regional systems for short-range ocean forecasts in the Asia-Oceania region have been developed. The operational forecast systems and some pre-operational systems developed by Australia, China, Denmark, India, Japan and Korea listed in Table 17.1 are some examples. These systems cover some key ocean areas in Asia-Oceania.
Bluelink is an Australian partnership between the Commonwealth Scientific and Industrial Research Organisation, the Australian Bureau of Meteorology (ABoM) and the Royal Australian Navy. The primary objective of Bluelink is the development of a forecast system for the mesoscale ocean circulation in the Australian region and adjacent basins of importance. The Bluelink forecast system (Brassington et al. 2007) became operational at the ABoM in August 2007, providing seven-day forecasts twice per week.
In Japan, several systems have been developed. The Japan Meteorological Agency (JMA) started operational use of a new ocean analysis/forecasting system for the western North Pacific in March 2008. This system provides information on the ocean state in the seas around Japan. Particular attention is paid to the monitoring and forecasting of the major currents around Japan, such as the Kuroshio, the Oyashio and the Tsushima currents, because variations of these currents strongly affect the ocean state around Japan. The outputs of this system are used for various purposes. The current field in this system is used for the prediction of the position of drifting
MRI.COM
Modified POMgcs
Kyoto U OGCM POM
HYCOM
BSHcmod MOM3
MOVE/MRI. COM-WNP
JCOPE1,2
Kyoto U
CAS
YEOS ESROM
NMEFC
OFAM
Bluelink
Two-way nesting
One-way nesting
One-way nesting
One-way nesting
One-way nesting
One-way nesting
Global model
Ocean data input
BoM’s operational global In situ T, S; SSHA; SST weather prediction model JMA’s operational atmospheric In situ temperature and salinity; along track SSHA; analysis; results of climate gridded SST forecasting model Along track SSHA; in-situ T, 6-hourly NCEP Global S; along track SST Forecast System or NCEP/ NCAR reanalysis SST (NGSST); gridded SSH; in-situ T, S. NMEFC’s mesoscale weather Gridded SST Argo profiles forecast ECMWF reanalysis mesoscale Gridded SSHA; gridded SST; weather forecast in situ T, S DMI weather forecast SST ECMWF reanalysis In situ temperature; gridded SSHA; SST
Table 17.1↜渀 A summary of ocean forecasting systems in the Asia-Oceania regions System name Ocean model Nesting strategy Atmospheric forcing
Kalman filtering 3DVAR
Nudging for SST and OI for profiles EnOI
JMA
3DVAR with vertical coupled TS-EOF modes; IAU 3DVAR with vertical coupled TS-EOF modes; IAU 4D-VAR
DMI KORDI
IAP
NMEFC
Kyoto U
JAMSTEC FRA
Agency/ Institution ABoM
Data assimilation scheme EnOI
17â•… Overview of Regional and Coastal Systems 415
416
J. Zhu
targets (e.g., oil spill). Also, the ocean state in this system is used for identifying causes associated with unusual sea level. Japan Agency for Marine-Earth Science and Technology (JAMSTEC) has started an operational ocean forecast experiment for Northwestern Pacific (Japan Coastal Ocean Predictability Experiment; JCOPE) in December 2001. Fisheries Research Agency (FRA) has been operated the first version of JCOPE ocean forecast system (JCOPE1) since April 2007 for management of fishery resources of Japan by coupling the JCOPE1 with ecosystem model (FRA-JCOPE). JAMSTEC has further developed the second version of the system (JCOPE2) with enhanced model and data assimilation schemes. Output of JCOPE2 is used for ship routing of oil tankers, fishery and drilling ships. Most recently Kyoto University has constructed a high-resolution 4-dimensional variational (4DVAR) ocean data assimilation system with the aim of developing an integrated monitoring of synoptic to meso-scale features observed in the mixed water region for the northwestern North Pacific which is one of the most energetic regions in the world oceans. They have demonstrated that the downscaling is a powerful approach towards the better nowcast and forecast of coastal circulations in the regions such as offshore of Shimokita Peninsula where vigorous interactions with the marginal seas occur. The National Marine Environment Forecast Centre (NMEFC) of China started its operational numerical ocean forecasting in 1990s. There are several operational forecast systems covering different domains: from large scale of western North Pacific to relatively small coastal regions such as the Bohai Sea. Their products are delivered to users across public, government and commerce. In 2006, Chinese Academy of Sciences (CAS) started developing a preoperational ocean nowcast/forecast system around the Chinese coast lead by the Institute of Atmospheric Physics (IAP). Since 2008, this system development has been enhanced further by multi-institution collaboration and establishing a long term in situ observation network of four offshore buoys that will be operated and maintained by Institute of Oceanography and Institute of South China Sea Oceanography. The aims of the system are to provide a test bed to explore the coast and shelf sea predictability and transit to operational agencies. Besides the above mentioned national efforts, international collaboration also plays an important role. The Yellow Sea is a semi-enclosed sea surrounded by China and Korea. Extensive research and cooperation have been carried out in this region both in national and international level by China and Korea. However, the existing works need yet to be integrated into a forecasting system. Major bottle-necks in developing the Yellow Sea monitoring-forecasting system are the lack of high quality, near real time weather forecasts, as well as coupled 3D ocean-ice models and operational infrastructure. With support from EU FP6 project YEOS (Yellow Sea Observation, forecasting and information System, 2007–2009, http://ocean.dmi.dk/yeos) and Danish Sailors’ Union, European weather and ocean-ice forecasting system has been applied to the Yellow Sea and a pre-operational weather-ocean-ice forecasting system has been demonstrated in a period covering Beijing Olympic Game 2008. A wide range of users have enjoyed the high resolution ocean and weather services provided by the YEOS information system. YEOS also improved data exchange between China, Korean and EU partners.
17.2 Overview of Systems

The geographic coverage of each system is shown in Fig. 17.1. Most of these systems use nesting schemes with different domains and resolutions; the single domain shown for each system in Fig. 17.1 mainly represents one of its main areas of focus. Most systems are concentrated in the western North Pacific, centred on the Japanese islands, the Korean Peninsula and the Chinese coast. The Bluelink system, surrounding Australia, covers part of the Southern Ocean and the western Indian Ocean.

Fig. 17.1 The geographic coverage of each system in Asia and Oceania

17.2.1 Bluelink

The key elements of the Bluelink system are the Bluelink Ocean Data Assimilation System (BODAS; Oke et al. 2005) and the Ocean Forecasting Australia Model (OFAM), a global ocean general circulation model. OFAM is based on version 4.0d of the Modular Ocean Model (Griffies et al. 2004), using the hybrid mixed layer model described by Chen et al. (1994). OFAM is intended to be used for reanalyses and short-range prediction. The horizontal grid has 1191 and 968 points in the zonal and meridional directions, respectively, with 1/10° horizontal resolution around Australia (90°–180°E, south of 17°N). Outside this domain, the horizontal resolution decreases to 0.9° across the Pacific and Indian basins (to 10°E, 60°W and 40°N) and to 2° in the Atlantic Ocean. OFAM has 47 vertical levels, with 10 m resolution down to 200 m depth. The topography for OFAM is a composite of topography from a wide range of sources including dbdb2 (www7320.nrlssc.navy.mil/DBDB2_WWW/) and GEBCO (www.ngdc.noaa.gov/mgg/gebco/). Horizontal diffusion is zero. An ensemble optimal interpolation (EnOI) scheme is used for assimilating sea level anomaly from satellite altimetry, Argo profiles and satellite SST; for a detailed description of the assimilation scheme see Oke et al. (2008). The operational system, OceanMAPSv1.0b (Brassington et al. 2007), produces a 9-day hind-analysis and a 7-day forecast twice per week using surface fluxes from the Bureau of Meteorology global weather forecasts. Near-real-time altimetry is obtained from Jason-1 and Envisat, and real-time SST is obtained from AMSR-E. In situ observations are sourced from the GTS and the Argo GDACs, sorted for duplicates and automatically quality controlled. An operational graphical web service (http://www.bom.gov.au/oceanography/forecasts) and a data product service are supported at the ABoM.
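For readers less familiar with EnOI, the analysis step can be written generically as follows (a schematic form only, not the specific BODAS implementation, which adds localisation and other refinements described in Oke et al. 2008):

\[
\mathbf{x}^{a} = \mathbf{x}^{b} + \mathbf{K}\bigl(\mathbf{y} - H\mathbf{x}^{b}\bigr), \qquad
\mathbf{K} = \alpha\,\mathbf{B}_{e}H^{\mathrm{T}}\bigl(\alpha H\mathbf{B}_{e}H^{\mathrm{T}} + \mathbf{R}\bigr)^{-1}, \qquad
\mathbf{B}_{e} = \frac{\mathbf{A}'\mathbf{A}'^{\mathrm{T}}}{N-1},
\]

where x^b is the background (model) state, y the observation vector, H the observation operator, R the observation-error covariance, A' the matrix of anomalies from a static N-member ensemble and α a tunable scaling factor.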
17.2.2 MOVE/MRI.COM-WNP

MOVE/MRI.COM-WNP (Multivariate Ocean Variational Estimation system/Meteorological Research Institute Community Ocean Model—western North Pacific version) uses MRI.COM as its ocean model. In this z-coordinate model the layer thickness near the surface follows the surface topography (Hasumi 2006). For the nonlinear momentum advection, the generalized enstrophy-preserving scheme (Arakawa 1972) is used, based on the concept of diagonally upward/downward mass momentum fluxes along the sloping bottom. A biharmonic operator is used for horizontal turbulent mixing, and a biharmonic friction with a Smagorinsky-like viscosity (Griffies and Hallberg 2000) is used for momentum. The vertical viscosity and diffusivity are determined by the turbulent closure scheme of Mellor and Blumberg (2004). The model domain spans from 117°E to 160°W zonally and from 15°N to 65°N meridionally. The horizontal resolution is variable: it is 1/10° from 117°E to 160°E and 1/6° from 160°E to 160°W, and 1/10° from 15°N to 50°N and 1/6° from 50°N to 65°N. There are 54 levels in the vertical, with thickness increasing from 1 m at the surface to 600 m near the bottom. Oceanic states at the open boundaries are replaced by those from a North Pacific model (MOVE-NP) with a horizontal resolution of 1/2° (one-way nesting). A sea ice model with the thermodynamics of Mellor and Kantha (1989) and the elastic-viscous-plastic
rheology of Hunke and Dukowicz (2002) is also applied. For more detailed information on the model see Tsujino et al. (2006). The analysis scheme adopted in MOVE is a multivariate three-dimensional variational (3DVAR) analysis scheme with vertical coupled temperature-salinity (T-S) empirical orthogonal function (EOF) modal decomposition (Fujii and Kamachi 2003). In this system, the model domain is divided into 13 subregions and vertical T-S EOF modes are calculated from the observed T-S profiles for each subregion. The 3DVAR increments are inserted into the model temperature and salinity fields above 1,500 m using incremental analysis updates (Bloom et al. 1996). For more details see Usui et al. (2006). In situ temperature and salinity profiles, satellite sea surface height (SSH) anomaly and sea surface temperature (SST) are assimilated. The temperature and salinity data, including Argo data, are obtained from the Global Temperature-Salinity Profile Program (GTSPP: http://www.nodc.noaa.gov/GTSPP/). The SSHA data are the near-real-time along-track data of Jason-1 and ENVISAT obtained from Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO: http://www.jason.oceanobs.com/). The SST data are the Merged Satellite and In-situ Data Global Daily SST (MGDSST) produced by JMA. The assimilation run is implemented every five days, and the forecasting period is one month. The model is driven by wind stress and heat flux from the JMA Climate Data Assimilation System (JCDAS) in the assimilation run, and is forced by the output of the climate forecasting model in the forecast run.
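Schematically, and omitting the operational details of Fujii and Kamachi (2003) and Usui et al. (2006), a 3DVAR analysis of this kind minimises a cost function of the form

\[
J(\mathbf{z}) = \tfrac{1}{2}\,\mathbf{z}^{\mathrm{T}}\mathbf{B}^{-1}\mathbf{z}
+ \tfrac{1}{2}\,\bigl[H(\mathbf{x}^{b} + \mathbf{E}\mathbf{z}) - \mathbf{y}\bigr]^{\mathrm{T}}\mathbf{R}^{-1}\bigl[H(\mathbf{x}^{b} + \mathbf{E}\mathbf{z}) - \mathbf{y}\bigr],
\]

where the control vector z contains the amplitudes of the vertical coupled T-S EOF modes in each subregion, E maps these amplitudes back to three-dimensional temperature and salinity increments, x^b is the background state, y the observations, H the observation operator, and B and R the background- and observation-error covariances.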
17.2.3 JCOPE1,2

The ocean model in the JCOPE1 and JCOPE2 systems is based on the Princeton Ocean Model with a generalized sigma coordinate (POMgcs; Mellor et al. 2002). A high-resolution regional model with a spatial grid of 1/12° and 47 vertical levels is embedded in a low-resolution model covering the North Pacific region (30°S–62°N, 100°E–90°W) with a spatial grid of approximately 1/4° and 21 sigma levels. The inner model domain covers the western North Pacific (10.5°–62°N, 108°–180°E), and its lateral boundary conditions are determined from the basin-wide model using a one-way nesting method (Guo et al. 2003). The wind stress and heat flux fields are calculated from output of the 6-hourly NCEP Global Forecast System (NCEP-GFS) or the NCEP/NCAR reanalysis (Kalnay et al. 1996). Salinity at the ocean surface is restored to the monthly mean climatology (Levitus et al. 1994) with a time scale of 30 days. The following observational data are assimilated into the model: sea surface height anomaly (SSHA) obtained from the TOPEX/Poseidon and ERS-1 satellites from September 1999 to June 2002, and from Jason-1 and Geosat Follow-On from June 2002 to the present; sea surface temperature (SST) obtained from the Advanced Very High Resolution Radiometer/Multi-Channel Sea Surface Temperature (AVHRR/MCSST) Level 2 products; and vertical profiles of temperature and salinity obtained from GTSPP. The JCOPE1 system (Miyazawa et al. 2008a) uses a combination of optimum interpolation for horizontal
gridding of SSHA, SST, subsurface temperature and salinity data, and multivariate optimum interpolation for the creation of three-dimensional gridded temperature and salinity fields. The temperature and salinity data are introduced into the model using the incremental analysis update (IAU) method (Bloom et al. 1996). The JCOPE2 system (Miyazawa et al. 2008b) adopts the 3DVAR scheme with vertical coupled temperature-salinity empirical orthogonal function modal decomposition (Fujii and Kamachi 2003).
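The incremental analysis update used in both JCOPE systems spreads the analysis increment over a window of model steps rather than inserting it in a single step, which reduces initialisation shock. A minimal Python sketch of the idea (after Bloom et al. 1996) is given below; model_step, the window length and the toy state are hypothetical placeholders, not the JCOPE code.

```python
import numpy as np

def integrate_with_iau(state, increment, model_step, window_steps):
    """Advance the model over an IAU window while adding the analysis
    increment gradually (increment / window_steps per step) instead of
    inserting it all at once (Bloom et al. 1996)."""
    tendency = increment / window_steps
    for _ in range(window_steps):
        state = model_step(state) + tendency
    return state

# Toy usage with a trivial "model" that damps the state slightly each step.
state0 = np.zeros(4)
analysis_increment = np.array([1.0, 0.5, -0.2, 0.0])   # hypothetical increment
state_end = integrate_with_iau(state0, analysis_increment,
                               model_step=lambda x: 0.99 * x, window_steps=10)
```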
17.2.4 Kyoto University System

The numerical model used is the ocean general circulation model developed at Kyoto University (e.g., Toyoda et al. 2004), which employs a hybrid σ-z vertical coordinate to better simulate the free-surface motion of the ocean. To further enhance the representation of the upper-ocean circulation, the model adopts several sophisticated parameterizations, such as the Takano-Onishi scheme for the momentum equation (Ishizaki and Motoi 1999), a turbulence closure scheme for the mixed layer parameterization (Noh et al. 2005), the third-order scheme (Hasumi 2000) based on QUICKEST (Leonard 1979) for vertical advection, UTOPIA (Leonard et al. 1993) for horizontal advection, and the isopycnal mixing scheme (Gent and McWilliams 1990; Griffies 1998). The model basin covers the northwestern North Pacific, which includes the Sea of Japan and other marginal seas (see Fig. 17.1). The basic model resolution is 1/6° and 1/8° in longitude and latitude, respectively, with 78 vertical levels spaced from 4 m near the sea surface to 500 m at the bottom; 67 of these levels lie above 1,000 m depth. For the downscaling experiment focusing on the coastal circulation off Shimokita Peninsula, a triple nesting approach is employed, from the basic resolution (nest-1) to the finest resolution of 1/54° and 1/72° (nest-3) in longitude and latitude, respectively, via a medium resolution of 1/18° and 1/24° (nest-2). The nesting technique employed here is based on Oey and Chen (1992). The assimilated elements are satellite-derived SST and sea surface height (SSH) data, and in situ observations of temperature and salinity. The SST data are from the New Generation Sea Surface Temperature (NGSST) product produced at Tohoku University with a horizontal resolution of 1/20° and daily coverage. The SSH data are the SSalto/Duacs gridded absolute dynamic topography, which is the sum of the sea level anomaly derived from multi-altimeter data and the mean dynamic topography (Rio and Hernandez 2004), provided by AVISO at intervals of 3.5 days; the horizontal resolution is nearly 1/3°, and the SSH data are interpolated onto the model grid for assimilation. The in situ observations are obtained from GTSPP. In the 4D-VAR approach, optimized four-dimensional datasets are sought by minimizing a cost function in which the initial condition of the model variables is chosen as the control variable. The cost function is composed of a background term and an observational term. For more details, see Awaji et al. (2003) and Ishikawa et al. (2009).
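In generic form, with the initial condition x0 as the control variable (and omitting the system-specific weights and transforms described by Awaji et al. 2003 and Ishikawa et al. 2009), the cost function can be written as

\[
J(\mathbf{x}_{0}) = \tfrac{1}{2}\,(\mathbf{x}_{0} - \mathbf{x}^{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}_{0} - \mathbf{x}^{b})
+ \tfrac{1}{2}\sum_{i}\bigl[H_{i}\bigl(M_{0\rightarrow i}(\mathbf{x}_{0})\bigr) - \mathbf{y}_{i}\bigr]^{\mathrm{T}}\mathbf{R}_{i}^{-1}\bigl[H_{i}\bigl(M_{0\rightarrow i}(\mathbf{x}_{0})\bigr) - \mathbf{y}_{i}\bigr],
\]

where M_{0→i} denotes the model integration from the initial time to the time of observation batch i, H_i the corresponding observation operator, y_i the observations, and B and R_i the background- and observation-error covariances. Minimising J requires the gradient with respect to x0, which is obtained with the adjoint model.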
17â•… Overview of Regional and Coastal Systems
421
17.2.5 NMEFC System for Western North Pacific

The NMEFC's operational forecast system for the western North Pacific (99°–150°E, 2°–45°N) uses POM with a horizontal resolution of 1/4° by 1/4° and 15 vertical levels. The model is driven by forecasts from NMEFC's mesoscale atmospheric model. The data assimilation system utilizes OI for in situ observations (ship reports and Argo profiles) and a nudging scheme for MGDSST. The system produces a daily 3-day forecast of SST for the western North Pacific.
17.2.6 CAS Pre-operational System

A Chinese shelf/coastal seas (CSCS) model based on a three-dimensional hybrid-coordinate ocean model (HYCOM; Bleck 2002; Chassignet et al. 2003) is used to simulate the ocean circulation around China (Xiao and Zhu 2007). A curvilinear horizontal grid is utilized with an average horizontal resolution of 13 km, and there are 22 layers in the vertical. Using realistic topography, the model domain includes the whole CSCS and part of the western Pacific Ocean (see Fig. 17.1). The model is forced by the ECMWF 6-hourly reanalysis dataset (Uppala et al. 2005), and the lateral open boundary conditions are provided by an India-Pacific domain HYCOM simulation (1/4° resolution). The data assimilation system has two alternatives: an EnOI scheme and an ensemble Kalman filter (Wan et al. 2008). The observations used for assimilation include GHRSST products, Argo profiles from GTSPP and SSHA from AVISO. Xie et al. (2008) compared several GHRSST products in the study area.
17.2.7 YEOS

The Yellow Sea 3D ocean-ice prediction system is based on the DMI (Danish Meteorological Institute) BSHcmod, a coupled hydrostatic 3D ocean-ice model based on the primitive ocean equations, with a Hibler-type sea ice model (Dick et al. 2001). The baseline set-up operates on two nested, coupled grids, making it possible to zoom in on an area of interest with high resolution. Multi-sensor SST data are assimilated into the model daily using a simplified Kalman filter scheme (Larsen et al. 2007). The ocean model is driven by atmospheric forcing, river runoff and lateral boundary conditions. Along the open boundaries, sea level is specified as a combination of the tidal variation and the wind/pressure-driven surge, while temperature and salinity are obtained from a monthly climatology; outflowing water is stored in a buffer zone, which must be emptied before water with climatological properties is advected into the model domain. The hourly meteorological products, at 7.5 km horizontal resolution, are derived from the DMI operational NWP model HIRLAM. The 3D ocean model domain covers the Yellow Sea and part of the East China Sea, north of 33.5°N
and west of 127°E, with a horizontal resolution of 1/20° (latitude) by 1/15° (longitude) and 30 vertical levels. This is then embedded in a coarser-grid, two-dimensional external surge model of larger extent. The undisturbed top layer thickness is 6 m in order to avoid tidal drying of the top layer, which would make the model unstable. The layers below are 2 m thick, with layer thickness increasing towards the sea bed.
17.2.8 ESROM

ESROM (East/Japan Sea Regional Ocean Model), based on GFDL MOM3 (GFDL Modular Ocean Model version 3; Pacanowski and Griffies 1999), is driven by monthly mean wind stress from the ECMWF reanalysis and monthly mean heat fluxes calculated by bulk air-sea flux formulae using ECMWF reanalysis meteorological variables. The surface salinity of the model is restored to the WOA hydrographic data. For the open boundary condition, a radiation condition with a nudging term for inward boundary fluxes is applied to the tracers and barotropic currents (Marchesiello et al. 2001). The barotropic velocity through the Korea Strait is prescribed from the volume transport monitored by the submarine cable. Using a three-dimensional variational assimilation routine (Weaver and Courtier 2001), satellite sea surface temperature, sea surface height (SSH) anomaly and temperature profiles are assimilated. Kim et al. (2009) verified the performance of ESROM using an independent measurement dataset from Pressure-equipped Inverted Echo Sounders in the Ulleung Basin, located in the western part of the East/Japan Sea, and suggested that ESROM reproduces both the mesoscale variability and the general circulation well.
17.3 Highlighted Examples

17.3.1 Successful Prediction of the Kuroshio Large Meander

Some Japanese systems successfully predicted the Kuroshio large meander in 2004. Figure 17.2 shows the observations and forecasts of the main Kuroshio path in the summer of 2004. The forecasts were made by the MRI system.

Fig. 17.2 The Kuroshio large meander formation in 2004. The top (bottom) panels show the time sequence of the assimilated (predicted) current fields at 100 m depth. Vectors and shading denote the current vector and its magnitude. The prediction starts on 1 June 2004 using the assimilated field as the initial condition

17.3.2 Operational Forecast for the 2008 Olympic Games

An operational demonstration of a weather-ocean-wave forecast was carried out during August–September 2008 for the Beijing Olympic Games. The weather and ocean forecasts were made by DMI, and the wave forecast was made by the First Institute of Oceanography (FIO) using the FIO wave model WAM forced by DMI online weather data. The general results are shown on the YEOS website. An interesting event happened on the 49er medal course at 16:00 local time on 17 August 2008, when there was an abrupt change in the weather and sea state. The mast of the Danish sailors' boat was broken by a gust and rough seas; by borrowing a boat from the Croatian team, the Danes still won the gold medal. Figure 17.3 shows the forecast given by a fine-resolution model (2.5 km horizontal resolution) downscaled from the YEOS forecast.
Fig. 17.3 High-resolution wind forecast for 17 August 2008. Upper: wind time series (in Qingdao local time) at the race area of the 49er medal course of the 2008 Olympic Games in Qingdao; bottom: surface wind distribution at 08:00 GMT on 17 August 2008 in Qingdao waters

17.3.3 Prediction of Giant Jellyfish

Recently, large numbers of giant jellyfish have drifted from the East China Sea into the Japan Sea through the Tsushima Strait and migrated towards the Japanese coast along the coastal and offshore branches of the Tsushima Warm Current. In the 2005 season in particular, an exceptionally large number of jellyfish was observed over a long period (from early July until March), and serious damage to fisheries occurred. To reproduce and predict the distribution of the jellyfish in the Japan Sea, a numerical simulation of the jellyfish migration in 2005 was carried out using a data assimilation model. The assimilation model was the Japan Sea Forecasting System developed by RIAM, Kyushu University (Hirose et al. 2007). Passive tracers were released from both the east and west channels of the Tsushima Strait at depths of 0–22.5 m as artificial jellyfish. Based on sighting reports around Tsushima Island, the beginning of the tracer input was set to June 23. After August 16, the calculation was carried out using predicted current data. The reproduction of the northward migration of the jellyfish was very successful: the northward front position of the jellyfish was in good agreement with the sighting data to within a few days, as shown in Fig. 17.4.
Fig. 17.4 Sighting observations of giant jellyfish (left) and tracer forecasts (right)

17.3.4 Reproducing the Mode Transition of Coastal Waters off Shimokita Peninsula

The coastal waters off Shimokita Peninsula (see Fig. 17.5), with their complicated geography, reflect well the major features of short-term (a few days) to season-long circulations. A notable feature in this region is the vigorous interaction with the marginal seas, particularly with the Sea of Japan through the energetic Tsugaru Warm Current (TWC). In fact, the coastal currents off Shimokita Peninsula exhibit a seasonally varying circulation characterized by a transition of the TWC between two distinct patterns: a straight path along the east coast of Honshu Island in the cold season (hereafter the "straight-path mode") and a swirl-like circulation in the warm season (hereafter the "gyre mode"). In et al. (2008) conducted a hindcast experiment for the year 2003 using a triple nesting approach. The system successfully reproduced the observed transition from the straight-path mode to the gyre mode and the subsequent opposite transition. Figure 17.5 displays the time series of temperature distributions at 200 m depth, in which warm water of over 7°C, corresponding to TWC water, can be seen near Shimokita Peninsula in both the observations and the model. Interestingly, the warm water begins to extend offshore to form a gyre-like distribution.
Fig. 17.5 Time series of temperature distributions at 200 m depth: (left) model and (right) observation. (From In et al. 2008)

17.3.5 Non-tidal Coastal Sea Level for Flood Warnings and Port Management in Australia

Accurate and timely warnings of anomalous coastal sea level are critical for the coordination of emergency response and port management. Extremes in coastal sea level frequently occur along the Australian coastline through the combined effects of high tides, storm surge and many other local and non-local coastal processes across a range of time and space scales. During the austral winter-spring period, cold fronts from the Southern Ocean propagate over the Great Australian Bight (GAB) and other southern regions of Australia, producing large coastal surges which can be of O(1) m. The resulting surge can propagate along the southern coastline as a free or forced coastally trapped wave. An example from October 2007 is shown in Fig. 17.6, where the surge formed in the GAB propagated eastward to produce a recorded 0.7 m non-tidal sea level in the Gulf of St Vincent. The coastally trapped wave represented in BLUElink in this example has a wavelength of approximately 500 km and a period of approximately two days.
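As a rough consistency check on these numbers, the implied phase speed of the trapped wave is

\[
c \approx \frac{\lambda}{T} \approx \frac{500\times10^{3}\ \mathrm{m}}{2\times86400\ \mathrm{s}} \approx 2.9\ \mathrm{m\,s^{-1}},
\]

i.e., of the order of a few metres per second.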
Fig. 17.6 Sea level anomaly in the Great Australian Bight off South Australia at 21:00 on 29 October 2007 from the BLUElink OceanMAPSv1.0b operational system
Fig. 17.7 The root mean square error of 24-h averaged non-tidal sea level from BLUElink OceanMAPSv1.0b compared with Australian coastal tide gauges for a 6-day forecast period
The non-tidal sea level in coastal and shelf regions is frequently governed by a wide range of processes that are modelled in the BLUElink system, including storm surge, coastally trapped waves, boundary currents and eddies. BLUElink does not currently resolve tides, wind-waves, swell or river discharge. Evaluations of both the BLUElink reanalysis (Oke et al. 2008; Schiller et al. 2008) and the operational BLUElink OceanMAPSv1.0b (Brassington et al. 2007) indicate that the non-tidal sea level of these systems has low RMS errors and good correlations with coastal tide gauge (CTG) time series around the Australian coastline. Figure 17.7 shows the RMSE scores for the forecast system over the 6-day forecast period compared with Australian tide gauges. During the period where the atmospheric forecasts have skill, O(48) h, the non-tidal sea level forecasts have an RMSE ranging from 4 to 9 cm. At longer lead times the RMSE grows, indicating that the error has not reached saturation. Longer periods of skill can be obtained from the coastal ocean processes.
For example, coastally trapped waves generated in the GAB continue to propagate along the east coast of Australia, reaching northern Queensland approximately 5 days after the initial disturbance. Coastal boundary currents and coastal eddies also contribute to anomalous coastal sea level over longer periods, particularly where the continental shelf is narrow. BLUElink has demonstrated useful forecast skill by capturing the leading processes influencing non-tidal coastal sea level.
17.4 SST Predictability and Forecast Error Growth in China Marginal Seas

17.4.1 Circulation in China Marginal Seas and Model Bias

The seasonal circulation and dynamic processes of the marginal seas around China are mainly controlled by monsoonal wind forcing and the influence of western boundary currents. The monthly mean SST of the two monsoon seasons simulated by the model is illustrated in Fig. 17.8 for the BYS and ECS, together with climatological SST observations (China Ocean Press 1992). As one of the largest shelf seas in the world, the BYS and ECS have a total area of 1.25 million km2, most of which has water depths of less than 100 m. Flowing along the shelf break in the ECS, the Kuroshio carries warm, saline water from the tropics. In winter this warm water meets the cold shelf water that is formed by the huge heat loss to the atmosphere as the northerly monsoonal wind brings cold, dry air from the continent. A sharp SST front (the Kuroshio Front) therefore forms between the warm Kuroshio and the cold shelf water, and the SST distribution on the shelf closely follows the bathymetry. Two mechanisms have been proposed to explain this winter SST distribution pattern. One theory holds that horizontal advection plays an important role; one piece of evidence is that the northward warm tongue in the Yellow Sea is consistent with the location of the Yellow Sea Warm Current, which carries warm water from west of Cheju Island (see the review by Ichikawa and Beardsley 2002 for more discussion). An alternative mechanism proposed by Xie et al. (2002) is that water properties are well mixed down to 100 m due to intense surface cooling in winter; ocean depth thus has a strong influence on the SST of the continental shelf, leading to a remarkable collocation of warm tongues and deep channels. In winter the model simulation captures the main SST distribution features, such as the Kuroshio Front and the Yellow Sea warm tongue, because the driving factors mentioned above are well represented in the model. In summer, the SST distribution is much more homogeneous than in winter. The Kuroshio Front is still present but its intensity is much weaker than in winter. Most SSTs in the BYS are between 24 and 26°C, and SST generally increases towards the coast. Some local cold waters appear, especially near the southwest and northwest coasts of Korea, in the region of the seaward Changjiang runoff, in the coastal waters of China and in the northern Yellow Sea.
Fig. 17.8 Observed and simulated mean SST in the winter and the summer over the Yellow Sea and the East China Sea
In summer, it is known that tidal fronts are formed at the boundary between well-mixed and stratified regions by tide-induced mixing, and they are found along the west coast of Korea. Moon (2005) showed that tide-induced mixing is to a large extent responsible for the SST fronts along the west coast of Korea. Since our model does not currently include tides, the simulated SST misses the observed tidal fronts along the west coast of Korea. Generally there is a warm bias of about 2°C
in the simulation, while in summer the warm bias is about 1°C. Such a bias can cause serious problems when using the model to produce a forecast without assimilating observational data.
17.4.2 Hindcast Experiment

The aims of performing a hindcast experiment are two-fold. First, we can evaluate the forecast system and identify problems for subsequent trouble-shooting. Factors influencing the accuracy of an ocean forecast include errors in the forecast atmospheric forcing, the ocean initial condition and the model itself. Using an atmospheric reanalysis as forcing allows us to minimize the impact of errors in the atmospheric forcing and to examine the performance of the ocean model and its data assimilation scheme. This is a necessary step before the system is applied to operational forecasting. On the other hand, the results from such a hindcast experiment can help us understand more about ocean predictability. The hindcast experiment started at the beginning of 2006 and ended on the last day of 2006. A 7-day forecast was conducted every 3 days, with assimilation of the thinned FSTIA SST data once a day and the along-track SLA data once every 3 days. Figure 17.9 shows a schematic of the setup of the hindcast experiment. The forecast SST is evaluated against the same FSTIA SST data by calculating the root mean square errors (here we assume that the FSTIA SST data are the truth) over the whole model domain and three sub-domains of interest: BYS (Bohai/Yellow Sea), ECS (East China Sea) and SCS (South China Sea). To compare with the data-assimilative model forecast, we also made a simple statistical forecast using persistence as a trivial predictor; the SST hindcast errors are thus compared against posterior SST data and assessed with respect to the skill of persistence. The SST root mean square errors (RMSEs) over the BYS, ECS and SCS are displayed in Fig. 17.10, along with those of the persistence forecast.
Fig. 17.9 The hindcast experiment design and setup. The FSTIA daily SST product is assimilated every day at 00:00. Along-track Jason SLA data are assimilated every 3 days at 00:00. The hindcast experiment is run over the whole year of 2006

Fig. 17.10 The spatially averaged root mean square errors (unit: °C) of the SST hindcast and of persistence as a function of the forecast lead time over the four seasons and the whole year of 2006, for the Bohai/Yellow Sea, the East China Sea and the South China Sea
At the beginning of hindcasting, the RMSEs are zero for the persistence forecast, while the SST RMSEs in the model initial conditions are also small. As expected, this demonstrates the benefit of data assimilation for the accuracy of the initial conditions compared to model simulations without data assimilation. However, the misfit of the initial SST field to the data shows that there is room for further improvement in the data assimilation results. Over the whole model domain, the hindcast errors grow as the lead time increases. During the first two days the hindcast errors increase sharply from 0.3°C to 0.6°C, indicating that there are initial shocks to the model forecast following the assimilation, possibly due to the generation of gravity waves or other causes. The growth then slows down and the RMSE gradually reaches around 0.7°C. After 2–3 days the hindcast beats the persistence forecast, showing that the model adds value to the hindcast. The hindcast skills show a seasonal dependence, with the best skill in the fall season. The low hindcast skill in the BYS during summer is also obvious; the reasons are discussed in the next section. The model hindcasts also show different skills in different regions. In general, the order of skill from high to low is SCS > BYS > ECS, and the lowest hindcast skill is in the BYS during summer. This region- and season-dependence is also supported by the skill of persistence, which indicates that to some extent the model hindcast skills are influenced by the underlying dynamical processes in different regions and seasons and reflect the seasonal and regional short-term SST predictabilities. The results from two hindcast cases are shown in Figs. 17.11 and 17.12. During the period from June 21 to 27, SST over the ECS increases sharply, as indicated by the northward movement of the 24°C isoline near the coast (Fig. 17.11). A cooling event and its
Fig. 17.11 An example of an SST hindcast in June 2006. The scale of the color bar is °C
Fig. 17.12 The same as Fig. 17.11, but for another example in September 2006
hindcast are shown in Fig. 17.12: the 28°C isoline moves southward from September 8 to 14, and water with SST of less than 28°C covers the entire shelf area on September 14. The model hindcasts capture both events successfully while the persistence forecast cannot.
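To make the verification procedure concrete, a minimal sketch of computing the lead-time-dependent RMSE for the hindcasts and for the persistence baseline is given below. The array names and shapes are hypothetical; in the experiment described here the verification is done against the FSTIA analyses over the whole domain and each sub-domain.

```python
import numpy as np

def lead_time_rmse(hindcasts, analysis_at_init, truth):
    """
    hindcasts:        (n_cases, n_leads, ny, nx) SST hindcast fields
    analysis_at_init: (n_cases, ny, nx) analysed SST at lead time 0
                      (used as the persistence forecast for every lead time)
    truth:            (n_cases, n_leads, ny, nx) verifying SST fields
    Returns RMSE as a function of lead time for the hindcast and persistence.
    """
    hind_err = hindcasts - truth
    pers_err = analysis_at_init[:, None, :, :] - truth
    rmse = lambda err: np.sqrt(np.nanmean(err ** 2, axis=(0, 2, 3)))
    return rmse(hind_err), rmse(pers_err)

# Synthetic example: 100 cases, 7 lead days, 50x60 grid.
rng = np.random.default_rng(0)
truth = rng.normal(size=(100, 7, 50, 60))
hindcasts = truth + 0.5 * rng.normal(size=truth.shape)
analysis = truth[:, 0] + 0.2 * rng.normal(size=(100, 50, 60))
rmse_hind, rmse_pers = lead_time_rmse(hindcasts, analysis, truth)
```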
17.4.3 Hindcast Error Distributions

We further explore the predictability of SST in the marginal seas around China. It is beyond the scope of this paper to investigate all aspects of predictability; we focus only on identifying where the SST forecast errors grow fastest in summer and winter. Figure 17.13 shows the spatial distribution of the averaged RMSE of the 6-day hindcasts in winter and summer. The large errors in winter are mainly located along the Kuroshio path in the ECS, in the Luzon Strait, off the Vietnam coast along 110°E and in the Taiwan Strait. This explains why the hindcast skills are lower in the ECS than in the SCS and BYS, as shown in Fig. 17.10. In summer, the large errors are mainly located in coastal and shelf regions and northeast of Taiwan, and thus cause the low hindcast skill in the BYS during summer. Because the mixed layer is very shallow in the BYS (about 8 m according to Chu et al. 1997) and strong stratification exists beneath the mixed layer in summer, the SST there is very sensitive to the atmospheric forcing and shows large day-to-day changes. The atmospheric forcing errors could be one of the main factors that cause the large hindcast errors in the BYS during summer via the
Fig. 17.13 The spatial distributions of the averaged RMSE of the 6-day hindcasts. The scale of the color bar is °C. The errors in water shallower than 40 m are masked out due to relatively large errors in the validation data
thermal effect. Errors in the wind have a strong impact on the coastal SST hindcast through upwelling and downwelling processes. Another reason may be the lack of tides in the model; as mentioned before, tidal mixing has a large impact on the summer SST distribution in the BYS. Apart from the errors in the atmospheric forcing and the ocean mixing, errors in the horizontal advection also have a large impact on the hindcast skills, since the horizontal advection is determined by the inner product of the surface current vector and the SST spatial gradient. Figure 17.14 shows the spatial distribution of the absolute
Fig. 17.14 The spatial distributions of the temporally averaged absolute values of the local SST spatial gradients over the winter and the summer of 2006. The SST analysis is used to calculate the gradient. The scale of the color bar is 10−4 °C/m
values of the temporally averaged local SST spatial gradients over the winter and the summer of 2006. In winter, a strong gradient appears over the shelf between Hainan Island and Taiwan, along the Kuroshio path associated with the shelf break in the ECS, in the Changjiang River estuary, and in the area off the southwest coast of Korea. In summer the gradient is weaker than in winter and is strongest mainly around the Korean coast. The front patterns agree well with previous studies based on multi-year remote sensing SST data (Hickox et al. 2000; Wang et al. 2001). The strong SST gradients agree well with the large hindcast error distributions in Fig. 17.13, especially along the Kuroshio path and over the shelf between Hainan Island and Taiwan. However, off the east coast of Vietnam and in the Luzon Strait, large hindcast errors exist even though the SST gradient is only moderate; considering the strong currents there (e.g., Fig. 2 of Li et al. 2010), errors in the horizontal advection may also cause large hindcast errors.
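To make this diagnostic concrete, the SST gradient magnitude and the advective term u·∇SST can be estimated on a regular latitude-longitude grid roughly as follows (a sketch with hypothetical inputs, not the diagnostic code used to produce Fig. 17.14).

```python
import numpy as np

def sst_gradient_and_advection(sst, u, v, lat, lon):
    """
    sst, u, v: 2-D fields on a regular lat-lon grid (sst in degC, u/v in m/s).
    lat, lon:  1-D coordinate arrays in degrees.
    Returns |grad(SST)| in degC/m and the advection term u.grad(SST) in degC/s.
    """
    r_earth = 6.371e6
    dy = r_earth * np.deg2rad(np.gradient(lat))          # metres per grid step (lat)
    dx = (r_earth * np.cos(np.deg2rad(lat))[:, None]
          * np.deg2rad(np.gradient(lon))[None, :])       # metres per grid step (lon)
    dsst_dj, dsst_di = np.gradient(sst)                  # index-space derivatives
    dsst_dy = dsst_dj / dy[:, None]
    dsst_dx = dsst_di / dx
    grad_mag = np.hypot(dsst_dx, dsst_dy)
    advection = u * dsst_dx + v * dsst_dy
    return grad_mag, advection
```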
17.5 Summary and Outlook

During the GODAE period, several regional operational and pre-operational systems have been developed in the Asia-Oceania region. These systems have demonstrated their usefulness by providing routine services for public, government and commercial users, or by successfully forecasting/hindcasting high-impact (socially, economically and scientifically) events. All these regional systems have strong connections with the GODAE products. The Argo and GHRSST datasets are essential inputs for the initialization of these forecast systems, and the large-scale GODAE products are also used to provide lateral boundary conditions (e.g., MOVE-NP). A 15-year ocean reanalysis from BLUElink has proved to be useful for engineering design; for example, the modelling of internal waves through downscaling assisted the planning of the successful search for HMAS Sydney, which was sunk during World War II. The BLUElink operational forecasts have also been demonstrated to have good skill in forecasting a wide range of phenomena, including extreme coastal sea level, anomalous currents impacting offshore oil and gas operations, anomalous heat content over the North West Shelf influencing continental rainfall, and many other processes. Most of these systems have been developed in GODAE-related projects. For example, the 3D ocean forecasting system DMI BSHcmod has been continuously developed in projects such as MERSEA and ECOOP; the MERSEA Baltic Sea forecasting system is based on the same model. It remains a challenging task to further develop the existing systems from a science perspective (De Mey et al. 2009). Establishing more observation networks, increasing model resolution, adding sea ice models, using more advanced data assimilation and coupling with atmospheric models are among the near-future activities. For example, in the near future, JMA will introduce an assimilation scheme for sea ice
concentration to MOVE/MRI.COM-WNP, which would yield some improvements not only in the sea ice extent but also in the ocean state of the subarctic region, especially in the Okhotsk Sea. JMA is also planning to develop a coastal ocean modelling/assimilation system using a high-resolution model with a horizontal resolution of a few kilometres, intended for possible operational use in JMA's forecasting and warning systems for the coastal region of Japan. BLUElink, through a follow-on research project, will also introduce an upgraded reanalysis and operational prediction system (mid-2010) and a new coupled regional ocean-atmosphere system for tropical cyclone prediction. The global prediction system will extend the eddy-resolving region to include the Indian Ocean and South Pacific and will also include advances in data assimilation, the initialization scheme and atmospheric fluxes. Another challenge, coming from high-level decision-makers, is how to further apply the achievements of Asia-Oceania operational activities. The Indian Ocean is relatively less covered by these regional systems. The GOOS-CLIVAR effort to establish the Indian Ocean Observing System (IndOOS) will improve the situation. The Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA), a new multi-national observational network designed for the Indian Ocean and a subset of IndOOS, similar to TAO (Pacific) and PIRATA (Atlantic), aims to address outstanding scientific questions related to Indian Ocean variability and the monsoons; 22 of the 46 mooring sites are already occupied (McPhaden et al. 2009). On the other hand, Southeast Asian countries also urgently require such services for storm-surge forecasting, coastal engineering, disaster prevention and related applications, and some existing systems such as YEOS are ready to be extended to cover the entire northwest Pacific coastal/shelf seas. There are encouraging signs that more Asia-Oceania countries plan to develop operational systems. A study on a pre-operational oceanographic system will be funded by the Ministry of Land Transport and Ocean Affairs of Korea from next year; it will start to produce data products of coastal and environmental forecasts for the coastal waters around Korea. From a series of SST hindcast experiments in the China marginal seas, we found that several follow-up tasks are necessary. The initial conditions provided by data assimilation appear to have room for further reducing the analysis misfit to the data. The causes of the visible initial shocks after the assimilation should be further investigated; apart from the generation of gravity waves, other causes should also be considered. Counillon and Bertino (2009) found a data assimilation set-up that produces little noise, damped within two days, when the model is pulled strongly towards observations; part of the noise is caused by density perturbations in the isopycnal layers, or artificial cabbeling. Because their model is also HYCOM, their results are very suggestive. Tide-induced mixing has a strong impact on the thermal field in the BYS, and a higher-resolution model setup than the present one, including tides, is now running and will be used to perform new hindcast experiments.

Acknowledgements Part of this lecture note comes from Zhu et al. (2008), to which the co-authors Toshiyuki Awaji, Gary B. Brassington, Norihisa Usui, Naoki Hirose, Young Ho Kim, Qinzheng Liu, Jun She, Yasumasa Miyazawa, Tatsuro Watanabe and M. Ravichandran contributed greatly.
References

Arakawa A (1972) Design of the UCLA general circulation model. Numerical simulation of weather and climate, Technical Report No. 7. Department of Meteorology, University of California, Los Angeles, p 116
Awaji T, Masuda S, Ishikawa Y, Sugiura N, Toyoda T, Nakamura T (2003) State estimation of the North Pacific Ocean by a four-dimensional variational data assimilation experiment. J Oceanogr 59:931–943
Bleck R (2002) An oceanic general circulation model framed in hybrid isopycnic-Cartesian coordinates. Ocean Model 4:55–88
Bloom SC, Takacs LL, da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Weather Rev 124:1256–1271
Brassington GB, Pugh T, Spillman C, Schulz E, Beggs H, Schiller A, Oke PR (2007) BLUElink> Development of operational oceanography and servicing in Australia. J Res Pract Inf Technol 39:151–164
Chassignet EP, Smith LT, Halliwell GR, Bleck R (2003) North Atlantic simulations with the Hybrid Coordinate Ocean Model (HYCOM): impact of the vertical coordinate choice, reference pressure, and thermobaricity. J Phys Oceanogr 33:2504–2526
Chen D, Busalacchi AJ, Rothstein LM (1994) The roles of vertical mixing, solar radiation, and wind stress in a model simulation of the sea surface temperature seasonal cycle in the tropical Pacific Ocean. J Geophys Res 99:20345–20359
China Ocean Press (1992) Marine atlas of Bohai Sea, Yellow Sea, and East China Sea, hydrology. China Ocean Press, Beijing, p 524
Chu PC, Fralick CR Jr, Haeger SD, Carron MJ (1997) A parametric model for the Yellow Sea thermal variability. J Geophys Res 102(C5):10499–10507
Counillon F, Bertino L (2009) Ensemble optimal interpolation: multivariate properties in the Gulf of Mexico. Tellus-A 61:296–308
De Mey P, Craig P, Kindle J, Ishikawa Y, Proctor R, Thompson K, Zhu J, CSSWG (2009) Applications in coastal modeling and forecasting. Oceanography 22(3):198–205
Dick S, Kleine E, Müller-Navarra S, Klein H, Komo H (2001) The operational circulation model of BSH (BSHcmod). Model description and validation. Berichte des Bundesamtes für Seeschiffahrt und Hydrographie Nr. 29/2001, Hamburg, Germany, p 48
Fujii Y, Kamachi M (2003) Three-dimensional analysis of temperature and salinity in the equatorial Pacific using a variational method with vertical coupled temperature-salinity empirical orthogonal function modes. J Geophys Res 108(C9):3297. doi:10.1029/2002JC001745
Gent PR, McWilliams JC (1990) Isopycnal mixing in ocean circulation models. J Phys Oceanogr 20:150–155
Griffies S, Hallberg RW (2000) Biharmonic friction with a Smagorinsky-like viscosity for use in large-scale eddy-permitting ocean models. Mon Weather Rev 128:2935–2946
Griffies SM (1998) The Gent–McWilliams skew flux. J Phys Oceanogr 28:831–841
Griffies SM, Harrison MJ, Pacanowski RC, Rosati A (2004) A technical guide to MOM4. GFDL Ocean Group Technical Report No. 5, NOAA/Geophysical Fluid Dynamics Laboratory, p 339
Guo X, Hukuda H, Miyazawa Y, Yamagata T (2003) A triply nested ocean model simulating the Kuroshio—roles of horizontal resolution on JEBAR. J Phys Oceanogr 33:146–169
Hasumi H (2000) CCSR Ocean Component Model (COCO) version 2.1. CCSR Report No. 13
Hasumi H (2006) CCSR Ocean Component Model (COCO) version 4.0. Report No. 25. Center for Climate System Research, The University of Tokyo
Hickox R, Belkin I, Cornillon P, Shan Z (2000) Climatology and seasonal variability of ocean fronts in the East China, Yellow and Bohai Seas from satellite SST data. Geophys Res Lett 27(18):2945–2948
Hirose N, Kawamura H, Lee HJ, Yoon JH (2007) Sequential forecasting of the surface and subsurface conditions in the Japan Sea. J Oceanogr 63:467–481
Hunke EC, Dukowicz JK (2002) The elastic–viscous–plastic model for sea ice dynamics. Mon Weather Rev 130:1848–1865
Ichikawa H, Beardsley RC (2002) The current system in the Yellow and East China Seas. J Oceanogr 58:77–92
In T, Ishikawa Y, Shima S, Nakayama T, Kobayashi T, Togawa T, Awaji T (2008) A triple nesting approach toward the improved nowcast/forecast of coastal circulation off Shimokita Peninsula (to be submitted)
Ishikawa Y, Awaji T, Toyoda T, In T, Nishina K, Nakayama T, Shima S, Masuda S (2009) High-resolution synthetic monitoring by a 4-dimensional variational data assimilation system in the northwestern North Pacific. J Mar Syst 78(2):237–248
Ishizaki H, Motoi T (1999) Reevaluation of the Takano-Oonishi scheme for momentum advection on bottom relief in ocean models. J Atmos Ocean Technol 16:1994–2010
Kalnay E et al (1996) The NCEP/NCAR 40-year reanalysis project. Bull Am Meteorol Soc 77:437–471
Kim YH, Chang KI, Park JJ, Park SK, Lee SH, Kim YG, Jung KT, Kim K (2009) Comparison between a reanalyzed product by the 3-dimensional variational assimilation technique and observations in the Ulleung Basin of the East/Japan Sea. J Mar Syst 78:249–264
Larsen J, Høyer JL, She J (2007) Validation of a hybrid optimal interpolation and Kalman filter scheme for sea surface temperature assimilation. J Mar Syst 65:122–133
Leonard BP (1979) A stable and accurate convective modeling procedure based on quadratic upstream interpolation. Comput Methods Appl Mech Eng 19:59–98
Leonard BP, MacVean MK, Lock AP (1993) Positivity-preserving numerical schemes for multidimensional advection. NASA Technical Memorandum 106055, ICOMP-93-05
Levitus S, Burgett R, Boyer TP (1994) World Ocean Atlas, vol 3, Salinity. NOAA Atlas NESDIS 3, United States Department of Commerce, Washington
Li XC, Zhu J, Xiao YG, Wang RW (2010) A model-based observation thinning scheme for assimilation of high resolution SST in the shelf and coastal seas around China. J Atmos Ocean Technol 27:1044–1058
Marchesiello P, McWilliams JC, Shchepetkin A (2001) Open boundary conditions for long-term integration of regional oceanic models. Ocean Model 3:1–20
McPhaden MJ, Meyers G, Ando K, Masumoto Y, Murty VSN, Ravichandran M, Syamsudin F, Vialard J, Yu L, Yu W (2009) RAMA: the Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction. Bull Am Meteorol Soc 90:459–480
Mellor GL, Blumberg A (2004) Wave breaking and ocean surface layer thermal response. J Phys Oceanogr 34:693–698
Mellor G, Kantha L (1989) An ice-ocean coupled model. J Geophys Res 94:10937–10954
Mellor GL, Hakkinen S, Ezer T, Patchen R (2002) A generalization of a sigma coordinate ocean model and an intercomparison of model vertical grids. In: Pinardi N, Woods JD (eds) Ocean forecasting: conceptual basis and applications. Springer, New York, pp 55–72
Miyazawa Y, Kagimoto T, Guo X, Sakuma H (2008a) The Kuroshio large meander formation in 2004 analyzed by an eddy-resolving ocean forecast system. J Geophys Res 113:C10015. doi:10.1029/2007JC004226
Miyazawa Y, Komatsu K, Setou T (2008b) Nowcast skill of the JCOPE2 ocean forecast system in the Kuroshio-Oyashio mixed water region (in Japanese with English abstract and figure captions). J Mar Meteorol Soc (Umi to Sora) 84(2):85–91
Moon I-J (2005) Impact of a coupled ocean wave–tide–circulation system on coastal modeling. Ocean Model 8:203–236
Noh Y, Kang YJ, Matsuura T, Iizuka S (2005) Effect of the Prandtl number in the parameterization of vertical mixing in an OGCM of the tropical Pacific. Geophys Res Lett 32:L23609. doi:10.1029/2005GL024540
Uppala SM, Kallberg PW et al (2005) The ERA-40 re-analysis. Q J R Meteorol Soc 131:2961–3012
Oey LY, Chen P (1992) A nested-grid ocean model—with application to the simulation of meanders and eddies in the Norwegian Coastal Current. J Geophys Res 97:20063–20086
Oke PR, Schiller A, Griffin DA, Brassington GB (2005) Ensemble data assimilation for an eddy-resolving ocean model of the Australian region. Q J R Meteorol Soc 131:3301–3311
Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink Ocean Data Assimilation System (BODAS). Ocean Model 21:46–70
Pacanowski RC, Griffies SM (1999) MOM 3.0 manual. http://www.gfdl.noaa.gov/~smg/MOM/web/guide_parent/guide_parent.html
Rio MH, Hernandez F (2004) A mean dynamic topography computed over the world ocean from altimetry, in situ measurements, and a geoid model. J Geophys Res 109:C12032. doi:10.1029/2003JC002226
Schiller A, Oke PR, Brassington GB, Entel M, Fiedler R, Griffin DA, Mansbridge J (2008) Eddy-resolving ocean circulation in the Asian-Australian region inferred from an ocean reanalysis effort. Prog Oceanogr 76:334–365
Toyoda T, Awaji T, Ishikawa Y, Nakamura T (2004) Preconditioning of winter mixed layer in the formation of North Pacific eastern subtropical mode water. Geophys Res Lett 31:L17206. doi:10.1029/2004GL020677
Tsujino H, Usui N, Nakano H (2006) Dynamics of Kuroshio path variations in a high-resolution general circulation model. J Geophys Res 111:C11001. doi:10.1029/2005JC003118
Usui N, Tsujino H, Fujii Y, Kamachi M (2006) Short-range prediction experiments of the Kuroshio path variabilities south of Japan. Ocean Dyn 56:1616–7341
Wan L, Zhu J, Bertino L, Wang H (2008) Initial ensemble generation and validation for ocean data assimilation using HYCOM in the Pacific. Ocean Dyn 58:81–99. doi:10.1007/s10236-008-0133-x
Wang D, Liu Y, Qi Y, Shi P (2001) Seasonal variability of thermal fronts in the northern South China Sea from satellite data. Geophys Res Lett 28(20):3963–3966
Weaver A, Courtier P (2001) Correlation modelling on the sphere using a generalized diffusion equation. Q J R Meteorol Soc 127:1815–1846
Xiao Y, Zhu J (2007) Numerical simulation of circulations in coastal and shelf seas around China using a hybrid coordinate ocean model. Technical report (in Chinese)
Xie S, Hafner J, Tanimoto Y, Liu WT, Tokinaga H, Xu H (2002) Bathymetric effect on the winter sea surface temperature and climate of the Yellow and East China Seas. Geophys Res Lett 29(24):2228. doi:10.1029/2002GL015884
Xie JP, Zhu J, Yan L (2008) Assessment and inter-comparison of five high resolution sea surface temperature products in the shelf and coastal seas around China. Cont Shelf Res 28:1286–1293
Zhu J, Awaji T, Brassington GB, Usui N, Hirose N, Kim YH, Liu Q, She J, Miyazawa Y, Watanabe T, Ravichandran M (2008) Asia and Oceania applications. In: Proceedings of the final GODAE symposium, pp 359–372. Available from the GODAE website
Chapter 18
System Design for Operational Ocean Forecasting

Gary B. Brassington
Abstract The scientific and technical advances in ocean modelling, ocean data assimilation and the ocean observing systems over the past decade have made the grand challenge of ocean forecasting an achievable goal, with the implementation of the first-generation systems (Dombrowsky et al. 2009). Implementation of these components into a truly operational forecasting system introduces a number of unique constraints that can lead to reduced performance. These practical constraints, such as limitations in the coverage and quality of critical components of the ocean observing systems in real time, as well as the constraint of completing forecast integrations within a fixed schedule, are unavoidable for any forecast system and require additional strategies to achieve robustness and maximise performance. We begin by defining commonly used terms such as operational and forecasting in this context. We then review the design choices that can be taken with each component of an ocean prediction system when implemented as an operational system to achieve the most reliable performance.

G. B. Brassington () Centre for Australian Weather and Climate Research, Bureau of Meteorology, Melbourne, Australia, e-mail: [email protected]

A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_18, © Springer Science+Business Media B.V. 2011

18.1 Introduction

Operational ocean forecasting systems have been established over the past decade by several agencies and institutions (Dombrowsky et al. 2009). Hurlburt et al. (2009) provide an appraisal of key developments over this period. These systems employ a wide variety of techniques (Kamachi et al. 2004; Cummings 2005; Brasseur et al. 2005; Martin et al. 2007; Oke et al. 2005, 2008), largely due to the maturing state of the science. None of these techniques are theoretically optimal as defined by the use of a 4D variational scheme (Lorenc 2003) or an ensemble Kalman filter (Evensen 2003). However, the computational cost of eddy-resolving models, which precludes the use of 4DVar and EnKF approaches, together with the poor knowledge of the
441
442
G. B. Brassington
background error covariances that apply in the ocean, has led to a wide variety of sub-optimal approaches being employed.

Guiding principles for good design can be found in many quotations, of which we cite three. The first of these is referred to as the law of the instrument and is attributed to Abraham Maslow: "When the only tool you have is a hammer, it is tempting to treat everything as if it were a nail". The law of the instrument is a warning to scientists and engineers who need to improve existing systems that many of the design choices were based on the methods and techniques known at the time. All design choices are constrained by those methods and should be regularly questioned and reviewed. The second quotation is a warning against reductionism and is attributed to Albert Einstein: "Make things as simple as possible, but not simpler". All components of the ocean prediction system contain assumptions that reduce the problem to simpler elements that offer advantages, e.g., methods of solution. All assumptions that reduce the parameter space of the system are true only under defined conditions, e.g., the Boussinesq, hydrostatic and incompressible assumptions. A thorough knowledge of these assumptions and the conditions under which they hold is critical when re-applying methods or systems to new applications. Conversely, all the advantages of a new, efficient method are of no use if it does not solve the target problem to within the required precision. The third and final quote is the antithesis of the previous quote and again is attributed to Albert Einstein: "Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius—and a lot of courage—to move in the opposite direction". This quote is particularly relevant to present systems, as the trend is toward higher model resolution, more complex data assimilation methods, ensemble forecasting and coupled physical models. It serves as a reminder to pause and justify before automatically introducing greater system complexity. This trend scales with the improvement in computing system performance and is likely to continue.

A good analogy for operational ocean forecasting design today is that of the chronometer invented by John Harrison (Sobel 1995). Visit the museum in Greenwich, London, and you will see an incredible piece of design and art called H1 (see Fig. 18.1a). This was designed by John Harrison to solve the longitude problem by
Fig. 18.1  a The H1 clock and b the chronometer designed by John Harrison to solve the longitude problem
producing a clock that could perform accurately at sea and claim the significant monetary prize. One cannot help but admire the quality of the design and the achievement of this clock. However, this particular clock was abandoned by John Harrison after 17 years of development as he realised how he could improve it, eventually arriving at a pocket-sized device called the chronometer (see Fig. 18.1b). Operational oceanography today is analogous to the H1: it functions as it was designed to and contains many novel and elegant solutions, but it remains far from where it will be over the coming decades in terms of its techniques and, importantly, its reliable performance.

In this chapter, we begin by offering definitions for commonly used terms related to ocean forecasting, specifically identifying properties unique to operational forecasting. We then provide a short overview of applications for ocean forecasting and common servicing requirements influencing design. Section 18.4 introduces the system elements of an ocean forecasting system, which is followed by an expanded discussion of each of these elements with particular emphasis on the properties of each component that influence the system design. This includes the real-time observing system (Sect. 18.5), the real-time forcing system (Sect. 18.6), modelling (Sect. 18.7), data assimilation (Sect. 18.8), initialisation (Sect. 18.9), the forecasting cycle (Sect. 18.10) and system performance (Sect. 18.11). Throughout, we have highlighted aspects of an operational system that require design choices to be made and are of general interest to system design. By way of demonstration, examples are drawn from specific systems with the cautionary note that these may or may not be general practice. The majority of the examples are drawn from the BLUElink Ocean Model, Analysis and Prediction System (OceanMAPS), which is noted throughout. We end with a short conclusion.
18.2 Definitions

The initial development of all forecasting systems is performed under hindcast conditions (see Table 18.1). In many respects hindcasts attempt to mimic the forecast environment; however, many of the conditions that occur in real-time are difficult to reproduce and are not necessarily normally distributed, e.g., drop-outs in satellite products (see Fig. 18.2). Alternatively, it is often desirable to determine the statistical performance of a system operating under ideal conditions, which sets the upper bound in performance. In practice, this level of performance only occurs when the forecast conditions approach the ideal.

Table 18.1  Definitions of terms frequently used in reference to assimilated ocean model states
Hind-analysis: Best estimate using optimal methods and maximum information
Hindcast: Behind-real-time simulation of forecasts, i.e., model initialisation from a hind-analysis followed by a model projection. Hindcasts are typically performed under ideal conditions and represent an upper bound in forecast performance
Nowcast: Estimate of the state and circulation at real-time that can be used as a persistence forecast
Forecast: Prediction of the state and circulation beyond real-time
Fig. 18.2  Observation retrievals from AMSR-E (-) ascending and (…) descending swaths obtained at the Bureau of Meteorology between 4 January 2009 and 30 June 2009
In the design of a forecast system, the performance of the system under less-than-ideal conditions is of equal importance. This frequently requires additional strategies to minimise impacts and achieve the highest possible lower bound. For this reason it is critical to use the terminology of forecast and hindcast systems appropriately and to define the conditions of the system accurately.

The term operational is frequently used with a wide variety of working definitions but, interestingly, also has a specific philosophical heritage (see http://plato.stanford.edu/entries/operationalism/). A useful working definition was outlined during the development of EuroGOOS (Prandle and Flemming 1998). The term as it applies to operational forecasting is summarised in Table 18.2 as relating to real-time services delivered routinely and robustly.

Table 18.2  Definitions for the meaning of operational as they apply to world meteorological agencies
Real-time: System and products targeting nowcasts and forecasts
Routine: Performs to a regular schedule
Robust: Technological: high-end computing and communications with designed failovers and fit-for-purpose scheduling. Scientific: detect and mitigate changes in system state to ensure minimum impact on performance
Consistent: Consistently achieving the designed performance
Many operational centres measure success against the delivery of services 24/7. A considerable amount of resources is expended to achieve the 99.99% up-time typical of a WMO agency. Consistency in the quality of the services is also critical to design choices.
18.3 Applications

Prior to designing any system it is important to define the applications to be targeted and the service requirements that need to be met. This is critical to the design of both the observing system and the forecasting system. However, operational oceanography has not rigorously followed this idealised approach. Operational oceanography was initiated as an experiment, the Global Ocean Data Assimilation Experiment (Smith and Lefebvre 1997), motivated by the opportunity presented by the new and expanding global ocean observing system, particularly with the introduction of satellite altimetry. The many sectors that could potentially benefit from ocean forecasting services were more or less known at that time. However, the specific applications and the forecast skill requirements were not known.

Several properties of the applications influence the design of ocean forecast systems and the impact of their services; these are summarised in Table 18.3. They include the type of application, its social or economic value, the sophistication of the user community and the service requirements. A subset of the potential applications is represented in Fig. 18.3. Figure 18.3a, b represents an upwelling event that took place off the Bonney coast in South Australia on 10 February 2008. Upwelling frequently impacts local marine ecosystems, bringing nutrient-rich water into the photic zone and resulting in a chlorophyll bloom that is observable by ocean colour (Fig. 18.3b, 30 March 2008). Upwelling can also have a stabilising effect on the local atmospheric boundary layer, reducing the transfer of momentum to the surface. Upwelling can occur very rapidly and therefore can be absent from atmospheric forecasts that persist SST boundary conditions. This specific event resulted in a forecast failure where strong winds were forecast but local observers experienced weak winds. This led to a complaint by local tourist operators who had cancelled ocean cruises. Upwelling events can also be associated with sea fog, which is difficult to observe using either infrared radiation (e.g., the Advanced Very High Resolution Radiometer (AVHRR)) due to the presence of fog, or microwave sensors (e.g., the Advanced Microwave Scanning Radiometer—EOS (AMSR-E)) due to the coarse resolution (~25 km/pixel) and interference near coastlines. A dynamical forecast is required to generate the cool SSTs in response to the wind, which can be observed; however, the precision of the forecast SSTs is difficult to validate.

Marine accident and emergency services for ships (Fig. 18.3c) and oil wells (Fig. 18.3f), including airborne and ship-based salvage operations (Fig. 18.3g), are an obvious application of ocean forecasting services. However, the
Table 18.3  Properties of the applications and user communities that impact the design choices in ocean forecasting systems
Application types: Ad hoc in time and space (e.g., search and rescue, marine accidents and emergencies, defence); planning and management (e.g., fisheries by-catch, marine park management); engineering/industrial (e.g., offshore oil and gas, ship routing, renewable energy); global and continuous (e.g., weather, wave, ecosystem forecasting); coastal shelf (e.g., ports management, bilge discharge, coastal surge); public good (e.g., recreational fishing, diving, swimming, sailing)
Social and economic value: Life-, safety- or security-threatening; property damage; marine health; economic value and energy
User community sophistication: Are the users structured and coordinated? Are the service needs well-defined? Are the impacts of ocean services understood? Capacity to interpret ocean products and add value; capacity to monitor and assess impacts; capacity to engage in a relationship and provide usable feedback
Service requirements: Hindcasts; short-, medium- and long-range forecasts; performance thresholds; sensitivity to error; sensitivity to extremes; observational requirements; timeliness and frequency of forecast products
requirements for skilful Lagrangian trajectories have been difficult to achieve. The present and near-future global ocean observing system is unlikely to be sufficient to meet the needs of these applications over the next decade (Hackett et al. 2009; Davidson et al. 2009; Rixen et al. 2009; Brassington et al. 2010a). A characteristic of these events is that they occur infrequently, at ad hoc locations, and are localised, making them suitable for short-term, intensive observing deployments through the use of gliders, AUVs, drifting buoys and similar platforms.

An atmospheric feature that is common to the respective east coasts of Australia, Brazil and the United States is the formation of rapidly intensifying extratropical cyclones (see Fig. 18.3d). These storms are sometimes referred to as bombs due to their severity and impacts. The event in June 2007, made famous by the grounding of the cargo ship Pasha Bulker, also resulted in loss of life due to flooding in Newcastle. On the east coast of Australia these storms form when a cut-off low of cold dry air moves over a warm, moist marine boundary layer, leading to vertical convection and a positive feedback in the atmosphere of convergence of the upper-layer potential vorticity (McInnes et al. 1992).
Fig. 18.3  A collage of applications that require real-time forecast services. a Forecast SSTs for a coastal upwelling event off South Australia. b The ocean colour response to the same event. c Oil washed onshore in Queensland due to a leak from the Pacific Adventurer. d Modelled 10 m winds of an east coast cyclone off the NSW coast. e The modelled SST conditions associated with the event. f Oil discharge from the Montara oil well. g Ship-based salvage operations for the same event. h Surge along the Derwent River in Tasmania and i the forecast sea level for this event
The ocean heat content along these coastlines is highly variable due to the turbulent western boundary currents that transport warm, fresh water from the tropics to higher latitudes. The modelled SST shown in Fig. 18.3e exhibits a temperature front in the same position as the storm. The warm SSTs were maintained by a warm-core anticyclonic eddy (Brassington 2010; Brassington et al. 2010b). An ocean forecast system can provide forecasts of the heat content conditions, with potential for use in coupled forecasts.

High sea level along the coast is typically associated with the coincidence of tides and storm surge. Forecasting systems are typically based on so-called storm-surge models local to the event, which estimate risk in combination with tides and sea level pressure. Simulations of non-tidal sea level in ocean forecasting systems can also be affected by other oceanographic processes, such as remotely generated coastally trapped waves and impinging warm boundary currents. For example, a high sea level event in the Derwent River (see Fig. 18.3h) resulted from a local storm and a large-amplitude coastal trapped wave propagating from South Australia. A characteristic of the coastal trapped wave is the high sea level in Bass Strait (see Fig. 18.3i). Regional forecasters did not issue a warning because their traditional methods of computing sea level did not account for the remote contribution. Ocean forecast systems have the potential to provide total sea level forecasts.

In each of these applications the oceanographic conditions play an important role for which accurate forecasts can provide valuable information. Detailed analysis of these and other similar cases can identify the relevant oceanographic variables and the sensitivity to error, from which performance requirements can be derived. In these examples SST, heat content, surface currents and sea level are directly relevant, which accounts for four of the five prognostic variables in a hydrostatic ocean general circulation model. It is important to note, though, that their forecasts depend on knowledge of all prognostic variables. National agencies and institutions are regularly engaged with local users and have opportunities to acquire this information. The JCOMM Expert Team on Operational Ocean Forecast Systems (ET-OOFS) is tasked with providing international coordination to generalise this information into observational and service requirements.
18.4 System Elements

All operational ocean forecasting systems available today follow a similar sequential and cyclic structure, which involves handling the latest observational data, performing a model-data fusion, and performing a model forecast to generate data products including ocean state estimates, performance diagnostics and error estimates. This sequential procedure is repeated on a regular schedule or performed on an ad hoc basis, e.g., triggered by a specific event. The system diagram for the BLUElink OceanMAPS system is shown in Fig. 18.4. This includes retrieval and archival storage of observations, surface fluxes, and model- and data-assimilation-dependent data files.
Fig. 18.4  A schematic diagram of the system elements for an operational ocean forecasting system. (Based on the BLUElink OceanMAPS; Brassington et al. 2007)
A Supervisor Monitor Scheduler (SMS), developed at the European Centre for Medium-Range Weather Forecasts (ECMWF; see http://www.ecmwf.int/products/data/software/sms.html), or equivalent software, is implemented at an operations centre to control the job flow, monitoring the successful completion of dependent system components. The data and file handling are performed on servers, whilst the large-memory and computationally intensive tasks of data assimilation and model integration are submitted to high-end supercomputing systems. The performance of the computing environment and the level of optimisation that can be achieved with the software are critical to the design of ocean forecasting systems. Eddy-resolving ocean forecast systems are at the high end of supercomputing applications, both for the prognostic model and for the data assimilation inversion. The total wall-clock time and the computing resources available in an operations centre are limited and are managed among several other forecast systems. The efficiency of the software and the consistency of the completion times of different components have important impacts on design. For example, the computational cost of a data assimilation system scales with the size of the inversion problem. Targeting a reduction in cost may compromise the number of observations processed (through super-observations or thinning), require the implementation of localisation, or limit the specification of the
background error covariance. Similarly, the cost of an ocean model scales with the number of grid points/cells and the time-step constraint for numerical stability. Targeting a specific cost limit will compromise the horizontal/vertical resolution of the model or the area of high resolution.

The system described above accounts for the majority of the science and technical design for an ocean forecasting service. However, there are several further steps in the provision of a quality service to end users. These include infrastructure for robust data product dissemination and forecaster guidance, as well as support services for specifying user requirements and evaluating impacts. These important steps are not discussed further here.
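The cyclic job flow can be illustrated with a minimal dependency-driven sketch; the task names and the retry-free logic below are hypothetical stand-ins for what an SMS (or ecFlow) suite definition would express, not the OceanMAPS configuration.

```python
# Minimal sketch of a dependency-driven forecast cycle with hypothetical
# task names; an operational suite expresses this in SMS/ecFlow definitions.
from collections import deque

# Each task lists the tasks that must complete successfully before it runs.
DEPENDENCIES = {
    "retrieve_obs": [],
    "retrieve_fluxes": [],
    "quality_control": ["retrieve_obs"],
    "data_assimilation": ["quality_control", "retrieve_fluxes"],
    "initialisation": ["data_assimilation"],
    "model_forecast": ["initialisation"],
    "product_generation": ["model_forecast"],
    "archive_and_disseminate": ["product_generation"],
}

def run_cycle(run_task):
    """Run tasks in dependency order; run_task(name) returns True on success."""
    completed, failed = set(), set()
    queue = deque(DEPENDENCIES)
    while queue:
        task = queue.popleft()
        deps = DEPENDENCIES[task]
        if any(d in failed for d in deps):
            failed.add(task)            # an upstream failure propagates downstream
        elif all(d in completed for d in deps):
            (completed if run_task(task) else failed).add(task)
        else:
            queue.append(task)          # dependencies not ready yet, re-queue
    return completed, failed

# Example: pretend every task succeeds.
done, bad = run_cycle(lambda name: True)
```

In practice the scheduler also enforces the fixed wall-clock schedule and triggers fail-over configurations when a component does not complete in time.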
18.5 Real-Time Observing System

The global ocean is now observed by a growing number of instruments and platforms, each with specific properties, some common and some unique, that impact the design of operational ocean forecasting. These properties are summarised in Table 18.4 and include timeliness, coverage, expected errors and quality. The relative immaturity of ocean instrumentation and infrastructure leads to more frequent system failures in practice compared with numerical weather prediction. System failures are frequently random and unpredictable, though the sensitivity of the forecasting system to failures in the observing system is measurable. Strategies to minimise the impact need to be considered in the system design. For a more detailed discussion of the ocean observing system refer to Le Traon (2011) and Ravichandran (2011).

Table 18.4  Properties of the real-time ocean observing system that result in unique design choices in ocean forecasting systems
Timeliness: How close to real-time are the observations received? Are delayed products available with higher quality?
Coverage: What is the minimum/maximum coverage? How homogeneous is the coverage?
Observation error estimation: Instrument error; representation error
Quality control: Does the product include quality flags? Valid tests for the observation error model; non-normal behaviour; instrument failures, communication and system failures
18.5.1 In Situ—Profiles

The ocean state is routinely profiled in real-time by Conductivity-Temperature-Depth (CTD) sensors from traditional platforms such as ships and moorings and from
relatively new platforms such as autonomous Argo floats and gliders. In addition, eXpendable BathyThermographs (XBTs) are operated from volunteer ships and reported in real-time. The sampling by in situ measurements has increased significantly over the past decade, and coverage has improved in regions that have historically been poorly sampled, such as the Indian Ocean and the Southern Ocean.

The Argo array is now the dominant source of in situ sampling, having largely achieved its target density of one float per 3° × 3° over the global ocean with >3000 autonomous floats. Each float profiles the ocean water column from ~2000 m to the surface every 10 days, reporting in real-time at the surface via ARGOS or Iridium (Roemmich et al. 2010). A user guide to the range of Argo data products available and the server access points is given online (http://www.argo.ucsd.edu/Argo_Date_Guide.html). Observations are retrieved by a network of Data Assembly Centres (DACs), which are responsible for performing an automatic quality control procedure and distributing the observations to both the WMO Global Telecommunication System (GTS) and the two Global DACs (GDACs). The DACs also perform an objective quality control in delayed mode. Profiles that pass the automatic quality control are reported in real-time to the GTS, without quality control information, in TESAC format. A fast-mode product is available from the GDAC servers within 3 days in a format that contains the quality control flags and the native observations on pressure coordinates. Other important CTD profiles are obtained from the mooring arrays TAO/TRITON (Pacific; McPhaden et al. 2001), PIRATA (Atlantic; Bourles et al. 2008) and, more recently, RAMA (Indian; McPhaden et al. 2009). These moored arrays report in real-time, multiple times per day, onto the GTS. Increasingly, gliders are being used to adaptively sample the ocean; however, the data acquisitions are not yet coordinated internationally in the same way as Argo and lack a common real-time quality control procedure, integration with the GTS and other DAC/GDAC product delivery. XBTs are maintained along specific ship routes, and sampling is constrained by the frequency of the volunteer ships that occupy each route (Goni et al. 2010). XBTs provide high vertical resolution profiles of temperature and depth at regular spacing along the ship route. Profiles, subsampled in the vertical, are reported on the GTS without quality control flags. The profiles are subsequently subjectively quality controlled by a number of centres using a common set of procedures (Bailey et al. 1994).

As an example, the number of profiles retrieved at the Bureau of Meteorology each day from the GTS and the two Argo GDACs (Coriolis and USGODAE) between 15 January 2010 and 1 March 2010 is shown in Fig. 18.5. The GTS consistently reports ~1200 profiles per day, although the most recent retrievals show an increase in the number of observations due to shallow coastal observations in the USA. The GDACs report on average 300 profiles per day, corresponding to the expected number of Argo floats surfacing each day. The numbers of profiles retrieved from the two GDACs do not correlate, and they are clearly not simply mirror sites. Coriolis also frequently has bursts of profiles, which largely contain old profiles that a DAC has subjectively quality controlled. The best daily set of observations available in near real-time is obtained by sorting amongst the three sources. Ideally the three sources should contain a maximum of three duplicates of the same profile, which must
Fig. 18.5  The number of ocean profiles retrieved daily between 15 January 2010 and 1 March 2010 from the GTS (purple), Coriolis (blue) and USGODAE (green), and the number of duplicate-free profiles (red)
be reduced to one. The best profile is determined to be the one that has both the most complete set of observations and the maximum set of quality control tests applied. The number of profiles obtained each day from the duplicate-checking procedure is shown in Fig. 18.5 in red. The best daily observations provide consistently ~1200 profiles per day. The decline in profiles near real-time shows the impact of timeliness, with a small percentage of the total profiles obtained several days behind real-time. An algorithm developed at the Bureau of Meteorology (Brassington et al. 2007) to select the best profiles replaces profiles obtained from the GTS with more complete profile information, particularly quality control, obtained from the GDACs. A typical example of the timeliness, volume and source of the profiles obtained from that system is shown in Fig. 18.6 for 13 September 2009. Within the first two days behind real-time, the number of profiles is dominated by those obtained from the GTS. Within the first day, GTS profiles begin to be replaced by profiles from the GDACs. In the third and subsequent days, profiles from the GDACs continue to replace those obtained from the GTS. The number of profiles replaced declines as the time behind real-time increases.
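The duplicate-reduction step can be illustrated with a minimal sketch; the profile record and the ranking below are simplified assumptions and omit the additional checks of the operational algorithm (Brassington et al. 2007).

```python
# Minimal sketch of duplicate reduction across GTS and GDAC retrievals.
from dataclasses import dataclass

@dataclass
class Profile:
    platform_id: str          # WMO ID or float ID
    obs_time: str             # observation time, e.g. "2009-09-13T06:00Z"
    source: str               # "gts", "coriolis" or "usgodae"
    n_levels: int             # number of reported depth/pressure levels
    n_qc_tests: int           # number of quality-control tests applied

def select_best(profiles):
    """Keep one profile per (platform, time): the most complete, best-QC'd copy."""
    best = {}
    for p in profiles:
        key = (p.platform_id, p.obs_time)
        incumbent = best.get(key)
        # Prefer more vertical levels, then more QC tests applied.
        if incumbent is None or (p.n_levels, p.n_qc_tests) > (incumbent.n_levels,
                                                              incumbent.n_qc_tests):
            best[key] = p
    return list(best.values())

# Example: a GDAC copy with QC flags replaces the GTS (TESAC) copy.
candidates = [Profile("5901234", "2009-09-13T06:00Z", "gts", 60, 0),
              Profile("5901234", "2009-09-13T06:00Z", "coriolis", 71, 9)]
print([p.source for p in select_best(candidates)])   # ['coriolis']
```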
18.5.2 Satellite SST

Sea surface temperature is the ocean state variable most frequently observed by satellite, with multiple sensors and multiple orbits. Microwave sun-synchronous and infrared geostationary platforms provide higher coverage, whilst the infrared polar-orbiting missions provide the highest resolution and accuracy in cloud-free conditions. There
Fig. 18.6  The profiles received in real-time on 13 September 2009 that represent the best profile available from previous retrievals, and the source of each profile
are several known limitations to the use of observed SST for ocean forecasting, including diurnal warming and skin effects. Specific algorithms are required to perform quality control relevant to the foundation temperature (refer to the online definitions maintained by the Global High Resolution Sea Surface Temperature (GHRSST) science team, http://www.ghrsst.org/SST-Definitions.html). Foundation temperature refers to the near-surface temperature of the ocean excluding diurnal skin effects. In practice, observations are withheld from the analysis as being impacted by diurnal skin effects based on the time of day and the magnitude of the 10 m winds as a proxy for near-surface mixing (Donlon et al. 2002). The algorithms do not attempt to correct the temperature values for any diurnal effects; therefore the daytime temperatures will include a small residual bias. Night-time observations of SST are also affected by a cool-skin effect; however, this is a relatively small perturbation compared with daytime biases. Algorithms use a smaller constraint on atmospheric winds at night, resulting in greater coverage. Therefore the night-time foundation SSTs represent a more robust estimate and offer greater coverage compared with daytime products. The majority of ocean forecasting systems at present do not explicitly represent the diurnal skin layer, which requires a vertical resolution of <1 m. Therefore the temperature in the top model grid cell, as well as the statistical covariance over the surface layer, is compatible with foundation temperature products. It should be noted, however, that some ocean models are implementing finer surface resolution in order to represent a portion of the diurnal variability throughout the forecast. Such models require a more sophisticated strategy to constrain the diurnal variability with observations.
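A minimal sketch of such a screening rule is given below; the wind threshold and the local-time window are illustrative values only, not the thresholds of Donlon et al. (2002) or of any operational system.

```python
# Minimal sketch of screening SST retrievals for a foundation-temperature analysis.
def accept_for_foundation_sst(local_hour, wind_speed_10m,
                              night_start=22, night_end=6, wind_threshold=6.0):
    """Return True if the retrieval is unlikely to carry a diurnal warm-layer signal."""
    is_night = local_hour >= night_start or local_hour < night_end
    well_mixed = wind_speed_10m >= wind_threshold     # wind speed as a mixing proxy
    # Night-time retrievals are kept; daytime retrievals only under strong winds.
    return is_night or well_mixed

# Example: a mid-afternoon retrieval under light winds is withheld.
print(accept_for_foundation_sst(local_hour=15, wind_speed_10m=2.5))   # False
print(accept_for_foundation_sst(local_hour=2,  wind_speed_10m=2.5))   # True
```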
Microwave sensors observe SST through cloud but not through precipitation, and therefore offer improved coverage over infrared away from the inter-tropical convergence zones. The microwave bands reduce the resolution of SST observations to ~25 km/pixel for AMSR-E. This resolution is comparable to (approximately half) the resolution of present forecast system grids. However, interference from land boundaries reduces performance within two pixels (or ~50 km) of the coastline. AMSR-E therefore does not observe the part of the continental shelf with the highest temperature variability and offers limited coverage of straits and gulfs. The orbit of AMSR-E on Aqua is sun-synchronous, providing a daytime (ascending) and a night-time (descending) equator crossing for swath observations. The percentage of observations from AMSR-E that provide valid foundation temperatures is shown in Fig. 18.7. The night-time observations (Fig. 18.7a, c) provide greater coverage than the daytime observations (Fig. 18.7b, d), as expected, for both austral summer and winter. Note that there is a specific swath line that appears to offer lower coverage; this is an artefact of the difference between the period of the satellite orbit (>24 h) and the 24 h period of the Earth's rotation. Both descending and ascending swaths show reduced coverage over the inter-tropical convergence zones and monsoon regions, though the position of these changes with season. In the high latitudes, SST coverage is near 100% up to the ice edge, where the atmospheric conditions are high winds and dry air. Foundation SST from AMSR-E must remove all pixels
Fig. 18.7  Percentage of days observed by AMSR-E for austral seasons and ascending (asc)/descending (desc) orbits: a summer, desc; b summer, asc; c winter, desc; and d winter, asc
contaminated by precipitation up to a chosen threshold. In some applications where maximum coverage is essential, a higher threshold can be used. However, for ocean forecasting (foundation temperature) a more conservative approach is important. The so-called Level-2 Pre-processed (L2P) product (refer to http://www.ghrsst.org/L2P-Observations.html) provides all of the necessary fields to diagnose and select the threshold for ocean forecast applications.

The NOAA AVHRR series has been sustained as an operational platform with wide-swath infrared sensors and multiple satellites in sun-synchronous orbits. NAVOCEANO provides a merged, foundation-temperature, swath L2P product available in near real-time. The resolution (~1 km) is finer than that of current and near-future ocean forecast systems. This permits the construction of super-observations (e.g., Lorenc 1981; Purser et al. 2000) that have reduced representation error, increasing their weighting in the analysis. The higher resolution also provides observations over the continental shelf and gulf regions compared with microwave sensors. An observation error for the foundation temperature can be constructed to account for residual diurnal signals, based on the time from the nearest local night-time, as well as an age penalty for the time from the analysis time (Andreu-Burillo et al. 2009).
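A minimal sketch of compacting swath retrievals into super-observations with an age-penalised observation error is given below; the grid spacing, base error and penalty coefficient are illustrative assumptions rather than values from any operational system.

```python
# Minimal sketch of constructing SST super-observations on a target grid and
# inflating the observation error with an age penalty.
import numpy as np

def super_obs(lon, lat, sst, age_days, dlon=0.1, dlat=0.1,
              base_error=0.4, age_penalty=0.1):
    """Average swath retrievals into dlon x dlat cells; return cell means and errors."""
    ix = np.floor(lon / dlon).astype(int)
    iy = np.floor(lat / dlat).astype(int)
    cells = {}
    for i, j, t, a in zip(ix, iy, sst, age_days):
        cells.setdefault((i, j), []).append((t, a))
    out = []
    for (i, j), members in cells.items():
        temps, ages = zip(*members)
        n = len(temps)
        # Averaging n retrievals reduces the random error component by sqrt(n);
        # the age penalty grows with time elapsed since the observations.
        error = base_error / np.sqrt(n) + age_penalty * float(np.mean(ages))
        out.append(((i + 0.5) * dlon, (j + 0.5) * dlat,
                    float(np.mean(temps)), error))
    return out   # list of (lon, lat, sst, obs_error)

# Example: three nearby retrievals collapse to one super-observation.
print(super_obs(np.array([150.01, 150.04, 150.07]),
                np.array([-35.02, -35.05, -35.08]),
                np.array([18.2, 18.4, 18.3]),
                np.array([0.5, 0.5, 1.0])))
```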
18.5.3 Satellite Altimetry

Remotely sensed satellite altimetry observes a broad spectrum of dynamical processes including tides, wind-waves and swell, and steric anomalies. Steric anomalies relate to the changes in height from the vertical integral of specific volume anomalies relative to the background. Vertically coherent specific volume anomalies are prominent in ocean eddies, which can have relatively warm and/or fresh cores relative to the surrounding ocean state, leading to positive height anomalies, or relatively cool and/or salty cores, leading to negative height anomalies. Analyses of merged altimetry have revealed that 50% of the variability of the world ocean is accounted for by eddies with height anomalies of 5–25 cm and diameters of 100–200 km (Chelton et al. 2007). The speed of propagation of the majority of eddies ranges from 2.5 to 12.5 cm/s, with westward propagation ±10° (Chelton et al. 2007). In regions where the geostrophic turbulence is more active, such as near western boundary currents, the eddy propagation speeds can transiently exceed 40 cm/s (Brassington 2010) and eddies can develop height anomalies in excess of 25 cm and diameters in excess of 200 km (see Fig. 18.8).

Recovering sea surface height anomalies from satellite altimetry requires precise estimation of a large number of corrections (Chelton 2001). For example, the sea surface height anomaly (ssha) is recovered from Jason1 by the following equation (Desai et al. 2003):
ssha = (orbit − (range_ku + iono + dry + wet + ssb)) − (mss + setide + otide + pole + invbar) + bias    (18.1)
where range_ku refers to the range delay for the Ku-band, and iono, dry, wet and ssb refer to range corrections for the ionosphere, the dry and wet troposphere, and the sea state bias.
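As a worked illustration of Eq. 18.1, the corrections simply combine linearly; the function below mirrors the names of the terms, assumes consistent units (metres), and is a hedged sketch rather than an AVISO or Jason processing routine.

```python
# A worked sketch of Eq. 18.1: argument names mirror the terms of the equation.
def ssha(orbit, range_ku, iono, dry, wet, ssb,
         mss, setide, otide, pole, invbar, bias):
    """Sea surface height anomaly from the altimeter range and its corrections."""
    corrected_range = range_ku + iono + dry + wet + ssb      # range corrections
    geophysical = mss + setide + otide + pole + invbar       # geophysical surfaces and tides
    return (orbit - corrected_range) - geophysical + bias
```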
Fig. 18.8  a An example of one day of altimetry passes from Envisat, Jason1 and Jason2 for 1 January 2010 in the Australian region. b ±2 days of altimetry passes about 1 January 2010 overlaying the corresponding background sea level anomaly in the Tasman Sea from OceanMAPS. c Same as b but for ±5 days
The terms mss, setide, otide, pole and invbar refer to the geophysical effects of the mean sea surface, the solid Earth tide, the ocean and load tide, the pole tide and the inverse barometer response. Bias is a correction term resulting from calibration of the orbits. The mean sea surface, or geoid, is estimated from the time mean of orbit tracks repeated for several years to a precision of 1 km. It is for this reason that the repeat missions Jason1 and Jason2 were put into the same orbits as TOPEX/Poseidon (Robinson 2006). Ocean tidal harmonics are known and can be estimated to high precision with inverse methods (Le Provost 2001). The errors attributed to the TOPEX/Poseidon and Jason-class missions are 3 cm, to the ERS, Envisat and Sentinel missions 6 cm, and to GFO 10 cm (Robinson 2006). The precision that can be achieved by merging the Jason-class and ERS missions is 5 cm (Ducet et al. 2000). Future altimetry missions, the HY-2 series from China, SARAL with the Ka-band altimeter (AltiKa) from an Indian consortium, and CryoSat, have as yet unknown errors but are able to obtain improved errors through calibration against the Jason series. All altimeters launched to date have been nadir-viewing instruments. The spatial and temporal scales that are resolved by these missions are therefore determined by the spatial and temporal coverage offered by the satellite orbit. For many of the corrections it is essential that a non-sun-synchronous, repeat orbit pattern be used. The repeating polar orbits used are a trade-off between the period between repeat orbits, the equator separation between adjacent passes and the latitudinal range (inclination). The Jason series have a repeat orbit of 9.92 days and a pass separation of
156.6 km (254 passes/cycle) and a latitudinal range of ±66.15°. The ERS/Envisat/Sentinel series use a retrograde sun-synchronous orbit with a repeat period of 35 days, a pass separation of 79.9 km (501 passes/cycle) and a latitudinal range of ±81.45° (inclination 98.55°). The combination of multiple satellite missions is critical to providing improved temporal and spatial coverage to support SLA analyses and operational ocean forecasting (Ducet et al. 2000; Pascual et al. 2009). At present we have Jason1 and Jason2 in a tandem orbit and Envisat delivering near real-time products. An example of the passes obtained for a single day (1 January 2010) in the Australian region from these three missions is shown in Fig. 18.8a. The coverage per day is sparse compared with the spatial scales of the error covariances used in ocean forecasting, which scale with the order of eddies, ~100 km (Oke et al. 2005, 2008; Martin et al. 2007; Brasseur et al. 2005; Cummings 2005). A larger observation window is employed in all operational systems in order to increase the spatial coverage and improve the quality of the least squares analysis. Examples of the coverage for a 5-day window and an 11-day window, overlaid on a background of SLA from the OceanMAPS system for the Tasman Sea, are shown in Fig. 18.8b, c. A 5-day window shows gaps in coverage that are comparable to or larger than the spatial scale of the ocean eddies. An 11-day window provides full coverage from Jason1 and Jason2 and partial coverage from Envisat, and offers spatial coverage that is comparable to the scales of the ocean eddies (see Fig. 18.8c).

The average altimetry coverage in the Australian region has been estimated for 1° × 1° bins for single and multiple missions available in near real-time (see Fig. 18.9). The along-track observations have been normalised by thinning to a sampling rate of ~1 observation per 50 km, which corresponds to a skip of 8 for Jason1 and Jason2 (i.e., 8 × 5.78 km ≈ 46 km) and a skip of 6 for Envisat (i.e., 6 × 7.53 km ≈ 45 km). The thinning can be interpreted as the scale that might be used to construct so-called super-observations (e.g., Lorenc 1981; Purser et al. 2000). This is a formal method for compacting observations to reduce the redundancy of the raw observations relative to the target scales, which in this case are chosen to be 1° × 1° bins. Super-observations have a number of beneficial properties in practice, including increasing the homogeneity of coverage, reducing the observation space (i.e., computational cost) and improving the condition of the matrix inversion in an analysis (see Daley 1991, p. 111). The average coverage obtained by the multi-satellite missions is a function of the orbit properties described earlier. In practice the coverage is also impacted by periods of communication failures and satellite manoeuvres or equipment failovers. This is evident in the coverage for Envisat (see Fig. 18.9b), which is impacted by the loss of satellite passes during maintenance between 12 and 27 November 2009 (approximately half a repeat orbit period). The average coverage obtained from Jason1, Jason2 and Envisat (see Fig. 18.9d) over the open ocean ranges between 0.2 and 0.7 observations per 1° × 1° bin per day, with a mean coverage of ~0.44. The coverage in the coastal regions is reduced in all cases and is affected by the quality control of observations and the proportion of the 1° × 1° bin that is seawater. The average coverage of Jason1 (see Fig. 18.9a) does not exceed ~0.45. The
Fig. 18.9  The average SLA observations per 1° × 1° bin per day over the period 1 January 2009 to 1 March 2010 obtained from a Jason1, b Envisat, c Jason1 and Jason2, and d Jason1, Jason2 and Envisat. The along-track observations have been normalised to ~1 observation/50 km
tandem mission of Jason1 and Jason2 (see Fig. 18.9c) shows the overall improvement in the spatial distribution of coverage compared with Jason1 alone. The normalised frequency distribution of SLA observation coverage corresponding to each of Fig. 18.9a–d is plotted in Fig. 18.10. Due to the relatively coarse orbit sampling of Jason1, 23% of the 1° × 1° bins are not sampled at all. With the introduction of the tandem missions Jason1 and Jason2, the number of 1° × 1° bins that are not sampled drops to ~8%. The Envisat mission samples virtually all of the bins. The mode of each distribution curve is 0.15, 0.2, 0.35 and 0.5 obs. per bin per day for Envisat; Jason1 (ignoring the zero peak); Jason1 and Jason2; and Jason1, Jason2 and Envisat, respectively. The number of obs. per bin per day never exceeds 0.73 for the three altimeters. The distribution shows that 50% of bins in the Australian region have a coverage of better than 0.15, 0.15, 0.3 and 0.45 obs. per bin per day for Envisat; Jason1; Jason1 and Jason2; and Jason1, Jason2 and Envisat, respectively.

Sea level anomaly products are processed in two to three modes, depending on the satellite, which vary in quality and timeliness. The quality is determined by the
Fig. 18.10  Normalised frequency distribution of altimetry observations per 1° × 1° bin per day for the Australian region and the satellite combinations shown in Fig. 18.9
quality to which the Geophysical Data Record (GDR) is estimated, as well as the precision of the other correction terms. Precise orbit positions are determined some time after real-time (e.g., 60 days) and are only relevant to hindcasting. Interim GDRs (IGDRs) target a faster orbit determination that is less accurate but can be delivered within 2–3 days (Jason series) or 3–5 days (Envisat). For the Jason series, additional on-board satellite instrumentation allows an Operational GDR (OGDR) product to be delivered within 24 h of real-time. Due to an instrument failure on Jason-1 the OGDR was unavailable, but it has since been restored on the AVISO server. Following the launch of Jason-2 this product is now also available. A summary of events related to operational satellite altimetry can be found online from AVISO (http://www.aviso.oceanobs.com/no_cache/en/data/operational-news/index.html). In summary, the complete orbit of the Jason-1 and Jason-2 IGDR products is available between 3 and 12 days behind real-time, the complete orbit of the Envisat IGDR product is available between 5 and 40 days behind real-time, and the Jason-2 OGDR product is available 1–10 days behind real-time. Due to the reduced quality of the IGDR and OGDR products, as well as their timeliness, it has been determined that the analysis performance from four real-time altimeters is equivalent to that from two delayed-mode altimeters (Pascual et al. 2009).
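The coverage statistics shown in Figs. 18.9 and 18.10 can be reproduced in outline by thinning the along-track data and accumulating counts in 1° × 1° bins; the sketch below uses synthetic input positions and an illustrative thinning factor.

```python
# Minimal sketch of the coverage statistic: thinned along-track observations
# accumulated into 1-degree bins and divided by the number of days.
import numpy as np

def coverage_per_bin(lon, lat, n_days, skip=8,
                     lon_range=(90, 180), lat_range=(-75, 16)):
    """Mean number of (thinned) observations per 1-degree bin per day."""
    lon_t, lat_t = lon[::skip], lat[::skip]          # thin to ~1 obs / 50 km
    nx = int(lon_range[1] - lon_range[0])
    ny = int(lat_range[1] - lat_range[0])
    counts, _, _ = np.histogram2d(
        lon_t, lat_t, bins=[nx, ny],
        range=[list(lon_range), list(lat_range)])
    return counts / n_days                           # obs per bin per day

# Example with synthetic along-track positions over 60 days.
rng = np.random.default_rng(0)
lon = rng.uniform(90, 180, 20000)
lat = rng.uniform(-75, 16, 20000)
print(float(coverage_per_bin(lon, lat, n_days=60).mean()))
```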
18.6 Real-Time Forcing System

The ocean is a forced dissipative system, where the forcing is largely through the flux of mass, heat and momentum through the air-sea interface. Atmospheric fluxes are available from operational numerical weather prediction systems that are mature and provide robust and consistent performance. However, the performance of atmospheric fluxes is relatively low compared with the state variables, due to the limited direct observations of fluxes and errors in boundary conditions. The properties that influence the selection of atmospheric flux products and flux parameterisations for ocean prediction are summarised in Table 18.5.

Table 18.5  Properties of the atmospheric flux products that impact the ocean forecasting system
Real-time surface fluxes: Robust, well-defined and consistent performance; period, resolution and region of forecast systems (global, regional, sub-regional); forecast skill curve; boundary conditions (persisted SST, surface roughness, land-sea-ice masks); atmospheric boundary layer, cloud and radiation physics; observational constraints (e.g., scatterometry)
Hindcast fluxes: Performance during data assimilation
Flux parameterisation: Fixed boundary condition flux products; forecast atmospheric state with dynamic ocean boundary conditions; coupled air-sea or air-wave-sea
Ocean dynamics: Sensitivity of the ocean state to surface fluxes; sensitivity of ocean forecast error to surface flux errors

The ocean's relatively large inertia and thermal inertia compared with the atmosphere mean that on short timescales air-sea fluxes are a relatively small perturbation to the ocean state at the surface, and this perturbation decays with depth. Even under extreme conditions, such as tropical cyclones, the surface temperature change in the cold wake has been observed to be between 1°C and 6°C (Price 1981), and the majority of the temperature change is due to entrainment and mixing of the ocean water masses in response to the momentum fluxes rather than to the surface heat flux. The momentum flux local to the atmospheric winds is largely transferred into gravity waves, which radiate from the source region. Local momentum transfer from high winds occurs through Langmuir circulation (McWilliams et al. 1997), wave breaking (Melville 1996) and wave dissipation, which persists during the wind event and is a function of wave age (Drennan et al. 2003). A large fraction of the energy radiates away and dissipates through small-scale turbulence and topographic interactions in locations remote from the winds.

In the coastal region, the reduced volume of seawater is more sensitive to atmospheric fluxes. Storm surge and coastal trapped waves (e.g., coastal Kelvin waves) are a result of horizontal mass flux towards the coast as an Ekman response to the applied wind stress and lower atmospheric pressure (see Fig. 18.3h, i). Coastal upwelling of denser, often cool and nutrient-rich water masses is a response to mass flux away from the coast from an applied wind stress in the opposing direction (see Fig. 18.3a, b).
The coastal region also has less heat capacity due to its reduced depth and is more sensitive to diurnal warming. The coast is also a region where atmospheric precipitation collects over land basins and can discharge from river mouths as less dense freshwater plumes. All of these processes have timescales comparable to those of atmospheric weather and can produce observable changes to the ocean state and circulation of the coastal region. The skill of coastal ocean state forecasts is therefore sensitive to the skill of the atmospheric fluxes.

The atmospheric fluxes for ocean forecasting systems are obtained from operational numerical weather prediction systems (e.g., GASP; Seaman et al. 1995). A typical configuration for NWP is to perform an analysis every 6 h and a forecast every 12 h. During ocean hindcasting, 24 h of analysis fluxes can be composed of four 6 h analyses. Common averaging periods for surface fluxes are 3 h and 6 h. Atmospheric forecasting is typically composed of a suite of global and multiply nested regional prediction systems. In general, the horizontal resolution of the atmospheric models is coarser than that of the comparable ocean model, and the fluxes therefore require regridding. One of the key discrepancies between models of differing resolutions is the mismatch in the land-sea mask. A comparison of the land-sea masks from GASP (0.75°) and the Ocean Forecast Australia Model (OFAM; Schiller et al. 2008) (0.1°) is shown in Fig. 18.11. There are specific areas that correspond to New Land (sea in the source mask and land in the target) or New Sea (land in the source mask and sea in the target).
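The New Sea points identified by such a comparison must be given flux values extrapolated from valid ocean points; as discussed below, this is commonly done with a Laplacian operation. A minimal sketch of such a fill is given here, with an illustrative tolerance and a periodic-wrap simplification at the array edges, and is not the operational treatment.

```python
# Minimal sketch of filling "New Sea" points by iterating a Laplacian (Jacobi)
# relaxation, holding valid sea points fixed as boundary conditions.
import numpy as np

def fill_new_sea(flux, valid_sea, new_sea, max_iter=500, tol=1e-4):
    """Extrapolate flux into new_sea cells from neighbouring valid_sea values."""
    wet = valid_sea | new_sea
    field = np.where(valid_sea, flux, 0.0)
    field[new_sea] = flux[valid_sea].mean()      # first guess for new-sea cells
    w = wet.astype(float)
    for _ in range(max_iter):
        # Neighbour average over wet cells only, so land does not contribute.
        num = sum(np.roll(field * w, s, axis=ax) for s in (1, -1) for ax in (0, 1))
        den = sum(np.roll(w, s, axis=ax) for s in (1, -1) for ax in (0, 1))
        neighbours = np.divide(num, den, out=field.copy(), where=den > 0)
        update = np.where(new_sea, neighbours, field)   # only new-sea cells change
        if np.max(np.abs(update - field)) < tol:
            return update
        field = update
    return field

# Example: a strip of new sea next to valid ocean values is filled smoothly.
flux = np.array([[100.0, 100.0, 0.0], [100.0, 100.0, 0.0]])
valid = np.array([[True, True, False], [True, True, False]])
new = np.array([[False, False, True], [False, False, True]])
print(fill_new_sea(flux, valid, new))
```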
Fig. 18.11  A comparison of the land-sea masks of GASP (Seaman et al. 1995) and OFAM (Schiller et al. 2008) in the Australian region, ignoring ice masks. The four combinations are: both land (brown), both sea (blue), GASP land/OFAM sea (yellow) and GASP sea/OFAM land (red)
In general, the magnitude of atmospheric fluxes across the land-sea boundary is discontinuous, largely due to the change in surface roughness, albedo and heat capacity. The magnitude of the discontinuity varies with each variable and with the time of day. As the coastal ocean state is sensitive to atmospheric fluxes, the fluxes over land need to be explicitly removed. Replacing the fluxes at New Sea locations is commonly performed by a Laplacian operation with the fluxes over sea points as boundary conditions, as this is computationally inexpensive. This method does not a priori preserve the alignment of winds or other properties with the coastline. There are many software packages that perform regridding, including many of the earth system couplers (e.g., OASIS; Redler et al. 2010); however, it is important to test these schemes and not assume that they will satisfy the requirements. An important property for regridding is to conserve the total integral of the field from the source grid to the target grid. The OASIS coupler has implemented the Spherical Coordinate Remapping and Interpolation Package (SCRIP; Jones 1999) as a regridding option. Another simple approach is to use an integral variable, where the control volume integrals (Eq. 18.1a) are summed to form the discrete integral variable (Eq. 18.1b; Leonard 1995), which is exact at each cell interface and implicitly conserves the fluxes on the source grid:

\bar{\phi}_i = \frac{1}{\Delta x}\int_{x_{i-1/2}}^{x_{i+1/2}} \phi \, dx, \quad i \in [1, I] \qquad (18.1a)
\psi_j = \begin{cases} 0 & j = 0 \\ \psi_{j-1} + \Delta x \, \bar{\phi}_j & j \in [1, I] \end{cases} \qquad (18.1b)
Regridding the original cell volumes to a finer grid resolution \Delta X, with index k \in [1, K] and the constraint that I\Delta x = K\Delta X, can be performed by constructing an equivalent integral variable \Psi_k through interpolation of the integral variable as

\Psi_k = \begin{cases} 0 & k = 0 \\ \mathrm{interp}(\psi) & k \in [1, K] \end{cases} \qquad (18.2a)

The cell average values are then recovered as
\bar{\phi}_k = \frac{\Psi_k - \Psi_{k-1}}{\Delta X}, \quad k \in [1, K]. \qquad (18.2b)
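A minimal one-dimensional sketch of Eqs. 18.1a–18.2b is given below; it uses linear interpolation of the cumulative integral (a higher-order interpolant would normally be used, as discussed next for de-aliasing) and illustrative input values.

```python
# Minimal 1-D sketch of the integral-variable regridding of Eqs. 18.1a-18.2b.
import numpy as np

def regrid_conservative_1d(phi_src, dx_src, n_refine):
    """Regrid cell averages phi_src (spacing dx_src) onto a grid refined by n_refine."""
    I = phi_src.size
    dX = dx_src / n_refine
    # Integral variable at source cell interfaces: psi_0 = 0, psi_j = psi_{j-1} + dx * phi_j
    x_src = dx_src * np.arange(I + 1)
    psi = np.concatenate(([0.0], np.cumsum(dx_src * phi_src)))
    # Interpolate the integral variable to the target cell interfaces.
    x_tgt = dX * np.arange(I * n_refine + 1)
    Psi = np.interp(x_tgt, x_src, psi)
    # Recover target cell averages; the domain integral is preserved exactly.
    return np.diff(Psi) / dX

# Example: refining by a factor of 4 conserves the integral of the field.
phi = np.array([1.0, 3.0, 2.0, 5.0])
phi_fine = regrid_conservative_1d(phi, dx_src=1.0, n_refine=4)
print(phi.sum() * 1.0, phi_fine.sum() * 0.25)     # both equal 11.0
```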
The exactness of the discrete integral variable at the interfaces is advantageous when \Delta x is chosen to be an integer multiple of \Delta X (i.e., \Delta x = n\Delta X), such that a subset of \Psi_k is equivalent to \psi_j. This formulation has been expressed for a uniform grid in Cartesian coordinates, but the method is readily extended to non-uniform grids and other orthogonal curvilinear coordinate systems. It is also noteworthy that a centre-point value is a second-order accurate estimate of the average over the cell volume (Sanderson and Brassington 1998), so that the above method can be applied to most ocean general circulation models.

An example of the longwave radiation flux for the region surrounding Tasmania from GASP is shown in Fig. 18.12a. This field shows significant variability at small scales,
Fig. 18.12  Longwave radiation heat flux from the GASP forecast: a native resolution and b regridded to the target resolution
as some of the physics is computed by a 1D radiation scheme, but it also shows frontal systems that have a step-like structure in the coarse-resolution model. Regridding coarse-resolution information to a finer resolution needs an algorithm that de-aliases. The integral variable can be used to de-alias by performing the interpolation for a subset of j, such that j \in [0:m:J], where J is an integer multiple of m. De-aliasing in multiple dimensions can be achieved by iteratively applying the integral variable interpolation in each dimension. The regridded longwave radiation on the OFAM grid (Fig. 18.12b) is obtained by applying the integral variable method for grid refinements of n ≈ 2 and a de-aliasing parameter of m = 2, successively alternating in each dimension. The target grid resolutions are \Delta x = 0.4°, 0.2° and 0.1°.

Accurate direct observations of fluxes and flux budgets are sparse in time and space. The scatterometer provides an instantaneous estimate of stress which, in practice, has a limited weighting and impact on atmospheric analyses and forecasts. The monitoring of SST from multiple satellites and sensors, together with the Argo and mooring arrays, provides a basis for diagnosing errors in flux parameterisations. Atmospheric forecast errors grow rapidly and are constrained through data assimilation, commonly on a 6 h cycle.

Numerical weather prediction systems provide three alternative strategies for computing fluxes for ocean forecasting: (a) prescribed fluxes, (b) re-estimated fluxes and (c) coupling. Numerical weather prediction systems currently persist SST analyses, may or may not have a dynamic surface roughness from a wave model, and assume the ocean currents are negligible, which will lead to a deterioration in the skill of the surface fluxes over the forecast. The next level of sophistication is to use the prescribed atmospheric state variables and replace the ocean boundary conditions with the forecast conditions using a bulk formula method (e.g., Large et al. 1997). Two specific flaws of this approach are that (a) the near-surface atmospheric state variables in the forecast have been produced using boundary-layer turbulence models based on the original boundary conditions, and (b) the ocean boundary conditions for SST may be less accurate or have greater bias than persisted SST. This is presently the case for the BLUElink OceanMAPS system compared with RAMSSA (Beggs et al. 2006). In part this is because the background errors from
a forecast model are more difficult to define, and the analysis for OceanMAPS is multivariate and by definition will not fit the same SST observations as well as a univariate analysis. As ocean forecasting systems continue to mature, the performance gap is expected to close. Closing this gap is also expected to be critical before more complex solutions based on earth system coupling will yield performance gains in operations (Brassington 2009).
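A minimal sketch of re-estimating the momentum flux from prescribed 10 m winds and the ocean model's own surface current is shown below; the constant drag coefficient is an illustrative simplification of the stability- and wind-speed-dependent bulk formulae of Large et al. (1997).

```python
# Minimal sketch of re-estimating wind stress with the ocean model's surface current.
RHO_AIR = 1.22        # air density (kg/m^3)
CD = 1.3e-3           # illustrative constant drag coefficient

def wind_stress(u10, v10, u_ocean=0.0, v_ocean=0.0):
    """Zonal/meridional wind stress (N/m^2) from the 10 m wind relative to the surface current."""
    du, dv = u10 - u_ocean, v10 - v_ocean
    speed = (du ** 2 + dv ** 2) ** 0.5
    return RHO_AIR * CD * speed * du, RHO_AIR * CD * speed * dv

# Example: a 10 m/s zonal wind over a 1 m/s co-flowing surface current.
print(wind_stress(10.0, 0.0, u_ocean=1.0))   # stress is reduced relative to a still ocean
```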
18.7 Modelling

The governing equations for the ocean are an extension of the Navier-Stokes equations for a thin layer on a rotating planet. The ocean state equation is an empirical formula dependent on temperature, salinity and pressure. There are a number of assumptions that can be introduced to simplify the governing equations by exploiting the properties of the ocean, such as incompressibility and hydrostatic balance, which are convenient for analytical methods, numerical solution or data analysis. Software designed to solve these governing equations is referred to as an ocean general circulation model (OGCM). The design choices in OGCMs are summarised in Table 18.6. The prevalence of
Table 18.6  Properties of ocean modelling that result in unique design choices in ocean forecasting systems
Selection of model code: Compressible/incompressible; hydrostatic/non-hydrostatic; Boussinesq/non-Boussinesq; vertical coordinate system; community models (NEMO, HYCOM, ROMS, MOM, …)
Non-eddy, eddy-permitting and eddy-resolving: Eddies are ubiquitous in the global ocean; foci; horizontal mesh (0.1° a minimum); geostrophic turbulence closure and submesoscale; high-order, conservative advection schemes
Coastal and bathymetric: Vertical/horizontal control; bathymetry products; practical bathymetry tuning; explicit tides or parameterised (more an assimilation challenge)
Boundary conditions: Open boundaries, radiation conditions; non-hydrostatic/hydrostatic (lattice-Boltzmann methods); nesting (3:1, alignment of grids, common interfacial bathymetry); explicit/implicit
Numerical methods and computational performance: A-grid, B-grid, C-grid (Arakawa); order of accuracy of methods; numerical stability; parallelism and scalability
Turbulence parameterisations: Surface and bottom boundary layers; tidal mixing; diapycnal mixing
community ocean models means that the first design choice is to select a community model. Community models have already made several design choices about the governing equations and leave some choices as options. Many ocean models can be categorised by their primary application, e.g., climate modelling or coastal modelling; however, many community models aspire to be applicable to multi-scale modelling. It is important to be aware of these design choices and their potential impact on performance and on the range of applications.

Starting from the position that a community model (e.g., the Modular Ocean Model version 4, MOM4; Griffies et al. 2003) has been selected, the first step in implementation is to compile and configure the environment for the software on the system architecture. It is then important to optimise the performance and diagnose the scaling. This is a specialist area that can be architecture- and compiler-specific and is not discussed further. The next step in development is to define the model grid, making use of the latest bathymetry products (e.g., Smith and Sandwell 1997). The target resolution in ocean forecasting is eddy-resolving, which is approximately finer than 1/8°. On a global scale this is expensive and requires the latest high-performance computing systems. Alternative approaches are a nested strategy (coarse global, fine regional) or an adapted grid in a single model. The horizontal grid for the Ocean Forecast Australia Model (OFAM; Schiller et al. 2008) version 2 uses a single global model with higher resolution (0.1° × 0.1°) in the Australian region, 90°E–180°E and 75°S–16°N (see Fig. 18.13). Provided the grid transitions are performed smoothly to minimise gravity wave reflections and energy accumulation, this strategy avoids the problems of nesting and open boundary conditions. The convergence of meridians towards the poles means that a Mercator projection should be used to retain a local aspect ratio of unity for the spatial resolution. In the Arctic, the North Pole introduces a grid singularity, which can be resolved through displaced-pole projections (Murray 1996).
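A minimal sketch of the Mercator spacing rule is given below; the 0.1° zonal spacing and latitude bounds are taken from the OFAM high-resolution region described above, while the construction itself is illustrative.

```python
# Minimal sketch of Mercator latitude spacing: shrinking dlat with cos(latitude)
# keeps grid cells locally square (aspect ratio of unity).
import math

def mercator_latitudes(lat_start, lat_end, dlon_deg=0.1):
    """Latitude rows spaced so that dlat = dlon * cos(lat) at each row."""
    lats = [lat_start]
    while lats[-1] < lat_end:
        dlat = dlon_deg * math.cos(math.radians(lats[-1]))   # locally square cells
        lats.append(lats[-1] + dlat)
    return lats

rows = mercator_latitudes(-75.0, 16.0)
print(len(rows))   # more rows than the 911 of a uniform 0.1 degree grid
```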
Fig. 18.13 Schematic representation of the horizontal grid points for the OFAM2 model. Every 20th point is shown
Fig. 18.14 Anticyclonic (red) and cyclonic (blue) vortices in the Tasman Sea identified by a pattern matching method and their coherent circulation in the vertical (Brassington et al. 2010b). The analysis was applied to a daily average velocity field from the behind-real-time analysis of OceanMAPS (Brassington et al. 2007) for the 4th April 2009
The vertical coordinates of z- (or geopotential), σ- (sigma or terrain-following) and ρ- (isopycnal or density-following) type are each believed to offer favourable properties for different parts of the ocean: z-coordinates for the turbulent surface mixed layer, σ- and σ-z coordinates for the continental shelf, and ρ-coordinates for the thermocline and deeper ocean. Generalised (or hybrid) coordinate systems provide the flexibility to apply these different grid types in their favourable areas. A pattern matching method applied to the daily mean velocity from OceanMAPSv1 to locate approximately shear-free rotating motion reveals coherent deep vortices in the Tasman Sea (see Fig. 18.14). Some of these vortices extend from the surface to full depth, some are shallow and others are mid-depth and bottom vortices. An animation of this eddy tracking visualisation reveals that there are stratified (Reinaud and Dritschel 2002) and unstratified vortex interactions taking place in the model. The deep vortices correspond to a weak density anomaly that could be represented by any of the three coordinate systems. However, in all three coordinate systems the emphasis is on concentrating the vertical grid points in the surface layer (i.e., where the variability is greatest) and reducing the resolution with depth. There is minimal observational evidence to support the existence of the deeper features (e.g., Johnson and McTaggart 2010), but it is likely that the model representation of their space and timescales is biased by the choice of vertical grid. The importance of these deep features is primarily through their influence on the upper ocean vortices. An animation of Fig. 18.14 reveals stratified interactions.
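The consequence of concentrating vertical resolution near the surface can be illustrated with a small calculation. The sketch below contrasts uniform z-level interfaces with a terrain-following coordinate using a simple tanh stretching that packs levels into the surface layer; the stretching function is an assumption made for illustration and does not reproduce any particular model's formulation.

```python
import numpy as np

def z_levels(depth, n):
    """Uniform z-coordinate interfaces from the surface (0) down to -depth."""
    return -np.linspace(0.0, depth, n + 1)

def stretched_sigma_levels(depth, n, theta=4.0):
    """Terrain-following interfaces with a simple tanh stretching that
    concentrates levels near the surface (illustrative only)."""
    s = np.linspace(0.0, 1.0, n + 1)   # s = 0 at the surface, 1 at the bottom
    return -depth * np.tanh(theta * s) / np.tanh(theta)

depth, n = 1000.0, 10
print("uniform z      :", np.round(z_levels(depth, n), 1))
print("stretched sigma:", np.round(stretched_sigma_levels(depth, n), 1))
```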
Fig. 18.15 Sections of bathymetry in the Torres Strait as represented in bathymetric data (red) and as represented in OFAM (blue)
The resolution in the surface mixed layer is critical to the representation of the physical processes and to improving the accuracy where the majority of applications occur. The resolution of the top cells determines the resolvable scales of the bathymetry. In OFAM, water columns require a minimum of two cells for numerical stability, resulting in a minimum column depth of 20 m. This impacts the representation of bays, straits and gulfs. The representation of Torres Strait by OFAM is shown in Fig. 18.15 in blue and compared against the best bathymetry in red. The cross-sections at 142.1E–142.4E show that OFAM is too deep, which results in a bias in mass transport through the strait. There are several strategies for controlling the transport, such as narrowing the opening to calibrate the total volume or adding boundary drag to reduce the flow rate. It is noteworthy that steric anomalies are not a volume conserving process. Therefore, a priori, it is not clear whether an ocean general circulation model that makes the Boussinesq approximation (i.e., assumes volume conservation, ∇·u = 0) should represent the sea surface anomalies of mesoscale eddies. Experience has shown that Boussinesq models such as MOM4 (Griffies et al. 2003) do indeed have sea level anomalies corresponding to eddies within an eddy-resolving simulation
and these have been developed into successful ocean forecast systems (Brassington et al. 2007; Oke et al. 2008). It is reasonable, however, to ask why. The introduction of the Boussinesq approximation to models of geophysical fluids can be traced to Spiegel and Veronis (1960). The climate modelling community has also been studying this problem to interpret other large scale processes (see Greatbatch 1994; Ducowicz 1997; McDougall et al. 2002). It has been determined that there is a duality between the Boussinesq and non-Boussinesq model equations for a hydrostatic fluid (De Szoeke and Samelson 2002). However it is important to note that in the dual formulation the prognostic variable reverts from sea surface height (which can be remotely observed) to bottom pressure, which is poorly observed and has a complex surface. The engineering community has also noted this problem in the context of Benard convection for rigid lid models (Zeytounian 2003). Zeytounian (2003) performs an asymptotic analysis and demonstrates that the Boussinesq approximation remains valid with the addition of a surface pressure perturbation. An initial value problem for a temperature anomaly is used to demonstrate the behaviour of a strictly volume conserving model with a free-surface formulation, Eqs. 18.3a–e.
$$\frac{d\mathbf{U}}{dt} + 2\Omega\,\mathbf{k}\times\mathbf{U} = -g\nabla\eta - \frac{g}{\rho_0}\nabla\int_{z}^{\eta}\rho\,dz \qquad (18.3a)$$

$$\frac{\partial p}{\partial z} = -\rho g \qquad (18.3b)$$

$$\nabla\cdot\mathbf{u} = 0 \qquad (18.3c)$$

$$\frac{dT}{dt} = 0 \qquad (18.3d)$$

$$\frac{dS}{dt} = 0 \qquad (18.3e)$$
where u = ui + vj + wk, U = ui + vj and ρ = ρ(T, S, p). The shallow water equation derived for a free-surface model is given by $\partial\eta/\partial t + \nabla\cdot\mathbf{U} = 0$. For the initial value problem, T(0) = 25°C for x ≠ 0 and y ≠ 0 and T(0) = 26°C for x = 0, y = 0, z = 1; S(0) = 35 psu, u = 0, η = 0, Δz = 100 m, H = 1000 m, Δt = 20 s. After 20 min of elapsed time the ocean responds to the temperature anomaly by adjusting the local sea level for the small expansion corresponding to the temperature anomaly. This volume, however, is obtained through a barotropic adjustment where gravity waves radiate from the source (see Fig. 18.16). This response can be detected in all variables, for example sea level (Fig. 18.16a), pressure gradient (Fig. 18.16b), temperature (Fig. 18.16c) and vertical velocity (Fig. 18.16d). In the corrected model we assume that the compressible terms are small for any small volume of seawater, i.e., $\frac{1}{\rho_0}\int_{z-\Delta z/2}^{z+\Delta z/2}\frac{\partial\rho}{\partial t}\,dz \approx 0$.
Fig. 18.16 Response after 20 min to the initial value problem of a temperature perturbation using a strictly volume conserving model formulation, Eq. 18.1a–e. a Sea level anomaly. b Pressure gradient. c Surface temperature and d Vertical velocity
This ensures that conservative numerical schemes remain valid for cell-to-cell interfacial fluxes (i.e., temperature and salinity remain conserved). However, the vertical integral of the compressible anomalies can be non-negligible and measurable (e.g., ocean eddies). Therefore the shallow water equation is formulated to include a compressible correction term,
$$\frac{\partial\eta}{\partial t} + \nabla\cdot\mathbf{U} = \int_{-H}^{\eta}\frac{1}{\rho_0}\frac{\partial\rho}{\partial t}\,dz \qquad (18.4)$$
This is then reflected in a perturbation to the free surface that feeds back to the momentum equations through the pressure gradient term. The impact of this correction on the same initial value problem is to reduce the barotropic response by an order of magnitude, which is reflected in all variables (see Fig. 18.17). The correction term provides approximately the required volume for the water column, which remains local to the temperature anomaly and does not require the volume to be sourced globally. A small residual barotropic response remains because the discrete computation of the vertical integral is not exact.
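The size of the steric signal that the correction term must supply can be estimated directly. The sketch below assumes a linear equation of state with a constant thermal expansion coefficient (an assumption made for illustration; the model uses the full empirical state equation) and evaluates the implied local sea level change for the 1°C, 100 m thick anomaly of the initial value problem.

```python
# Rough estimate of the local steric sea level rise supplied by the correction
# term in Eq. 18.4 for the initial value problem in the text. A linear equation
# of state with a constant thermal expansion coefficient is assumed.
alpha = 2.0e-4   # thermal expansion coefficient (1/K), an assumed typical value
dT = 1.0         # warming of the perturbed cell (26 degC - 25 degC)
dz = 100.0       # thickness of the perturbed cell (m)

d_eta = alpha * dT * dz   # ~ -(1/rho0) * integral of the density change over the cell
print(f"implied local steric sea level change ~ {100 * d_eta:.1f} cm")
```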
Fig. 18.17 Response after 20 min to the initial value problem of a temperature perturbation using a modified volume conserving model formulation, Eq. 18.1a–e, including a compression term in the shallow water equation, Eq. 18.2. a Sea level anomaly. b Pressure gradient. c Surface temperature and d Vertical velocity
18.8 Data Assimilation

The statistical machinery for combining background fields with observations based on a least squares approach has been established for some time and successfully applied to objective analyses, weather prediction and seasonal prediction. GODAE, initiated in 1999 to coincide with OceanObs'99, targeted the application of the same methods to the problem of ocean forecasting. The fundamentals of ocean data assimilation and its application to the ocean are presented in this volume by Zaron (2011) and Moore (2011). The principal challenges in ocean data assimilation are prescribing the background error covariance, an observing system that is skewed toward surface observations, and the size of the basin and global scale ocean model state space. Unlike ocean modelling, there are few community software packages that satisfy the essential requirement of model independence, although previous attempts have been made to develop modules that could be shared (Chua and Bennett 2001). GODAE itself was developed to support collaborative development amongst the participants and GODAE OceanView aims to maintain that legacy.
Table 18.7 Properties of ocean data assimilation systems that result in unique design choices in ocean forecasting systems
Analysis formulation: 3D (OI, 3DVar, EnOI); 4D (4DVar, EnKF)
Background error covariance: stationary, non-stationary; multi-variate; error model; statistical significance and sample space
Localisation: explicit control of far-field covariances; uniform scale; parallelism; rank and condition of inversion
Observation error covariance: uncorrelated/correlated error; instrument error; representation error; age-error/FGAT; super-obs
Computational efficiency: localisation; inversion
A summary of the design choices is given in Table 18.7, which includes the use of 3D or 4D approaches and of variational, ensemble or some form of hybrid approach. Four dimensional data assimilation is the formal generalisation of optimal interpolation for a 4D dynamical model (Bennett 2002). However, the computational expense of 4D methods, 4DVar or the ensemble Kalman Filter, is prohibitive. All operational forecast systems on a basin/global scale use 3D approaches as a practical design choice. At present, 4D approaches have been successfully implemented only in a regional context. The background error covariance in FOAM (Martin et al. 2007) uses a second-order autoregressive functional form which includes a synoptic component and a mesoscale component. This functional form has similar practical advantages for computation. The NCODA system also uses a second-order autocovariance approach and extends this to include a flow dependent covariance function (Cummings 2005). The implementation of the SEEK filter in the operational Mercator system uses a reduced-order EOF method that is stationary but specified in four seasons (Brasseur et al. 2005). The BLUElink ocean data assimilation system (BODAS; Oke et al. 2008) uses an ensemble optimal interpolation approach. The background error covariances (BECs) are specified from a stationary ensemble of model anomalies from an ocean model simulation forced by reanalysis fluxes. The use of model anomalies from the seasonal cycle is based on the assumption that the background errors scale with the mesoscale variability. An advantage of this physically based approach is the ability to capture anisotropic BECs that mimic the actual covariances. For example, sea level at a point along the coast (e.g., Thevenard, Australia) exhibits anisotropic covariances extending along the coastline and negligible covariance beyond the shelf break (Fig. 18.18a). The local and far field model anomaly correlations are validated by the anomaly correlations of Australian tide gauges with the Thevenard tide gauge (see Fig. 18.18a).
Fig. 18.18 a Correlation coefficient of the ensemble of sea level anomalies at Thevenard (blue circle) and the ensemble of sea level anomalies at all other points in the Australian domain. The correlation coefficients of sea level anomalies of the Thevenard tide gauge (TG) with all other Australian coastal TGs are shown as circles coloured according to the correlation coefficient. The correlation coefficient of SLA at the Thevenard TG with SLA from satellite altimetry is shown for b Jason and c GFO
The anisotropic correlations are further validated by those of the SLA obtained from satellite altimetry from Jason and GFO in Fig. 18.18b, c respectively. The specification of BECs based on a reduced-rank approach such as ensemble anomalies can exhibit spurious far field covariances due to undersampling. For example, the positive correlations in the Coral Sea at ~150E, 17S shown in Fig. 18.18a are assumed to be spurious, such that their magnitude becomes negligible as the ensemble size is increased. Localisation is frequently introduced as a Gaussian (or similar) function of distance from the target. A single e-folding scale is commonly implemented to preserve the symmetry of the inversion; however, the spatial scale of the BECs based on the mesoscale variability will scale with latitude or the internal Rossby radius, as well as being impacted by boundaries. A single length scale is therefore a compromise and will typically be sub-optimal for low and high latitudes. A formal approach for the detection of optimal localisation scales can be performed with two independent ensembles by determining the length scale at which the RMSE of the two increment fields converges, indicating that the random far field noise is negligible. Standard localisation introduces imbalances to the analysis that lead to initialization shock. This can be improved through the use of a transformation to streamfunction-velocity potential (Kepert 2009) or the use of an adaptive initialization scheme that uses the model to filter imbalances in the target field (Sandery et al. 2010).
Fig. 18.19 BODAS-MPI (blue curve) and BODAS-serial (red curve) performance, shown as wall time (seconds) for each computational sub-domain. The data assimilation task was divided into 48 independent computational sub-domains. In the PETSc parallel case each sub-domain was run on 8 cores, giving a total usage of 384 cores. The BODAS-MPI software is 8 times faster (for domain 9) and averages ~6.5 times better performance than the serial version
A convenient parametric formula that approximates a Gaussian distribution, but with the property that it smoothly converges to zero at a finite length scale, is given by Gaspari and Cohn (1999). In this form the analyses beyond a specified localisation length scale are independent, which is convenient for parallelisation and is exploited in BODAS. The computational performance of the assimilation is the critical determinant in optimising the ensemble size, the localisation length scale, super-obbing and other strategies to reduce the observation space and the inversion. The impact of the choice of inversion solver for the OceanMAPS system is shown in Fig. 18.19. An SVD solver is robust for near singular matrices but has a computational cost that scales as N³. The maximum wall clock time for an OceanMAPS analysis exceeds 2000 s. The PETSc parallel conjugate gradient solver improves the parallel performance by ~8 times (see Fig. 18.19). This reduction in wall clock time will permit several performance upgrades in the next version.
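The elements described above (a stationary ensemble of anomalies, a Schur-product localisation that vanishes at finite range, and a single linear inversion) can be combined in a few lines. The sketch below performs a one-dimensional ensemble optimal interpolation update with the Gaspari and Cohn (1999) taper; the grid, ensemble size, observation positions and error values are arbitrary and the code is not the BODAS implementation.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari & Cohn (1999) compactly supported correlation function.
    r = separation / localisation half-width; identically zero beyond r = 2."""
    r = np.abs(r)
    taper = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r <= 2.0)
    taper[m1] = (-0.25 * r[m1]**5 + 0.5 * r[m1]**4 + 0.625 * r[m1]**3
                 - (5.0 / 3.0) * r[m1]**2 + 1.0)
    taper[m2] = ((1.0 / 12.0) * r[m2]**5 - 0.5 * r[m2]**4 + 0.625 * r[m2]**3
                 + (5.0 / 3.0) * r[m2]**2 - 5.0 * r[m2] + 4.0 - (2.0 / 3.0) / r[m2])
    return taper

rng = np.random.default_rng(0)
n, n_ens, half_width = 100, 60, 10.0   # grid points, ensemble size, localisation scale
x = np.arange(n, dtype=float)

A = rng.standard_normal((n, n_ens))    # stationary ensemble of model anomalies
B = A @ A.T / (n_ens - 1)              # background error covariance estimate
B_loc = B * gaspari_cohn(np.abs(x[:, None] - x[None, :]) / half_width)  # Schur product

obs_idx = np.array([20, 50, 80])       # three synthetic sea level observations
H = np.zeros((obs_idx.size, n)); H[np.arange(obs_idx.size), obs_idx] = 1.0
R = 0.1 * np.eye(obs_idx.size)
xb, y = np.zeros(n), rng.standard_normal(obs_idx.size)

K = B_loc @ H.T @ np.linalg.inv(H @ B_loc @ H.T + R)   # gain
xa = xb + K @ (y - H @ xb)                             # analysis
print("analysis increment at the observed points:", np.round(xa[obs_idx], 2))
```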
18.9 Initialization

Integrating an ocean model from a specified ocean state requires an initialisation procedure since, a priori, the target state may not be a balanced model state.
Table 18.8 Properties of ocean initialization that result in unique design choices in ocean forecasting systems

Analysis initialization: linear restoration/nudging; Incremental Analysis Updating; adaptive restoration
Balancing: dynamical balancing
Restoring: climatological data; spectral nudging
There are many sources of and applications for initialisation, such as climatological target states, nesting/downscaling and data assimilation increments. In many instances the introduction of the whole target state can result in model shock that degrades the model state, particularly if the model is starting from rest and/or with an unperturbed free surface. A summary of some of the types of initialization/nudging that can be performed and the potential choices is given in Table 18.8. A common approach is to use a relaxation scheme or "nudging", where a forcing term is added that is proportional to the difference between the model state and the target state. In the absence of other forcing terms the model state will decay toward the target state with an e-folding timescale. In practice, the other forcing terms in the model are not negligible everywhere in space and time, reducing the effectiveness of the relaxation. The restoring timescale can be modified to increase the dominance of the forcing term; however, this must remain bounded to minimise model shock and maintain numerical stability in an operational system. A relaxation initialization procedure was implemented in the BLUElink OceanMAPS version 1 system over a period of 24 h for the state variables eta, temperature and salinity (Brassington et al. 2007).
Fig. 18.20 Daily mean sea level anomaly for the 1st August 2009 in the Tasman Sea. a OceanMAPS BODAS behind-real-time analysis. b OceanMAPS near real-time initialised ocean model state after initialization with nudging for 24 h of eta, temperature and salinity and c ocean model state after adaptive initialisation for 24 h of eta, temperature and salinity. Sea level anomaly is represented for the range ±0.5 m and the largest velocity magnitude is 2 m/s
An example of the initialized ocean model state is shown in Fig. 18.20b, based on the analysed target state in Fig. 18.20a, for the 1st August 2009 in the Tasman Sea. The Tasman Sea is a region of active geostrophic turbulence with a high ratio of eddy kinetic energy to total kinetic energy (Schiller et al. 2008), identified as one of the largest in the world ocean (Stammer 1997). By inspection, the initialised ocean state poorly represents the analysed state, leading to a large initial state RMSE in all fields. This is a particularly extreme example but illustrates that the other model forcing terms can prevent the ocean model from reaching the target state within the initialization period. An alternative approach to introduce analysis fields more efficiently is Incremental Analysis Updating (IAU; Bloom et al. 1996), which has been applied in the ocean prediction context (Ourmieres et al. 2006; Martin et al. 2007) with positive results compared with relaxation. Another important feature of any initialisation procedure is that the forcing term should become negligible over the finite period the scheme is applied, to minimise residual shock after the forcing term is set to zero. In a relaxation scheme the forcing term becomes negligible only if the model state approaches the target state. In an IAU scheme a specific fraction (1/N) of the analysis increment is introduced over N sequential updates, which by design reduces the amplitude of the update but remains non-zero at the end of the initialisation procedure. A modified or adaptive relaxation procedure has been developed to inflate the relaxation when the model–target differences are large (at the initial time) by making the relaxation timescale also a function of the model–target differences (Sandery et al. 2010). An important feature of the scheme is to introduce a threshold on the relaxation to satisfy numerical stability. An example of the adaptive scheme applied to the OceanMAPS BODAS analysis fields is shown in Fig. 18.20c and demonstrates an improvement of 50% in RMSE for sea level anomaly and 90% for sea surface temperature (Sandery et al. 2010). The atmospheric sciences have put considerable effort into assimilation target states that are dynamically balanced (i.e., that do not generate a spurious vertical transport during the initialization). So-called dynamical initialization procedures (Daley 1991) impose dynamically based constraints on the analysis target fields prior to initialization. These procedures are crucial for atmospheric models, as errors in vertical transports can produce spurious precipitation and convection leading to a loss of mass from the system. Some ocean systems have implemented similar procedures, which minimise spurious gravity waves, although it is worth noting that the physical state is less sensitive to these errors compared with the atmosphere. However, this is not the case for coupled biogeochemical models, where vertical transport errors can result in a sensitive ecosystem response. Both balanced analysis fields and convergent initialization schemes will be important for this application. A common feature of forced model integrations is model drift leading to model bias, where the long-term average significantly departs from the observed long-term average or climatological state. The drift of the model can result from the accumulation of flux errors as well as errors in the physical model and the numerical representation of the physical model. These fundamental problems are the subject of continuous improvement; however, at any instance there remain outstanding sources of error. Data assimilation based on least squares assumes the system is unbiased.
In order to address this, bias correction schemes have been implemented (e.g., Dee 2005). Alternative approaches include introducing relaxation procedures, often referred to
as restoring schemes, into the ocean model to reduce this effect. These schemes use a climatological or seasonally evolving reference state and a large relaxation timescale to produce a forcing term that opposes long-period departures from the reference state. Although a large timescale is prescribed, the size of the relaxation is also a function of the model–reference differences and will therefore be maximal for extreme model states that can occur transiently. An alternative approach is to control the model–reference state difference by estimating the long-term average of the model and removing the influence of the higher frequencies, so-called spectral nudging (Thompson et al. 2006).
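The behaviour of the initialization options in Table 18.8 can be compared on a scalar toy problem. The sketch below applies standard nudging, IAU and an adaptive relaxation (schematic, in the spirit of Sandery et al. 2010) to a single variable over a 24 h window with all other forcing terms omitted; the timescales and threshold are illustrative assumptions rather than operational settings.

```python
def nudge(x0, target, tau, dt, nsteps):
    """Newtonian relaxation: dx/dt = (target - x) / tau, other forcing omitted."""
    x = x0
    for _ in range(nsteps):
        x += dt * (target - x) / tau
    return x

def iau(x0, target, nsteps):
    """Incremental Analysis Updating: 1/N of the analysis increment added at each
    of N steps, so the full increment has been applied by the end of the window."""
    x, increment = x0, target - x0
    for _ in range(nsteps):
        x += increment / nsteps
    return x

def adaptive_nudge(x0, target, tau_min, tau_max, scale, dt, nsteps):
    """Adaptive relaxation: the timescale shortens (bounded below by tau_min for
    numerical stability) when the model-target difference is large."""
    x = x0
    for _ in range(nsteps):
        tau = max(tau_min, tau_max / (1.0 + abs(target - x) / scale))
        x += dt * (target - x) / tau
    return x

dt, nsteps = 900.0, 96       # 15 minute steps over a 24 h initialisation window
x0, target = 0.0, 1.0        # e.g. a sea level anomaly (m) to be introduced
print("nudging (tau = 24 h):", round(nudge(x0, target, 86400.0, dt, nsteps), 3))
print("IAU                 :", round(iau(x0, target, nsteps), 3))
print("adaptive nudging    :", round(adaptive_nudge(x0, target, 3600.0, 86400.0, 0.1, dt, nsteps), 3))
```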
18.10 Forecasting Cycle

Robust delivery of ocean forecast services in real-time requires a defined schedule for each sequential step with dependencies and wallclock completion times. The dependencies for each component of the forecast cycle are summarised in Table 18.9. The in situ and satellite SST observations are provided in near real-time via the GTS and the space agencies. However, satellite altimetry IGDR products are provided 3 days behind real-time (Jason series) and 4 days behind real-time for Envisat. The quality of the ocean analysis is critically dependent on the projection of altimetry and requires nearly a full cycle to improve the least squares analysis. The best estimate ocean state from BODAS is achieved when an observation window symmetric about the analysis date is used for satellite altimetry. In this case, the IGDR products arrive 3 days behind real-time and a further 5 days behind real-time are needed to cover half the repeat period of the Jason series altimeters and complete a cycle. Therefore OceanMAPS performs the best analysis 8 days behind real-time, as shown in Fig. 18.21. To fit within a 7 day schedule, the best analysis cycles every 3 and 4 days, adjusting the analysis to 8 and 9 days behind real-time. For each forecast cycle a near real-time analysis is performed as close to real-time as possible (5 days behind real-time) with near full coverage of altimetry and an asymmetric observation window. A hindcast using analysis fluxes is brought up to real-time and is then further integrated with forecast fluxes.

Table 18.9 Properties of the design of the forecast cycle for ocean forecasting systems
Hindcast: delayed mode observations; duplicate checking; quality control; analysis NWP flux regridding; analysis; initialization; ocean model hindcast
Forecast: forecast NWP flux regridding; ocean model forecast
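The lags quoted above can be laid out programmatically when a cycle is scheduled. The sketch below is only an illustration of the behind-real-time and near-real-time offsets described in the text; it is not the operational scheduler and the function and field names are invented for the example.

```python
from datetime import date, timedelta

def cycle_layout(run_date, brt_lag_days=9, nrt_lag_days=5, forecast_days=7):
    """Illustrative layout of one cycle: a behind-real-time (BRT) analysis where
    the altimetry window is complete, a near-real-time (NRT) analysis as close to
    real time as the data allow, a hindcast to real time and a forecast."""
    return {
        "behind_real_time_analysis": run_date - timedelta(days=brt_lag_days),
        "near_real_time_analysis": run_date - timedelta(days=nrt_lag_days),
        "hindcast_span": (run_date - timedelta(days=nrt_lag_days), run_date),
        "forecast_span": (run_date, run_date + timedelta(days=forecast_days)),
    }

for step, when in cycle_layout(date(2009, 8, 22)).items():
    print(f"{step:27s} {when}")
```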
Fig. 18.21 Schematic representation of the operational schedule for OceanMAPSv1.0b. Each cycle is composed of an analysis cycle (orange), a near real-time cycle (green) and a forecast cycle (blue). The behind real-time analysis is performed 9 days behind real-time
The mean and 90th percentile range in RMSE for SLA forecast cycles between January 2009 and October 2009 are shown in Fig. 18.22. There is a consistent deterioration in the mean and range of RMSE performance between the behind real-time and near real-time analyses. The RMSE continues to grow from the near real-time analysis with increasing forecast period. The statistics on the 5th and 6th days indicate that the RMSE growth has saturated and there is no further skill in the system.
18.11 System Performance

Ocean forecasting has yet to develop an internationally agreed standard or consensus on the metrics to monitor system performance or estimate forecast errors. Much can and should be borrowed from the numerical weather prediction community, which has developed a wide range of general methods (e.g., http://www.cawcr.gov.au/staff/eee/verif/verif_web_page.html and http://cawcr.gov.au/bmrc/wefor/staff/eee/verif/Stanski_et_al/Stanski_et_al.html, Stanski et al. 1989). The first international intercomparison experiment (Hernandez et al. 2009) developed a framework defining a series of metric classes. Each operational system was to provide daily average ocean state variables for pre-defined regions of the global ocean, including the Indian Ocean, for the period 1st February 2008–30th April 2008.
Fig. 18.22 Distribution of OceanMAPSv1.0b RMSE of sea level anomaly (ηmodel − ηobs, in m) over the region 55S–10S, 100E–170E for the 9 day behind real-time analysis, the 5 day behind real-time analysis and the 2, 5 and 6 day forecasts. The mean RMSE is shown by the horizontal black line and the 95th percentile RMSE by the coloured bars. The lines represent a linear estimate of the RMSE performance
The intention was to provide a common forecast period; however, the data provided corresponded to HYCOM-NCODA (5 day forecast), Mercator (14 day hindcast), UK Met Office FOAM (real-time analysis) and BLUElink OceanMAPS (9 day hindcast and 3 day forecast). The inhomogeneity of the time periods and the limited period preclude a definitive comparison of performance. Nonetheless, the RMSE, anomaly correlation and model standard deviation against observational data are summarised in the Taylor diagrams (Taylor 2001) in Fig. 18.23a for sea level anomaly and Fig. 18.23b for sea surface temperature. The greyscale background is based on the skill score,
$$S = \frac{4(1+R)^2}{(\hat{\sigma}_f + 1/\hat{\sigma}_f)^2(1+R_0)^2} \qquad (18.5)$$
where R is the anomaly correlation, R₀ = 1, σ̂f = σf/σr, and σf and σr are the forecast and observation standard deviations (Taylor 2001). The skill score provides further guidance for interpreting performance for systems with different model variance.
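A direct evaluation of Eq. 18.5 is straightforward. The sketch below computes the score for a synthetic forecast and observation series; the data are artificial, and a perfectly correlated forecast with matching variance would return S = 1.

```python
import numpy as np

def taylor_skill(forecast, reference, r0=1.0):
    """Skill score of Eq. 18.5: S = 4(1+R)^2 / [(sigma_hat + 1/sigma_hat)^2 (1+R0)^2],
    with R the correlation and sigma_hat the forecast/observed standard deviation ratio."""
    r = np.corrcoef(forecast, reference)[0, 1]
    sigma_hat = np.std(forecast) / np.std(reference)
    return 4.0 * (1.0 + r) ** 2 / ((sigma_hat + 1.0 / sigma_hat) ** 2 * (1.0 + r0) ** 2)

rng = np.random.default_rng(1)
obs = np.sin(np.linspace(0.0, 6.0 * np.pi, 200))        # synthetic "observed" anomaly series
fcst = 0.8 * obs + 0.2 * rng.standard_normal(obs.size)  # a correlated, damped "forecast"
print(f"skill score S = {taylor_skill(fcst, obs):.2f}")
```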
Fig. 18.23 Taylor diagram representation of the performance of the HYCOM-NCODA 5 day forecast (red), Mercator 14 day hindcast (yellow), UK Met Office FOAM analysis (green), BLUElink OceanMAPS analysis (blue) and BLUElink OceanMAPS 3 day forecast (aqua) operational systems during the GODAE intercomparison period for the Timor Sea (100E–120E, 22S–8S). a SLA and b SST. The bias and observations used are summarised above
In the Timor Sea the prediction systems are achieving anomaly correlations of 0.7–0.8 for analyses and forecasts of SLA and SST, indicating the products have useful signal during the austral autumn. Further development of these metrics will continue within GODAE OceanView, with robust and proven metrics included in "the Guide" to operational ocean forecasting being developed by the JCOMM Expert Team on Operational Ocean Forecasting Systems (ET-OOFS). The intercomparison of metrics or consensus amongst the forecast systems is of greatest value when the systems are based on independent components, to maximise the variance between models. A review of present operational systems (Dombrowsky et al. 2009) shows unique ocean models, unique data assimilation approaches and unique numerical weather prediction fluxes. The expected performance of any system configuration can be diagnosed over a hindcast period to achieve a sample space sufficient to estimate statistics. The forecast system can therefore be monitored using simple metrics based on the background innovations or analysis increments. We will use the Montara wellhead oil spill, which took place between the 21st August 2009 and 3rd November 2009, as an example where the performance of the prediction system was important. All of the increments from previous analyses (2nd January 2008–7th November 2009) of the OceanMAPS system were used to form a statistical distribution at each grid point in the Timor Sea. The increments for 22 August 2009 were then compared with this distribution to determine whether any were in the 95th or 99th percentile, as an indicator of performance that was a statistical outlier (see Fig. 18.24). The region surrounding the Montara wellhead shows increments for the analysis that exceeded the 99th percentile, indicating the model adjustment was a maximum and potentially unreliable.
Fig. 18.24 BLUElink OceanMAPS SLA analysis increments on the 22nd August 2009 that exceed the 95th (green) and 99th (red) percentile of all increments (2nd January 2008 to 7th November 2009) at each ocean model grid point
We can extend this analysis by taking the distribution for all grid points within the region 123.5E–125.5E, 11.6S–13.6S surrounding the Montara well and determining the normalized frequency for each grid point within the increment bins [−0.15:0.01:0.15]. The median of the 400 grid point normalized frequencies is shown as the black line in Fig. 18.25, with the 90th percentiles shaded in grey. The distribution of increments for the analyses of the 22nd, 29th August and 5th, 12th, 19th September 2009 is shown in Fig. 18.25 in colour, normalised for visualisation. The increments shown in Fig. 18.24 are shown in dark blue and are a statistical outlier. The subsequent analyses, with 7 day separation, have increments that are within the higher frequency range and indicate that the system recovered and behaved normally. An important distinction should be made between methods that monitor the performance of the system (e.g., skill scores, Murphy 1988) and methods that estimate the statistics of forecast errors to estimate the expected error. The expected error estimates are, however, by definition applicable to the most frequently occurring states. These estimates do not apply to events that are rare, for example extreme events. This is an important class of events because designs and operational decisions that are based on expected conditions can fail if unlikely events do occur, which in some instances might result in loss of life or property. Specific methods are required to address this class of problem (e.g., Garrett and Muller 2008).
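The percentile check used for the Montara example can be expressed compactly. The sketch below flags grid points whose latest absolute increment exceeds the 95th and 99th percentiles of a historical increment archive; the arrays are synthetic and the domain size is chosen only to mimic the 20 × 20 region discussed above.

```python
import numpy as np

def flag_outlier_increments(history, latest, percentiles=(95.0, 99.0)):
    """Flag grid points where |latest increment| exceeds the given percentiles of
    the historical |increment| distribution at that point (cf. Fig. 18.24)."""
    thr95, thr99 = (np.percentile(np.abs(history), p, axis=0) for p in percentiles)
    a = np.abs(latest)
    return (a > thr95) & (a <= thr99), a > thr99   # 95th-99th band, beyond the 99th

rng = np.random.default_rng(2)
history = 0.05 * rng.standard_normal((680, 20, 20))  # ~2 years of daily SLA increments (m)
latest = 0.05 * rng.standard_normal((20, 20))
latest[8:12, 8:12] += 0.2                            # an unusually large local adjustment
band_95_99, beyond_99 = flag_outlier_increments(history, latest)
print("points in the 95th-99th band:", int(band_95_99.sum()),
      "| beyond the 99th:", int(beyond_99.sum()))
```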
Fig. 18.25 Normalized frequency of all BLUElink OceanMAPS SLA increments (2nd January 2008 to 7th November 2009), limited to the region (123.5E–125.5E, 11.6S–13.6S) surrounding the Montara oil well and binned for increments [−0.15:0.01:0.15]. The median normalized frequency of all the grid points (20 × 20) (solid line) and 90th percentile (shaded grey). The distribution of increments for the 22nd, 29th August and 5th, 12th, 19th September 2009 is shown in colour, normalized to a scale of 0.01 (i.e., freq × (0.01/max(freq))) for visualisation
The physical processes of the extreme event may need to be considered to improve the estimated likelihood. For example, consider coastal fog that results from cool sea surface temperatures during a coastal upwelling. The presence of fog prevents the AVHRR from observing the cool SSTs, and microwave sensors do not resolve SST close to the coast. An SST analysis will persist the background state and depart from the true state. As cloud frequently occurs, a simple statistic will not separate fog events from other clouds, and the expected SST error would be low as there is skill in persisted SSTs. However, if other factors such as the upwelling-favourable winds and the fog cloud type are included, a higher error might be estimated.
18.12 Conclusion

State estimation and forecasting for the ocean's mesoscale is a grand challenge. In particular, the ocean state at these scales continues to be under-observed and our understanding of the dynamics is at the frontier of ocean science. Despite this there
is now ample and growing evidence from the first generation systems that the existing global ocean observing system is sufficient to constrain the mesoscale variability. These systems have achieved a performance that is positively impacting real applications. At the same time there is evidence that the performance is patchy in time and space and sensitive to the quality and coverage of the observing system and forcing in real-time. Nonetheless, this provides a solid foundation for continued advances and improved performance. A complex system such as an ocean prediction system is composed of several components, each of which involves critical design choices that impact the performance and cost of the total system. In the first generation forecast systems there are numerous choices that are scientifically robust given the constraints imposed by the observing system and the computational resources available at the time of development. There are also many decisions that compromise performance to meet the practical constraint of completing integrations within a finite schedule. These decisions and methods will continue to be revised as the constraints are reduced and new methods and models are developed. There are numerous directions in which ocean forecasting will be extended to optimise forecast skill, including 4D data assimilation, ensemble forecasting, coastal ocean forecasting, coupled ocean-wave-atmosphere modelling (e.g., Fan et al. 2009) and coupled ocean-wave-atmosphere data assimilation, and more just within the physical modelling space. The challenge moving forward will be the development of methods that continue to abstract the complexity to make it more manageable. It is very likely unavoidable that "black boxes" will become more prevalent. However, this must be done with sufficient rigor that it can readily be verified that the component, sub-system and system are solving the right problem to a known precision. This development will need to be undertaken in parallel with improvements in the ocean observing systems and computational hardware and software technologies.

Acknowledgements OceanMAPS was developed by BLUElink, a joint project of the Bureau of Meteorology, CSIRO and the Royal Australian Navy, and the BLUElink science team. The author gratefully acknowledges Claire Spillman, Nicholas Summons, Paul Sandery, Justin Freeman and Leon Majewski for contributions to the figures in this manuscript.
References Andreu-Burillo I, Brassington GB, Oke PR, Beggs H (2009) Including a new data stream in BLUElink ocean data assimilation. Aust Met Oceanogr J 59:77–86 Bailey R, Gronell A, Phillips H, Meyers G, Tanner E (1994) CSIRO cookbook for quality control of Expendable Bathythermograph (XBT) data. CSIRO marine laboratories Report No. 220, p€75 Beggs H, Smith N, Warren G, Zhong A (2006) A method for blending high—resolution SST over the Australian region. BMRC Res Lett 5:7–11 Bennett AF (2002) Inverse modelling of the ocean and atmosphere. Cambridge University Press, Cambridge, p€234
Bloom SC, Takacs LL, da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Weather Rev 124:1256–1271 Bourles BR, Lumpkin R, McPhaden MJ, Hernandez F, Nobre P, Campos E, Yu L, Planton S, Busalacchi A, Moura AD, Servain J, Trotte J (2008) The PIRATA program: history, accomplishments, and future directions. Bull Am Meteorol Soc 89:1111–1125 Brasseur P, Bahurel P, Bertino L, Birol F, Brankart J-M, Ferry N, Losa S, Remy E, Schroter J, Skachko S, Testut C-E, Tranchant B, Van Leeuwen PJ, Verron J (2005) Data assimilation for marine monitoring and prediction: the MERCATOR operational assimilation systems and the MERSEA developments. Q J R Meteorol Soc 131:3561–3582 Brassington GB (2009) Ocean prediction issues related to weather and climate prediction. Vision paper (Agenda item 8.5), WMO CAS XV, Seoul Korea, 18–25 Nov 2009 Brassington GB (2010) Estimating surface divergence of ocean eddies using observed trajectories from a surface drifting buoy. J Atmos Oceanic Technol. doi:10.1175/2009JTECHO651.1 Brassington GB, Pugh T, Spillman C, Schulz E, Beggs H, Schiller A, Oke PR (2007) BLUElink> development of operational oceanography and servicing in Australia. J Res Pract Inf Technol 39:151–164 Brassington GB, Hines A, Dombrowsky E, Ishizaki S, Bub F, Ignaszewski M (2010a) Short- to medium range ocean forecasts: delivery and observational requirements. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs’09: sustained ocean observations and information for society vol 1, Venice, Italy, 21–25 September 2009. ESA Publication WPP-306. doi: 10.5270/OceanObs09.pp08 Brassington GB, Summons N, Lumpkin R (2010b) Observed and simulated Lagrangian and eddy characteristics of the East Australian current and Tasman sea. Deep Sea Res Res Part II. doi: 10.1016/j.dsr2.2010.10.001 Chelton DB, Ries JC, Haines BJ, Fu L-L, Callahan PS (2001) Satellite altimetery. In: Fu L-L, Cazenave A (eds) Satellite altimetry and earth sciences. Academic Press, San Diego, pp€1–131 Chelton DB, Schlax MG, Samelson RM, de Szoeke RA (2007) Global observations of large oceanic eddies. Geophys Res Lett 34:L15606. doi:10.1029/2007GL030812 Chua B, Bennett AF (2001) An inverse ocean modeling system. Ocean Model 3:137–165 Cummings JA (2005) Operational multivariate ocean data assimilation, Q J R Meteorol Soc 131:3583–3604 Daley R (1991) Atmospheric data analysis. Cambridge University Press, New York, p€457 Davidson F, Allen A, Brassington GB, Breivik O, Daniel P, Kamachi M, Sato S, King B, Lefevre F, Sutton M, Kaneko H (2009) Application of GODAE ocean current forecasts to search and rescue and ship routing. Oceanography 22(3):176–181 Dee DP (2005) Bias and data assimilation. Q J R Meteorol Soc 131:3323–3343 Desai SD, Haines BJ, Case K (2003) Near real time sea surface height anomaly products for Jason-1 and Topex/Poseidon user manual. NASA, JPL D-26281, p€13 De Szoeke RA, Samelson R (2002) The duality between the Boussinesq and Non-Boussinesq hydrostatic equations of motion. J Phys Oceanogr 12:2194–2203 Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanography 22(3):80–95 Donlon CJ, Minnett P, Gentemann C, Nightingale TJ, Barton IJ, Ward B, Murray J (2002) Towards improved validation of satellite sea surface skin temperature measurements for climate research. 
J Climate 15(4):353–369 Drennan WM, Graber HC, Hauser D, Quentin C (2003) On the wave age dependence of wind stress over pure wind seas. J Geophys Res 108(C3):8062. doi:10.1029/2000JC000715 Ducet N, Le Traon P-Y, Reverdin G (2000) Global high resolution mapping of ocean circulation from TOPEX/Poseidon and ERS-1/2. J Geophys Res 105(19):19477–19498 Ducowicz JK (1997) Steric sea level in the Los Alamos POP code—Non-Boussinesq effects, numerical methods in atmospheric and oceanic modeling. In: Lin C, Laprise R, Richie H (eds) The Andre Robert memorial volume, Canadian meteorological and oceanographic society, NRC Research Press, Ottawa, p€533–546
Evensen G (2003) The Ensemble Kalman Filter theory and practical implementation. Ocean Dyn 118:1–23 Fan Y, Ginis I, Hara T (2009) The effect of wind–wave–current interaction on air–sea momentum fluxes and ocean response in tropical cyclones. J Phys Oceanogr 39:1019–1034 Freeland H, Roemmich D, Garzoli S, LeTraon P, Ravichandran M, Riser S, Thierry V, Wijffels S, Belbeoch M, Gould J, Grant F, Ignaszewski M, King B, Klein B, Mork K, Owens B, Pouliquen S, Sterl A, Suga T, Suk M, Sutton P, Troisi A, Vélez-Belchi P, Xu J (2010) Argo—a decade of progress. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs’09: sustained ocean observations and information for society vol 2, Venice, Italy, 21–25 September 2009. ESA Publication WPP-306. doi: 10.5270/OceanObs09.cwp.32 Garrett C, Müller P (2008) Supplement to “extreme events”. Bull Am Meteorol Soc 89:ES45ES56. doi:10.1175/2008BAMS2566.2 (by Chris Garrett and Peter Müller Bull Am Meteorol Soc 89:1733) Gaspari G, Cohn SE (1999) Construction of correlation functions in two and three dimensions. Q J R Meteorol Soc 125:723–757 Goni G, Meyers G, Ridgeway K, Behringer D, Roemmich D, Willis J, Baringer M, Ichi I, Wijffels S, Reverdin G, Rossby T (2010) Ship of opportunity program. OceanObs’09 ESA Special Publication (in press) Greatbatch RJ (1994) A note on the representation of steric sea level in models that conserve volume rather than mass. J Geophys Res 99:12767–12771 Griffies SM, Harrison MJ, Pacanowski RC, Rosati A (2003) A technical guide to Mom4 Gfdl ocean group technical Report No. 5, NOAA/Geophysical Fluid Dynamics Laboratory Version prepared on 23 Dec 2003 Hackett B, Comerma E, Daniel P, Ichikawa H (2009) Marine oil pollution prediction. Oceanography 22(3):168–175 Hernandez F, Bertino L, Brassington GB, Chassignet E, Cummings J, Davidson F, Drevillon M, Garric G, Kamachi M, Lellouche J-M, Mahdon R, Martin MJ, Ratsimandresy A, Regnier C (2009) Validation and intercomparison studies within GODAE. Oceanography 22(3):128–143 Hurlburt HE, Brassington GB, Drillet Y, Kamachi M, Benkiran M, Bourdalle-Badie R, Chassignet EP, Jacobs GA, Le Galloudec O, Lellouche JM, Metzger EJ, Oke PR, Pugh TF, Schiller A, Smedsted OM, Tranchant B, Tsujino H, Usui N, Wallcraft AJ (2009) High resolution global and basin-scale ocean analyses and forecasts. Oceanography 22(3):110–127 Johnson GC, McTaggart KE (2010) Equatorial Pacific 13°C Water eddies in the eastern subtropical South Pacific Ocean. J Phys Oceanogr 40:226–236 Jones PW (1999) First- and second-order conservative remapping schemes for grids in spherical coordinates. Mon Weather Rev 127:2204–2210 Kamachi M, Kuragano T, Ichikawaj H, Nakamura H, Nishina A, Isobe A, Ambe D, Arais M, Gohda N, Sugimoto S, Yoshita K, Sakura T, Ubold F (2004) Operational data assimilation system for the Kuroshio South of Japan: reanalysis and validation. J Oceanogr 60:303–312 Kepert JD (2009) Covariance localisation and balance in an Ensemble Kalman filter. Q J R Meteorol Soc 135:1157–1176 Large WG, Danabasoglu G, Doney SC, McWilliams JC (1997) Sensitivity to surface forcing and boundary layer mixing in a global ocean model: annual-mean climatology. J Phys Oceanogr 27:2418–2447 Leonard BP, Lock AP, Macvean MK (1995) The nirvana scheme applied to one-dimensional advection. Int J Numerical Methods Heat Fluid Flow 5:341–377 Le Provost C (2001) Ocean tides. In: Fu L-L, Cazenave A (eds) Satellite altimetry and earth sciences. Academic Press, San Diego, pp€267–303 Le Traon P-Y (2011) Satellites and operational oceanography. 
In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. doi:10.1007/978-94-007-0332-2-18, Springer, Dordrecht, pp 29–54 Lorenc AC (1981) A global three-dimensional multivariate statistical interpolation scheme. Mon Weather Rev 109:701–721
Lorenc AC (2003) The potential of the ensemble Kalman filter for NWP—A comparison with 4DVar. Q J R Meteorol Soc 129:3183–3203 Martin MJ, Hines A, Bell MJ (2007) Data assimilation in the FOAM operational short-range ocean forecasting system: a description of the scheme and its impact. Q J R Meteorol Soc 133:981– 995 McDougall TJ, Greatbatch RJ, Lu Y (2002) On the conservation equations in oceanography: how accurate are Boussinesq ocean models? J Phys Oceanogr 32:1574–1584 McInnes KM, Leslie LM, McBride JL (1992) Numerical simulation of cut-off lows on the Australian east coast: sensitivity to sea surface temperature. Int J Climatol 12:1–13 McPhaden MJ, Delcroix T, Hanawa K, Kuroda Y, Meyers G, Picaut J, Swenson M (2001) The El Niño/Southern Oscillation (ENSO) observing system. In: Koblinski C, Smith N (eds) Observing the ocean in the 21st century. Australian Bureau of Meteorology, Melbourne, pp€231–246 McPhaden MJ, Meyers G, Ando K, Masumoto Y, Murty VSN, Ravichandran M, Syamsudin F, Vialard J, Yu L, Yu W (2009) RAMA: the research moored array for African-Asian-Australian monsoon analysis and prediction. Bull Am Meteorol Soc 90:459–480 McWilliams JC, Sullivan PP, Moeng C-H (1997) Langmuir turbulence in the ocean. J Fluid Mech 334:1–30 Melville WK (1996) The role of wave breaking in air- sea interaction. Ann Rev Fluid Mech 28:279–321 Moore A (2011) Adjoint applications. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. doi:10.1007/978-94-007-0332-2-18, Springer, Dordrecht, pp 351–379 Murphy AH (1988) Skill scores based on the mean square error and their relationships to the correlation coefficient. Mon Weather Rev 116:2417–2424 Murray RJ (1996) Explicit generation of orthogonal grids for ocean models. J Comput Phys 126:251–273 Oke PR, Schiller A, Griffin DA, Brassington GB (2005) Ensemble data assimilation for an eddyresolving ocean model of the Australian region. Q J R Meteorol Soc 131:3301–3311 Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink ocean data assimilation system (BODAS). Ocean Model 21:46–70 Ourmieres Y, Brankart L, Berline L, Brasseur P, Verron J (2006) Incremental analysis update implementation into a sequential ocean data assimilation system. J Atmos Ocean Technol 23:1729–1744 Pascual A, Boone C, Larnicol G, Le Traon PY (2009) On the quality of real-time altimeter gridded fields: comparison with in situ data. J Atmos Ocean Technol 26:556–569 Price JF (1981) Upper ocean response to a hurricane. J Phys Ocean 11:153–175 Prandle D, Flemming NC (eds) (1998) The science base of EuroGOOS. EuroGOOS Publication No. 6, 1998, EG97.14, unpaginated Purser RJ, Parrish D, Masutani M (2000) Meteorological observational data compression; an alternative to conventional “super-obbing.” NCEP Office Note 430, p€13. Available online at http:// www.emc.ncep.noaa.gov/officenotes/FullTOC.html Ravichandran M (2011) In situ ocean observing system. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. doi:10.1007/978-94-007-0332-2-18, Springer, Dordrecht, pp 55–90 Redler R, Valcke S, Ritzdorf H (2010) OASIS4—a coupling software for next generation earth system modelling. Geosci Model Dev 3:87–104 Reinaud JN, Dritschel DG (2002) The merger of vertically offset quai-geostrophic vortices. J Fluid Mech 469:287–315 Rixen M, Book JW, Orlic M (2009) Coastal processes: challenges for monitoring and prediction. J Mar Syst 78(1):S1–S2. 
ISSN 0924-7963, doi:10.1016/j.jmarsys.2009.01.006, Nov 2009 Robinson I (2006) Satellite measurements for operational ocean models. In: Chassignet EP, Verron J (eds) Ocean weather forecasting: an integrated view of oceanography. Springer, Netherlands, pp€147–189
Sanderson B, Brassington G (1998) Accuracy in the context of a control-volume model. Atmosphere-Ocean 36:355–384 Sandery PA, Brassington GB, Freeman J (2010) Adaptive nonlinear dynamical initialization. J Geophys Res. doi: 10.1029./2010JC006260 Schiller A, Oke PR, Brassington GB, Entel M, Fiedler RAS, Griffin DA, Mansbridge JV, Meyers GA, Ridgway KR, Smith NR (2008) Eddy-resolving ocean circulation in the Asia-Australian region inferred from an ocean reanalysis effort. Prog Oceanogr 76:334–365 Seaman R, Bourke W, Steinle P, Hart T, Embery G, Naughton M, Rikus L (1995) Evolution of the Bureau of Meteorology’s global assimilation and prediction system, Part 1: analyses and initialization. Aust Met Mag 44:1–18 Smith N, Lefebvre M (1997) The Global Ocean Data Assimilation Experiment (GODAE). Monitoring the oceans in the 2000s: an integrated approach. International Symposium, Biarritz, 15–17 Oct 1997 Smith WHF, Sandwell DT (1997) Global seafloor topography from satellite altimetry and ship depth soundings. Science 277:1956–1962 Sobel D (1995) Longitude: the true story of a Lone Genius who solved the greatest scientific problem of his time. Walker & Company, New York, p€216 Spiegel EA, Veronis G (1960) On the Boussinesq approximation for a compressible fluid. Astrophys J 131:442–447 Stanski HR, Wilson LJ, Burrows WR (1989) Survey of common verification methods in meteorology. World Weather Watch Tech. Report No.8, WMO/TD No.358, WMO, Geneva, p€114 Stammer D (1997) Global characteristics of ocean variability from regional TOPEX/POSEIDON altimeter measurements. J Phys Oceanogr 27:1743–1769 Taylor KE (2001) Summarizing multiple aspects of model performance in a single diagram. J Geophys Res 106(D7):7183–7192 Thompson KR, Wright DG, Lu Y, Demirov E (2006) A simple method for reducing seasonal bias and drift in eddy resolving ocean models. Ocean Model 13:109–125 Zaron E (2011) Basics of data assimilation and inverse methods. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. doi:10.1007/978-94-007-0332-2-18, Springer, Dordrecht, pp 321–350 Zeytounian RKh (2003) Joseph Boussinesq and his approximation: a contempory view. C R Mec 331:575–586
Chapter 19
Integrating Coastal Models and Observations for Studies of Ocean Dynamics, Observing Systems and Forecasting

John L. Wilkin, Weifeng G. Zhang, Bronwyn E. Cahill and Robert C. Chant
Abstract In coastal oceanography, simulation models are used for a variety of ends. Idealized studies may address particular dynamical processes or features of coastline and bathymetry; reproducing the circulation in a geographical region can complement studies of ecosystems and geomorphology; and models may be employed to simulate observing systems and to forecast oceanic conditions for practical operational needs. Frequently, the interplay between multiple forcing mechanisms, geographic detail, stratification and nonlinear dynamics is significant, and this demands that ocean models for coastal applications are capable of representing a comprehensive suite of dynamical processes. Drawing on a series of recent model-based studies of the inner to mid-shelf region of the Middle Atlantic Bight (MAB), we illustrate, by example, these methodologies and the breadth of dynamical processes that influence coastal ocean circulation. We demonstrate that the recent introduction of variational methods into coastal ocean simulation is a development that greatly enhances our ability to integrate models with data from the evolving coastal ocean observatories for the purposes of improved ocean prediction, adaptive sampling and observing system design.
19.1 Introduction

The discharge of rivers to continental shelf seas represents an important mechanism by which human activities in urban watersheds impact the neighbouring marine environment. The biogeochemical, sediment and ecosystem processes that determine the ultimate fate of nutrients and pollutants delivered into the coastal ocean by river sources depend on the pathways and time scales of dispersal of these buoyant discharges. How coastal models, in conjunction with observations, can be used to study these circulation processes is illustrated here by example, by reviewing results

J. L. Wilkin () Institute of Marine and Coastal Sciences, Rutgers, The State University of New Jersey, New Brunswick, NJ, USA e-mail:
[email protected]
from a recent series of model-based studies of the Hudson River outflow into New York Bight. On many coasts, the flux of freshwater from rivers or groundwater first enters an estuary where it mixes with more salty waters of oceanic origin before reaching the adjacent shelf sea. The salinity of the estuary discharge can be sufficiently low that horizontal buoyancy gradients are a significant force influencing the plume circulation. A classical view of the ensuing dynamics is that the buoyancy force balances the Coriolis force, and the outflow turns to the right (in the northern hemisphere) and forms a narrow coastal current a few internal Rossby radii in width trapped against the coast. If the front that defines the outer extent of the low salinity water reaches the sea floor then the plume becomes bottom-attached and details of the coastal bathymetry strongly influence the plume trajectory. Alternatively, if the low salinity discharge is confined to a relatively thin surface layer the plume is described as surface-advected (Yankovsky and Chapman 1997) and may be more responsive to local wind forcing. Whether a plume falls into the surface-advected or bottom-trapped regime, or transitions from one regime to the other, depends on river discharge, bathymetry, and mixing within the surface and bottom boundary layers. It is often the case that the freshwater transport of the coastal current is less than the freshwater flux out of the estuary, particularly during episodes of elevated river discharge, and this leads to the formation of a pronounced low salinity bulge near the estuary outflow. The across-shelf scale of the bulge can be several times the width of the coastal current, especially for a surface-advected plume. The buoyancy of the bulge drives an anti-cyclonic circulation that significantly prolongs the duration that water discharged from the estuary is retained in the vicinity of the estuary mouth. Laboratory rotating tank experiments have shown that the coastal current can receive as little as one third of the estuary outflow (Avicola and Huq 2003), or in extreme circumstances the recirculation can pinch off from the coastal current and for a period of time direct all flow into the bulge (Horner-Devine et al. 2006). Numerical model studies show that the ratio of coastal current transport to estuary discharge decreases as the flow becomes increasingly non-linear, as characterized by the Rossby number, i.e. the ratio of inertial to rotational forces (Fong and Geyer 2002; Nof and Pichevin 2001). Thus river flow rate, vertical turbulent mixing within the estuary and on the shelf, bathymetric detail, stratification, non-linear dynamics, and wind forcing are all factors that influence river plume dispersal characteristics. Shelf-wide alongshelf mean currents established by regional winds (Fong and Geyer 2002) or by upstream or offshore remote forcing further influence the circulation (Zhang et al. 2009a). Consequently, ocean models that seek to simulate interactions between river discharges and the adjacent inner shelf must be quite comprehensive in the suite of dynamical processes that they represent. In this article we demonstrate the capabilities of one such model, the Regional Ocean Modelling System (ROMS; www.myroms.org), by summarizing results from a sequence of studies of the Hudson River's discharge into the coastal ocean based on efforts during the Lagrangian Transport and Transformation Experiment (LaTTE) (Chant et al. 2008).
The Hudson River watershed is highly industrialized,
and the LaTTE field program included observations—following the river plume associated with the spring freshet in the years 2004, 2005 and 2006—of phytoplankton and zooplankton assemblages, and natural and human-source nutrients, organic matter, and metal contaminants. An emphasis of the project was to investigate how the plume's physical structure influenced biogeochemical processes. Key processes in this regard include mixing that dilutes salinity and influences certain chemical reactions, light levels that affect photochemistry, and residence times and transport pathways that can impact rates of bioaccumulation and modify where regions of net export of particulate suspended matter might occur. Dynamical and computational features of ROMS that are pertinent to the LaTTE simulations (and coastal processes in general) are described in Sect. 19.2, and revisited in Sect. 19.4 in a discussion of aspects of New York Bight (NYB) regional dynamics worthy of further analysis. Section 19.3 describes modelling approaches we have taken to address specific scientific objectives. Section 19.3.1 considers forward simulations initialized from climatology and forced with observed river flows and an atmospheric forecast model, used for short-term forecasting and adaptive sampling during the LaTTE field experiments, together with idealized studies of how the plume responds to the wind. Multi-year simulations to examine long-term transport and dispersal pathways, and the mean dynamics of the circulation, are presented in Sect. 19.3.2. Section 19.3.3 describes a reanalysis of the 2006 LaTTE season using Incremental Strong Constraint 4-Dimensional Variational Data Assimilation (IS4DVAR) to adjust the initial conditions of each daily forecast cycle, and gives a brief overview of how variational methods might also be employed to assist observing system operation. Section 19.5 summarizes how the studies described here collectively illustrate the ways coastal models are being increasingly integrated with the growing network of regional coastal ocean observing systems to better understand coastal ocean processes and improve ocean predictions.
19.2 Regional Ocean Modelling System

19.2.1 Dynamical and Numerical Core

ROMS solves the hydrostatic, Boussinesq, Reynolds-averaged Navier-Stokes equations in terrain-following vertical coordinates. It employs a split-explicit formulation whereby the 2-dimensional continuity and barotropic momentum equations are advanced using a much smaller time step than the 3-dimensional baroclinic momentum and tracer equations. The ROMS computational kernel is described elsewhere (Shchepetkin and McWilliams 2005, 2009a, b) and will not be detailed here, but we do note several aspects of the kernel that are particularly attractive for coastal ocean simulation. These include a formulation of the barotropic mode equations that accounts for the non-uniform density field so as to reduce aliasing and coupling errors associated with the split-explicit method (Higdon and de Szoeke 1997) in terrain-following coordinates. Temporal-weighted averaging of the barotropic mode prevents aliasing
of unresolved signals into the slow baroclinic mode while accurately representing barotropic motions resolved by the baroclinic time step (e.g. tides and coastal-trapped waves). Several features of the kernel substantially reduce the pressure-gradient force truncation error that has been a long-standing problem in terrain-following coordinate ocean models. A finite-volume, finite-time-step discretization for the tracer equations improves integral conservation and constancy preservation properties associated with the variable free surface, which is important in coastal applications where the free surface displacement represents a significant fraction of the water depth. A positive-definite MPDATA (multidimensional positive definite advection transport algorithm) advection scheme (Smolarkiewicz 1984) is available, which is attractive for biological tracers and sediment concentration. A monotonized, high-order vertical advection scheme for sinking of sediments and biological particulate matter integrates depositional flux over multiple grid cells so it is not constrained by the CFL criterion (Warner et al. 2008a). Interested readers are referred to Shchepetkin and McWilliams (2009b) for a thorough review of the choices of algorithmic elements that make ROMS particularly accurate and efficient for high-resolution simulations in which advection is strong, and currents, fronts and eddies are approximately geostrophic—characteristics of mesoscale processes in the coastal ocean and adjacent deep sea.
19.2.2 Vertical Turbulence Closure

ROMS provides users with several options for the calculation of the vertical eddy viscosity for momentum and eddy diffusivity for tracers. In the majority of recent ROMS coastal applications the choice of vertical turbulence-closure formulation has been either (1) a K-profile parameterization (KPP) for both surface and bottom boundary layers (Large et al. 1994; Durski et al. 2004), (2) Mellor-Yamada level 2.5 (MY25) (Mellor and Yamada 1982), or (3) the generic length-scale (GLS) method (Umlauf and Burchard 2003), which encompasses a suite of closure and stability function options. The KPP scheme specifies turbulent mixing coefficients in the boundary layers based on Monin-Obukhov similarity theory, and in the interior principally as a function of the local gradient Richardson number (Large et al. 1994; Wijesekera et al. 2003). The KPP method is diagnostic in the sense that it does not solve a time-evolving (prognostic) equation for any of the elements of the turbulence closure, whereas the MY25 and GLS schemes are of the general class of closures in which two prognostic equations are solved—one for turbulent kinetic energy and the other related to the turbulence length scale. Warner et al. (2005) describe the implementation of the GLS formulation in ROMS, and contrast the performance of the various GLS sub-options (representing different treatments of the turbulent length scale) and the historically widely used MY25 scheme. They find that the differing schemes lead to differences in the vertical eddy mixing profiles, but the net impact on profiles of model state variables
(velocities and tracers) is relatively minor. Wijesekera et al. (2003) reach similar conclusions, but note that results for KPP tend to be less similar to GLS and MY25, which are quite alike. Warner et al. (2005) found that suspended sediment concentrations in their sediment transport model are much more sensitive to the choice of closure than is salinity in estuarine mixing simulations. In the LaTTE simulations we use the GLS k-kl closure option, which is essentially an implementation of MY25 within the GLS conceptual framework.
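As a concrete illustration of the interior mixing ingredients referred to above, the following sketch computes the gradient Richardson number from discrete density and velocity profiles; schemes such as KPP reduce the interior diffusivity as Ri increases. The profile values are invented for illustration and the calculation is not the ROMS implementation of any of these closures.

import numpy as np

# Illustrative (invented) profiles on a uniform vertical grid
z = np.linspace(-20.0, 0.0, 21)          # depth (m), surface at z = 0
rho = 1025.0 - 0.05 * (z + 20.0)         # density decreasing upward (kg/m^3)
u = 0.02 * (z + 20.0)                    # sheared along-shore velocity (m/s)
v = np.zeros_like(z)

g = 9.81
rho0 = 1025.0
dz = z[1] - z[0]

# Vertical gradients by centred differences (one-sided at the ends)
drho_dz = np.gradient(rho, dz)
du_dz = np.gradient(u, dz)
dv_dz = np.gradient(v, dz)

N2 = -(g / rho0) * drho_dz               # buoyancy frequency squared (1/s^2)
S2 = du_dz**2 + dv_dz**2                 # vertical shear squared (1/s^2)

Ri = N2 / np.maximum(S2, 1e-10)          # gradient Richardson number
print(np.round(Ri, 2))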
19.2.3 Forcing

19.2.3.1 Air-Sea Fluxes

Air-land-sea contrasts, orography, upwelling, fog, and tidal mixing over variable bathymetry in the coastal ocean can all contribute to creating wind and air temperature conditions at sea level that have much shorter time and length scales than typically occur further offshore or in the open ocean. Accordingly, coastal ocean simulations benefit from the availability of spatially and temporally well-resolved meteorological forcing and accurate parameterization of air-sea momentum and heat fluxes. Surface atmospheric forcing in the LaTTE simulations made use of two sets of marine boundary layer products derived from atmospheric models. The short time scale simulations (Sect. 19.3.1) and the IS4DVAR reanalysis (Sect. 19.3.3) used marine boundary layer conditions (downward long-wave radiation, net shortwave radiation, 10-m wind, 2-m air temperature, pressure and humidity) at 3-hourly intervals from the North American Mesoscale model (NAM; Janjic 2004)—a 12-km resolution 72-h forecast system operated by the National Centers for Environmental Prediction (NCEP). The multiyear simulations (Sect. 19.3.2) used marine boundary layer conditions taken from the North American Regional Reanalysis (NARR; Mesinger et al. 2006)—a 25-km resolution, 6-hourly interval data-assimilative reanalysis product. Air-sea fluxes of momentum and heat were computed using standard bulk formulae (Fairall et al. 2003) from the atmospheric-model marine boundary layer conditions in conjunction with the sea surface temperature from ROMS.

19.2.3.2 River Inflows and Open Boundary Conditions

In coastal Regions of Freshwater Influence (ROFI) (Hill 1998), lateral buoyancy input from rivers produces density gradients that are principally horizontal, which leads to relatively weak vertical stability compared to the vertical stratification generated by comparable surface air-sea buoyancy fluxes. Density stratification in a ROFI subsequently arises from the baroclinic adjustment of these density gradients, and destratification and restratification can occur rapidly in response to changing rates of vertical mixing associated with wind forcing and tides (which may have significant spring-neap variability in intensity).
On some coasts, groundwater discharge directly to the coastal ocean or freshwater input from numerous small streams and rivers can be significant, but in the NYB terrestrial buoyancy input is overwhelmingly from large rivers, and predominantly from the Hudson. For river input to the LaTTE model we used daily average observations of river discharge from U.S. Geological Survey gauging stations on the Hudson and Delaware rivers, modified to include ungauged portions of the watershed following Chant et al. (2008).

At the open boundaries to the LaTTE model domain, simple Orlanski-type radiation conditions were applied to tracers (temperature and salt) and 3-D velocity. Our emphasis here on the buoyancy-driven circulation associated with the Hudson River plume allows this simplification, with its implicit neglect of the influence of remote sources of freshwater and heat. Open boundary sea level and depth-averaged velocity variability was set using the Chapman (1985) and Flather (1976) schemes to radiate surface gravity waves while also imposing tidal harmonic velocity variability derived from a regional tide model (Mukai et al. 2002). In the long multiyear simulations (Sect. 19.3.2), the boundary depth-averaged velocity was augmented with the estimate of mean southwestward current on the shelf derived by Lentz (2008) based on long-term current-meter observations and momentum balance arguments.
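To make the boundary treatment concrete, the sketch below evaluates a Flather-type condition for the depth-averaged velocity normal to an open boundary: the prescribed exterior (mean plus tidal) velocity is corrected by a gravity-wave term proportional to the mismatch between interior and exterior sea level, which lets transient signals radiate outward. It is a schematic of the idea only, with an assumed sign convention and invented values, not the ROMS open-boundary code.

import numpy as np

g = 9.81

def flather_normal_velocity(ubar_ext, zeta_ext, zeta_model, h):
    """Depth-averaged velocity imposed at an open boundary (Flather-type).

    ubar_ext, zeta_ext : exterior (e.g. tidal plus mean) velocity and sea level
    zeta_model         : sea level just inside the boundary from the model
    h                  : local water depth (m)
    Positive velocities are taken as directed out of the domain (assumed convention).
    """
    return ubar_ext + np.sqrt(g / h) * (zeta_model - zeta_ext)

# Illustrative values: 0.1 m/s mean outflow plus an M2 tidal signal
t = np.arange(0.0, 24.0 * 3600.0, 3600.0)            # one day, hourly (s)
omega_m2 = 2.0 * np.pi / (12.42 * 3600.0)            # M2 frequency (1/s)
zeta_ext = 0.5 * np.cos(omega_m2 * t)                # exterior tidal elevation (m)
ubar_ext = 0.1 + 0.2 * np.cos(omega_m2 * t)          # exterior velocity (m/s)
zeta_model = zeta_ext + 0.05                         # model sea level slightly offset

ubar_boundary = flather_normal_velocity(ubar_ext, zeta_ext, zeta_model, h=30.0)
print(np.round(ubar_boundary[:6], 3))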
19.2.4 Sub-Models for Interdisciplinary Studies

ROMS incorporates a set of sub-models for interdisciplinary applications that are integrated with the dynamical kernel. Among these are several ecosystem models formulated in terms of Eulerian functional groups wherein 3-D tracers representing nutrients, phytoplankton, zooplankton, detritus, etc., expressed in terms of some common currency (usually equivalent nitrogen concentration), are advected and mixed according to the same transport equations as the dynamic tracers. Haidvogel et al. (2008) give an overview of examples of these models, which range in complexity from a four-component nitrogen-based (NPZD) model (Powell et al. 2006; Moore et al. 2009) to a carbon-based bio-optical model (EcoSim) (Bissett et al. 1999; Cahill et al. 2008) with a spectrally resolved light field and more than 60 state variables representing four phytoplankton groups, five pigments, five elements, bacteria, dissolved organic matter, and detritus. A Community Sediment Transport Model (CSTM; Warner et al. 2008a) and wave model (SWAN, Simulating Waves Nearshore; Booij et al. 1999) are integrated with ROMS for studies of sediment dynamics and circulation in nearshore environments; wave radiation stresses are included in the momentum equations, and wave-current interaction that enhances bottom stress is included in the bottom boundary layer dynamics. A user-defined set of non-cohesive sediment classes is tracked, with differential erosion and deposition of the various size classes contributing to the evolution of a multi-level sediment bed with varying layer thickness, porosity, and mass, which allows computation of bed morphology and stratigraphy. The application of the ROMS/SWAN/CSTM system to studies of sediment morphology, sorting and transport in an idealized tidal inlet and in Massachusetts Bay is presented by Warner et al. (2008a).
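As an indication of how such Eulerian functional-group models are structured, the sketch below integrates the source and sink terms of a generic four-component nitrogen-based NPZD model at a single point; in a 3-D model these terms are added to the advection-diffusion equation of each tracer. The parameter values and functional forms are generic illustrations, not the specific formulation of Powell et al. (2006) or EcoSim.

import numpy as np

# Generic NPZD parameters (illustrative values; nitrogen currency, mmol N m^-3)
mu_max = 1.0      # maximum phytoplankton growth rate (1/day)
k_N = 0.5         # half-saturation for nutrient uptake (mmol N m^-3)
g_max = 0.4       # zooplankton grazing coefficient (1/day per mmol N m^-3)
gamma = 0.6       # grazing assimilation efficiency
m_P = 0.05        # phytoplankton mortality (1/day)
m_Z = 0.05        # zooplankton mortality (1/day)
r_D = 0.1         # detritus remineralization rate (1/day)

def npzd_tendencies(N, P, Z, D):
    """Source/sink terms; the four tendencies sum to zero, conserving total nitrogen."""
    uptake = mu_max * N / (k_N + N) * P
    grazing = g_max * P * Z
    dP = uptake - grazing - m_P * P
    dZ = gamma * grazing - m_Z * Z
    dN = -uptake + r_D * D
    dD = m_P * P + (1.0 - gamma) * grazing + m_Z * Z - r_D * D
    return dN, dP, dZ, dD

# Forward-Euler integration at one point for 60 days
N, P, Z, D = 8.0, 0.5, 0.2, 0.1
dt = 0.05  # days
for step in range(int(60 / dt)):
    dN, dP, dZ, dD = npzd_tendencies(N, P, Z, D)
    N, P, Z, D = N + dt * dN, P + dt * dP, Z + dt * dZ, D + dt * dD

print(f"N={N:.2f} P={P:.2f} Z={Z:.2f} D={D:.2f} total={N + P + Z + D:.2f}")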
19.3 ROMS Simulations of the New York Bight Region for LaTTE

19.3.1 Dispersal of the Plume During High River Discharge

The ROMS model domain for LaTTE (Fig. 19.1) extends from south of Delaware Bay to eastern Long Island, and from the New Jersey and New York coasts to roughly the 70-m isobath. The model has 30 vertical layers and the horizontal grid resolution is 1 km. In spring 2005 and 2006 the model was used to forecast circulation in the NYB in support of LaTTE field observation programs (Foti 2007).
Fig. 19.1 The model domain (black line) and locations of observations used in the 4DVAR data assimilation (Sect. 19.3.3). Bathymetry of the New York Bight is in greyscale; black dashed lines are model isobaths in metres; the yellow star is the location of Ambrose Tower; green squares indicate the five HF radar stations
Fig. 19.2 Left: Visible imagery from the Ocean Colour Monitor (OCM) instrument aboard the Indian IRS-P4 satellite, and the MODIS instrument aboard the NASA Terra satellite, showing turbid waters associated with the Hudson River discharge, and vectors of surface current from HF radar (CODAR), on two days during the spring 2005 LaTTE experiment. Right: Modelled surface salinity and currents at the corresponding times
Figure 19.2 shows visible satellite imagery of the Hudson River plume as it enters the NYB on two days in 2005, overlaid with vectors showing the surface current observed by HF radar, together with the modelled velocity and surface salinity at the corresponding times—surface salinity being a proxy for the signature of the river source waters. A recirculating bulge of low salinity water is being over-run by a renewed ebb tide discharge of Hudson River estuary waters. Figure 19.3 compares satellite-observed absorption at wavelength 488 nm from Oceansat-1 (a proxy for relative chlorophyll abundance and the presence of river source water) with the modelled equivalent freshwater thickness

\delta_{fw} = \int_{-h}^{\zeta} \frac{S_o - S(z)}{S_o} \, dz,

where S is salinity, h is the water depth, and z = ζ is the sea surface. If it were possible to locally "unmix" the water column into two layers of salinities zero and S_o, the thickness of the freshwater layer would be δ_fw. This depicts the horizontal extent of freshwater dispersal more faithfully than sea surface salinity. Here we use a reference salinity S_o = 32. Figures 19.2 and 19.3, and further model-data comparisons in Zhang et al. (2009a), indicate that fundamental features of the river plume circulation, such as the across- and along-shelf length scales, the extent of the freshwater bulge, velocity patterns, and the transport pathway from the harbor to the coastal current, are similar in model and observations.
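For reference, the equivalent freshwater thickness defined above is straightforward to evaluate from a discrete salinity profile by vertical integration; the short sketch below does so with a trapezoidal sum, using the reference salinity of 32 quoted in the text and an invented example profile.

import numpy as np

def freshwater_thickness(z, S, S_o=32.0):
    """Equivalent freshwater thickness (m) from a salinity profile.

    z : sample depths (m, negative downward, ordered from the bottom up)
    S : salinity at those depths
    Integrates (S_o - S)/S_o from z = -h to the sea surface (trapezoidal rule).
    """
    f = (S_o - S) / S_o
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))

# Invented example: a 4-m low-salinity plume layer over saltier shelf water
z = np.linspace(-20.0, 0.0, 41)
S = np.where(z > -4.0, 24.0, 32.0)
print(f"delta_fw = {freshwater_thickness(z, S):.2f} m")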
Fig. 19.3 Top row: Modelled equivalent freshwater thickness in metres (left) and satellite-observed absorption at wavelength 488 nm from Oceansat-1 (right), showing the patterns of influence of Hudson River source waters. Bottom: Observed and modelled salinity along the northernmost west-east transect indicated in the top right panel
Figure 19.4 shows the time evolution of simulated equivalent freshwater thickness during the spring freshet of 2005. From 1 to 7 April the river discharge exceeded 2,500 m³/s, or more than four times the annual mean, and peaked at 6,500 m³/s on 4 April. Initially, southward downwelling-favourable winds drive the river plume rapidly southward along the New Jersey coast, but this flow is abruptly arrested on 4 April with the onset of northward upwelling-favourable winds. This causes the river flow during peak discharge to form a large low-salinity recirculating bulge located predominantly on the northern side of the Hudson Shelf Valley. From 10 to 15 April a period of weak and variable winds associated with the sea breeze phenomenon enables the bulge to partially drain into a New Jersey coastal current. The return of upwelling winds on 17 April drives more low salinity water eastward and detaches the bulge from the estuary discharge that previously fed it.
Fig. 19.4 Modelled equivalent freshwater thickness in metres during the spring freshet of 2005, and winds observed at Ambrose Tower in the New York Bight apex
In the week that follows, sustained winds further disperse the plume as the river discharge drops and the freshet ends.

The influence of wind direction and strength on Hudson River plume dispersal has been considered in some detail (Choi and Wilkin 2007) using the same model but with idealized winds and freshet river discharge. Figure 19.5 contrasts the plume behaviour commencing from the same initial conditions (Fig. 19.5a) in response to winds from differing directions (Fig. 19.5d–g) sustained for 3 days. The sensitivity described for the April 2005 simulations is confirmed. Southward winds, and to a lesser extent eastward winds, favour New Jersey coastal current formation. Northward winds eliminate the buoyancy-driven coastal current, disperse the bulge eastward and drive flow along the Long Island coast. Westward winds hamper the discharge from the Hudson River estuary, leading to a build-up of low salinity water in New York Harbor. In the absence of wind forcing, the low salinity bulge continues to grow in volume, in agreement with the modelling and tank experiments noted in Sect. 19.1. In the LaTTE region, then, winds play a crucial role in determining the fate of material transported by the Hudson River to the inner shelf. Choi and Wilkin (2007) also considered the influence of river discharge magnitude on the relative contribution of buoyancy and wind forcing to the momentum balance of the river plume. They found that relatively modest wind speeds of order 5 m/s are sufficient to overwhelm buoyancy forcing during typical non-freshet conditions. It follows that relatively short timescale variability in river discharge and weather conditions could lead to different dispersal patterns for the freshet in any given year, and this was indeed found to be the case in the three LaTTE field seasons (Chant et al. 2008).
Fig. 19.5 Surface salinity of the Hudson River plume showing the sensitivity of the plume trajectory to wind during a high discharge event (3,000 m³/s)
In 2004, river waters were first transported southward in a modest coastal current, and then dispersed eastward in the surface Ekman layer associated with strong upwelling winds; 2005 was characterized by strong bulge formation and sea breeze activity as described above; while in 2006 unusually large river discharge fed a coastal current that flooded the New Jersey inner shelf with low salinity water, but this flow subsequently detached from the coast, leading to significant across-shelf transport in the region south of the Hudson Shelf Valley.
19.3.2 Shelf-Wide Transport and Dispersal Pathways

The preceding studies revealed that while some processes act to trap river plume water near the apex of the NYB (i.e. the recirculating bulge, and coastal current flow reversals), others disperse it widely (i.e. fast coastal currents and offshore wind-driven Ekman transport). Therefore the duration that river source waters dwell in the vicinity of the coastline can be quite variable, and questions arise as to where these waters eventually go. To examine the ultimate fate of Hudson River source waters on time scales much longer than the spring freshet, we conducted multi-year simulations using the same
model configuration but with modified open boundary inflow/outflow transport conditions and meteorological forcing from NARR. The open boundary conditions were adapted to acknowledge that on inter-annual timescales the mid and outer New Jersey shelf is flushed by a southwestward alongshelf mean flow. An analysis of long-term current meter observations and the mean momentum balance (Lentz 2008) indicates that the depth-averaged along-shelf current is roughly proportional to water depth; this provides a convenient relationship upon which to base the time-mean boundary transports, to which we add the tidally varying currents. The modelled mean circulation for 2005–2006 (Zhang et al. 2009a) is shown in Fig. 19.6. Buoyancy input from the Hudson River dominates flow in the apex of the NYB by driving the anticyclonic recirculation (a local maximum in sea surface height, SSH) associated with the low salinity bulge. This feature is sustained in the annual mean because it is the consequence not only of the spring freshet but also of high discharge events that can occur throughout the year.
Fig. 19.6 Mean SSH (sea surface height) contours (a, top), and velocity at the sea surface (b, centre) and at 20-m depth (c, bottom) over the 2-year period 2005–2006
In the 3 years of the LaTTE program, the peak discharge actually occurred in July 2006 following heavy rains across all of New York State. Transport is eastward along the Long Island coast, but this current ultimately detaches from the coast and reverses in the face of the mean flow that enters from the eastern open boundary. On the mid to outer shelf the flow is to the southwest, largely parallel to isobaths, and deflected by the Hudson Shelf Valley (HSV), as evidenced by the currents at 20 m (Fig. 19.6c). The influence of the valley extends throughout the water column and affects SSH. In the very apex of the NYB the flow at 20 m is toward New York Harbor, indicating that the HSV serves as a conduit for shoreward flow that is vertically mixed and entrained into the estuary outflow and bulge recirculation. Away from the coast the surface currents (Fig. 19.6b) are dominated by southward wind-driven Ekman flow. A New Jersey coastal current is not readily apparent in the annual mean; Zhang et al. (2009a) show it is prominent in spring and fall, moderate in winter, but overwhelmed by upwelling winds in the summer.

To avoid the ambiguity of a reference salinity in lengthy simulations and to distinguish the Hudson River from other freshwater sources, Zhang et al. (2009a) introduce a passive tracer with unit concentration in the modelled Hudson River source and follow it to obtain an unambiguous measure of the dispersal pathways. Figure 19.7 shows the flux of Hudson River source water, identified by its tracer signature, across a set of arcs centred on the Harbor entrance. The qualitative features noted above are again evident. The New Jersey coastal current is clearly very tightly trapped against the coast, which partly explains why it is not conspicuous in Fig. 19.6a, b. Figure 19.7 quantifies the volume transports across sectors of the arcs split at the HSV.
Fig. 19.7 Left: Two-year averaged, vertically integrated freshwater flux (thick black lines) across arcs of radius 20, 40, 60, 80, 100, and 120 km (numbered 1–6) centred at the entrance to New York Harbor (star). Right: Freshwater transport (m³/s) across the segments of the arcs on either side of the Hudson Shelf Valley (gray dashed-dotted line), and across the valley itself
In this 2-year mean, we see that river discharge is entirely to the shelf north of the HSV but that the majority of this flow subsequently crosses the valley within the general region of the recirculating bulge. Once south of the valley, the outflow is partitioned between the coastal current and a weaker but much broader across-shelf pathway guided by the south flank of the HSV. The latter current feature has been noted from HF radar surface current observations (Castelao et al. 2008). Despite initially entering the coastal ocean along the New York coast, the Hudson River discharge is thus ultimately dispersed to the mid and outer shelf on the south side of the Hudson Shelf Valley. Biogeochemical observations during LaTTE (Moline et al. 2008) support the notion that the coastal current is typically supplied with biogeochemically processed water that has circulated around the bulge's perimeter rather than newly discharged water from the estuary. In an example of the type of controlled dynamical analysis one can conduct with a model, Zhang et al. (2009a) separately withdrew individual forcing processes to examine the effect of each on the circulation. Their results are shown in Fig. 19.8, which should be compared to Fig. 19.6a, b for the full physics solution.
Fig. 19.8 Mean SSH (sea surface height) contours (left) and surface currents and their magnitude (right) over the 2-year period 2005–2006 for three simulations with changes to the full physics configuration shown in Fig. 19.6. Top row: Outer shelf boundary forcing removed. Middle row: Wind stress removed. Bottom row: Bathymetry of the Hudson Shelf Valley filled in
Without the remotely forced along-shelf mean flow the bulge recirculation remains, but the across-shelf surface flow is more eastward, being the result solely of Ekman transport not combined with geostrophic southward flow. In the absence of wind forcing the bulge is more intense, in accordance with the results of Fong and Geyer (2002), who found that along-shore transport driven by wind arrests continuous growth of the bulge recirculation. As in the full physics case, part of this recirculation feeds flow on the south side of the HSV, but without winds the downstream flow is largely at mid-shelf, parallel to the coast, and does not disperse to the outer shelf. Zhang et al. (2009a) explored whether the Hudson Shelf Valley impacts the circulation by simply removing the valley from the model bathymetry. Figure 19.8 shows that in the No Valley case the SSH signature of the bulge is substantially weakened, and the surface velocity shows that far more of the estuary outflow enters the NJ coastal current.

In an extension of their passive tracer approach for following Hudson River waters, Zhang et al. (2010a) employ the concept of 'mean age' (Deleersnijder et al. 2001) to determine the transit time from the river source to the shelf ocean. If we denote the equation governing the transport of a passive tracer with concentration C by

\frac{\partial C}{\partial t} + \nabla \cdot (\mathbf{u} C) = \nabla \cdot (\mathbf{K} \cdot \nabla C)
then an 'age concentration' tracer α can be introduced satisfying

\frac{\partial \alpha}{\partial t} + \nabla \cdot (\mathbf{u} \alpha) = \nabla \cdot (\mathbf{K} \cdot \nabla \alpha) + C
where the last term on the right causes α to increase in proportion to the concentration of river source water present. The concentrations of the tracers in the river source are C = 1 and α = 0. The 'mean age' (Deleersnijder et al. 2001) is given by a(x, t) = α(x, t)/C(x, t) and describes the average time elapsed since the waters at a given position and time (x, t) entered the domain at the river source. Figure 19.9 illustrates how mean age evolves in a simulation where the river tracer release commenced on 13 March. It takes some 4–5 days for river water to reach the bulge circulation, and water on the southwest side of the bulge is clearly older than water to the north. On 18 March an increase in river discharge a few days earlier introduces a surge of younger water that forms a sharp gradient in mean age across the western edge of the bulge. After 7 days none of the river water has escaped the bulge. In regions the passive tracer has not reached, the mean age is undefined. Zhang et al. (2010a) show that mean age patterns in the 2005 LaTTE period mimic an age proxy determined from a ratio of satellite-observed water-leaving radiance that expresses the relative concentration of CDOM (Coloured Dissolved Organic Matter) to phytoplankton. CDOM is the dominant optical constituent in river source waters and has high absorption at 490 nm, but it subsequently photodegrades, whereas phytoplankton concentration (with chlorophyll-a spectral peak at 670 nm) increases as the plume ages, so the CDOM decrease and phytoplankton
Fig. 19.9 Modelled mean age (colour scale in days) for a simulation commencing on 13 March
increase produce a spectral shift in the remote sensing reflectance. Zhang et al. (2010a) found a robust empirical relationship between simulated age and observed reflectance ratio that has promise for estimating river water age in the NYB—a property of relevance to rates of biogeochemical transformation of river source organic matter and pollutants (Moline et al. 2008).
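A minimal illustration of the age machinery defined by the equations above is given below: a one-dimensional channel is seeded with river tracer at its upstream end, the concentration C and age concentration α are advected and diffused with the same operator, and the mean age a = α/C is diagnosed where the tracer is present. Grid, flow and mixing values are invented; the sketch is conceptual, not the ROMS configuration.

import numpy as np

# 1-D channel with invented flow and mixing parameters
nx, dx = 200, 500.0          # grid cells, spacing (m)
u, K = 0.1, 50.0             # advection speed (m/s), diffusivity (m^2/s)
dt = 0.4 * dx / u            # time step satisfying the advective CFL limit
nt = int(20 * 86400 / dt)    # integrate for 20 days

C = np.zeros(nx)             # river-water concentration
A = np.zeros(nx)             # age concentration (concentration times age, s)

def advect_diffuse(q):
    """First-order upwind advection plus explicit diffusion (u > 0)."""
    dq = np.zeros_like(q)
    dq[1:] += -u * (q[1:] - q[:-1]) / dx
    dq[1:-1] += K * (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dx**2
    return q + dt * dq

for n in range(nt):
    C[0], A[0] = 1.0, 0.0            # river source boundary condition
    C = advect_diffuse(C)
    A = advect_diffuse(A) + dt * C   # ageing term: alpha grows in proportion to C

age_days = np.where(C > 1e-3, A / np.maximum(C, 1e-12) / 86400.0, np.nan)
print(np.round(age_days[::40], 1))   # mean age (days) at a few positions downstream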
19.3.3 Data Assimilation and Observing System Design

The NYB is among the most densely observed coastal oceans in the world, having been the target of pioneering deployments of new observing instruments including a cabled observatory (Glenn and Schofield 2003), surface-current-measuring high-frequency radar (CODAR) (Kohut et al. 2006), and autonomous underwater vehicles (gliders) (Schofield et al. 2007). To these systems and regular satellite imagery, LaTTE added moorings, surface drifters, and towed undulating CTD instruments deployed from the research vessels Cape Hatteras and Oceanus. These data and the sustained operation of much of the instrumentation make the NYB an attractive location to explore the integration of observation and modelling capabilities through advanced data assimilation. The locations of the LaTTE 2006 in situ observations are shown in Fig. 19.1. CODAR coverage was near complete from Long Island to Delaware Bay and out to the 40-m isobath, with some gaps in the apex of the NYB. There were satellite SST data from approximately four passes each day, cloudiness permitting.

Here we use data assimilation (DA) for state estimation; namely, to obtain an analysis for initializing subsequent forecasts so as to enhance short-term forecast skill. This approach is common practice in Numerical Weather Prediction (NWP). We use a 4-dimensional (time-dependent) variational (4DVAR) method for DA, which is one among many possible approaches but again one that draws on experience in advanced NWP. We use the so-called Incremental Strong Constraint (IS4DVAR) formulation (Courtier et al. 1994), whose implementation in ROMS is described in detail elsewhere (Broquet et al. 2009; Powell et al. 2008; Zhang et al. 2010b). IS4DVAR minimizes a cost function expressing the mismatch between observations and the model state at each observation location and time, summed over an analysis interval. Our implementation uses a 3-day interval—short enough for the linearization assumption of the incremental formulation to hold, but long enough for the model physics (embodied in the adjoint and tangent linear models) to exert the strong-constraint interconnection (covariance) of model state variables. The control variables of the DA are the initial conditions of each 3-day analysis, with the intervals overlapped so as to generate initial conditions each day to launch a new 72-h forecast. IS4DVAR does not explicitly allow for model error as would, for example, representer-based or weak-constraint 4DVAR (Bennett 2002; Courtier 1997). Errors in model physics, numerics, meteorological forcing and boundary conditions are incorporated into the model background error covariance. The observations are assigned error variances appropriate to the observation source.
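For readers unfamiliar with the incremental formulation, the cost function minimized by strong-constraint incremental 4DVAR can be written, in generic notation following Courtier et al. (1994) rather than the exact ROMS symbols, as

J(\delta \mathbf{x}_0) = \frac{1}{2}\, \delta \mathbf{x}_0^{\mathrm T} \mathbf{B}^{-1} \delta \mathbf{x}_0
+ \frac{1}{2} \sum_{i} \left( \mathbf{H}_i \mathbf{M}_i \, \delta \mathbf{x}_0 - \mathbf{d}_i \right)^{\mathrm T} \mathbf{R}_i^{-1} \left( \mathbf{H}_i \mathbf{M}_i \, \delta \mathbf{x}_0 - \mathbf{d}_i \right),

where δx₀ is the increment to the background initial condition, B the background error covariance, Mᵢ and Hᵢ the tangent-linear model and observation operators for observation time i, dᵢ = yᵢ − Hᵢ(xᵢᵇ) the innovations, and Rᵢ the observation error covariance. Minimizing J over each 3-day window yields the increment that is added to the background to form the analysis initial condition.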
Fig. 19.10 Added skill introduced by data assimilation during the analysis and forecast periods for individual forecast variables. Results are the ensemble average of 60 forecast cycles. Vertical bars on symbols indicate 95% confidence intervals. Vertical dashed lines denote the boundary between the analysis window and the forecast window
Our reanalysis was conducted after the data were gathered, but we describe a DA and forecast system that could have operated in real time because glider and vessel data are telemetered to shore. Lessons learned from this study on practical issues of data timeliness, quality control, and configuration of the IS4DVAR algorithm on a broad, shallow shelf with significant tides have been incorporated in the Experimental System for Predicting Shelf and Slope Optics (ESPreSSO; results may be viewed at www.myroms.org/applications/espresso), which currently runs operationally for the Mid-Atlantic Bight and encompasses the LaTTE domain.

The value that DA adds to the forecast system can be evaluated by considering how well observations are forecast prior to their assimilation on later analysis cycles. We quantify this with a DA skill metric

S = 1 - \frac{\mathrm{RMS}_{\text{after DA}}}{\mathrm{RMS}_{\text{before DA}}}
where RMS is the root-mean-square of the model-observation mismatch weighted by observational error. For 60 days of simulation spanning LaTTE 2006 we have multiple 1-day, 2-day, etc. forecasts that may be combined into ensemble estimates for increasing forecast window. Figure 19.10 shows the skill for different variables
when all available data are assimilated (black lines), and when selected data categories are withdrawn from the analysis step (coloured lines). Forecast times less than zero are in the analysis interval, and show the ability of the system to match observations and model prior to launching the forecast. As the forecast time proceeds the skill declines, but note that S = 0 does not say the model has no utility at all, merely that assimilation no longer adds any advantage to the model predictive skill. For temperature, DA adds skill to the forecast out to some 10–15 days, for salinity 5–10 days, and for velocity about 2–3 days. The more rapid decline in skill for velocity compared to tracers reflects the shorter autocorrelation timescales for velocity and that it is inherently less predictable. Not surprisingly, withdrawing data diminishes skill for that variable, i.e. without HF-radar data the velocity skill falls, and without satellite SST the temperature skill falls. However, there can be a modest increase in skill for other variables, e.g. salinity forecast skill is slightly higher when SST data are not assimilated. We interpret this as the DA system not needing to reconcile glider and satellite temperatures and therefore having rather more freedom to adjust the initial salinity to improve the salinity analysis; recall that all the variables are dynamically linked through the strong constraint of the adjoint and tangent linear models. Overall, skill is best when all data are included, and therefore diversity in the data sources is to be preferred. Details of the ROMS IS4DVAR configuration for LaTTE with respect to background error covariance and the pre-processing of observations are discussed by Zhang et al. (2010b), who also examine surface versus subsurface skill, and the influence of errors in surface forcing on system performance.

A further application of variational methods in ocean modelling is adjoint sensitivity analysis, which allows some inference of the observation locations that are likely to have greater impact on the DA analysis. Studies using adjoint sensitivity in coastal oceanography are still relatively few compared to meteorology and mesoscale and gyre-scale oceanography, but Moore et al. (2009) examine how upwelling, eddy kinetic energy and baroclinic instability in the California Current are affected by surface forcing on seasonal timescales. Here we present some results due to Zhang et al. (2009b), who use the adjoint of the LaTTE model to reveal the spatial and temporal distribution of ocean model state variables that are "dynamically upstream" of features of the coastal circulation. A characteristic of New Jersey coastal ocean dynamics is that significant SST variability is driven by along-shore winds (Chant 2001; Münchow and Chant 2000). Zhang et al. (2009b) considered this process by introducing a scalar function that expresses the SST anomaly variance averaged over a localized area adjacent to the coast,

J = \frac{1}{2 (t_2 - t_1) A} \int_{t_1}^{t_2} \!\! \int_A \left( T_s - \overline{T}_s \right)^2 \, dA \, dt,

where T_s is SST and \overline{T}_s is its temporal mean; this definition considers the temperature anomaly within an area A during a set time interval. Here, the time period is chosen to be the last three hours of the simulation time window. Defining J in quadratic form prevents the cancellation of positive and negative anomalies.
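The scalar functional J is simple to evaluate from model output; a sketch with an invented SST array, grid-cell areas and region mask (not the LaTTE diagnostics code) is given below. The sensitivity ∂J/∂Φ discussed next is, of course, produced by the ROMS adjoint model itself rather than by any diagnostic of this kind.

import numpy as np

def sst_anomaly_variance(T_s, mask, dA, t):
    """Area- and time-averaged SST anomaly variance over a region.

    T_s  : SST array with shape (ntime, ny, nx)
    mask : boolean array (ny, nx), True inside region A
    dA   : grid-cell areas (ny, nx)
    t    : times (s), assumed uniformly spanning [t1, t2]
    Approximates J = 1/(2 (t2 - t1) A) * integral over time and area of
    (T_s - <T_s>)^2, with <T_s> the temporal mean at each point and the
    integrals replaced by sums over grid cells and time levels.
    """
    area = np.sum(dA[mask])
    anom = T_s - T_s.mean(axis=0)                             # remove temporal mean
    dt = (t[-1] - t[0]) / (len(t) - 1)
    per_time = np.sum(anom[:, mask]**2 * dA[mask], axis=1)    # area integral per time
    return np.sum(per_time) * dt / (2.0 * (t[-1] - t[0]) * area)

# Invented example: 3 hours of output on a small grid
ny, nx = 40, 50
t = np.arange(0, 3 * 3600 + 1, 600.0)
T_s = 12.0 + 0.5 * np.random.randn(len(t), ny, nx)            # synthetic SST (deg C)
dA = np.full((ny, nx), 1.0e6)                                 # 1 km^2 cells
mask = np.zeros((ny, nx), bool)
mask[10:20, 5:15] = True                                      # hypothetical region A
print(f"J = {sst_anomaly_variance(T_s, mask, dA, t):.4f} degC^2")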
Temperature, salinity and velocity outside region A affect J through transport (advection and diffusion) and dynamics (baroclinic pressure gradients, stratification, turbulent mixing). Denoting the 4-dimensional ocean state (T, S, u, v, ζ) by a vector Φ, it can be shown that ∂J/∂Φ—representing the dependence of J on the ocean state—is the solution of the ROMS adjoint model integrated backward in time and forced by ∂J/∂T computed from the forward model. See Zhang et al. (2009b) for details. Although J is a scalar, ∂J/∂Φ has the same dimension as Φ, i.e. the entire ocean state through time, which emphasizes that all the surrounding ocean can potentially project on to the SST variance in A.

This adjoint sensitivity concept can be grasped, qualitatively, from an example: Fig. 19.11 maps the sensitivity of J to surface temperature, i.e. ∂J/∂T at z = 0, over the 3 days that precede the interval t1 to t2 over which J is defined, for the cases of downwelling and upwelling winds. The sequence proceeds backwards in time from day 3 to day 0. We have already demonstrated that southward (downwelling) winds favour coastal current formation, and for this case (Fig. 19.11, top row) the adjoint sensitivity advances from region A (delineated by the black box) back along the trajectory of the coastal current to New York Harbor. In the upwelling wind case (Fig. 19.11, bottom row), surface temperatures at preceding times have very little impact on the SST variance in A. This is because the coastal current is not dynamically upstream in this situation; rather, surface temperatures depend more on source waters drawn from below the surface. The final panel on the right shows ∂J/∂T at t = 0 along a vertical cross-section slightly south of region A, and confirms that J is sensitive to remote subsurface temperatures during upwelling. While these results have a ready qualitative interpretation, adjoint sensitivity quantifies the dependence and immediately indicates where "upstream" is. Zhang et al. (2009b) further quantify the relative importance of other state variables by contrasting the magnitude of ∂J/∂T with ∂J/∂S, ∂J/∂u, etc.

One can immediately see the potential for this information to assist observing system operation. By identifying the timing and location of ocean conditions having significant influence on the subsequent evolution of specific circulation features (characterized by some chosen J), adjoint sensitivity indicates where, when and what observations are likely to have greater impact in a 4DVAR assimilation system. In a companion paper, Zhang et al. (2010c) extend this approach using so-called representers, also based on variational methods, to examine the information content of a set of observations such as might be gathered routinely on a repeat transect occupied by an autonomous vehicle, or by a sustained cabled observatory.
19.4 Processes and Dynamics for Further Study

19.4.1 Air-Sea and Wave-Current Interaction

The results described above all utilize essentially the same model configuration options emphasized in Sect. 19.2, but the LaTTE program identified roles for some
Fig. 19.11 Sensitivity of J to surface temperature at different times during the 3-day period. Top row: Southward downwelling winds. Bottom row: Northward upwelling winds. The panel at right shows the sensitivity at day 0 (upwelling case) on a vertical section. (See the text for discussion)
dynamical processes that were not incorporated in the model physics employed here but are worthy of inclusion in future model-based studies. In the NYB, sea-land-breeze system (SLBS) activity can be pronounced during spring (Hunter et al. 2007, 2010), when ocean temperatures are still cool but the land is warming. Since this is precisely the time of year when river discharge peaks with the spring freshet, atmosphere-ocean interactions fundamental to SLBS dynamics are likely important to achieving realistic simulations of the plume circulation. Furthermore, mid-summer SLBS activity further south on the Jersey Shore is influenced by SST changes associated with wind-driven coastal upwelling (Bowers 2004). Full synchronous coupling of ROMS with an atmospheric forecast model has the potential to improve both ocean and atmosphere forecasts when SLBS conditions occur, and this capability has been added to ROMS by coupling to the COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) (Warner et al. 2008b) and WRF (Weather Research and Forecasting) models.

Surface wind waves mediate air-sea interaction by modifying drag and hence net momentum exchange; in addition, surface wave radiation stress, Stokes drift, and wave-current interaction in the bottom boundary layer are important in the ocean momentum balance itself. It was noted in Sect. 19.2.4 that these dynamical processes are now incorporated in ROMS, including the option to synchronously couple with the SWAN wave model. Studies of the Hudson plume that employ higher resolution than the 1-km grid used here and place greater emphasis on processes in shallow waters near the coast (inside the 15-m isobath) or at the leading edge of the plume may demonstrate that inclusion of these dynamics is important to faithful simulation of the plume evolution.
19.4.2 Ecosystem-Optics and Heating Interaction

Like most coastal ocean models, ROMS assumes constant absorption coefficients for shortwave radiation (Paulson and Simpson 1977), leading to a vertical exponential decay of internal solar heating. But the optical properties of coastal waters can be far from spatially uniform, and observations during LaTTE exhibited distinct regions of turbid water associated with the river plume, motivating Cahill et al. (2008) to use the EcoSim model (Sect. 19.2.4) to examine the coupling between shortwave radiation attenuation, buoyancy and photosynthesis. The solar heating parameterization was modified to make shortwave absorption dependent on the concentration of river source freshwater as a proxy for increased attenuation in the plume. The feedback between solar heating and vertical stratification was sufficient to modify the buoyancy-driven circulation and mixed layer depth. This in turn raised the concentrations of chlorophyll, detritus and coloured dissolved organic matter (CDOM) in the upper water column, increasing the attenuation of photosynthetically active radiation (PAR) and further impacting phytoplankton growth. Simulations with full ecosystem-absorption-heating feedback (i.e. spectrally resolved 3-dimensional radiative absorption determined by optically active
constituents in the water column) have shown that simulated temperatures can be as much as 2°C warmer at the surface, and correspondingly cooler some 10 m deeper, in the Hudson River plume. The associated changes in plume trajectory and ecosystem dynamics alter the net export of particulate matter to mid-shelf waters. Incorporating these optical properties into the 4-dimensional ocean state is a natural future step to enhance data assimilation in coastal ocean models.
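To indicate what is involved, the sketch below contrasts the standard two-band exponential decay of shortwave irradiance (Paulson and Simpson 1977) with a crude turbidity-modified alternative in which the penetrative band is attenuated more strongly inside the plume. The clear-water parameters are the commonly used Jerlov type I values; the shortened plume attenuation scale is an arbitrary illustration, and this is not the EcoSim bio-optical treatment.

import numpy as np

def shortwave_fraction(z, R=0.58, zeta1=0.35, zeta2=23.0):
    """Fraction of the surface shortwave flux remaining at depth z (z <= 0, m).

    Two-band exponential decay of Paulson and Simpson (1977); the default
    parameters are the widely used Jerlov water type I values.
    """
    return R * np.exp(z / zeta1) + (1.0 - R) * np.exp(z / zeta2)

z = -np.linspace(0.0, 10.0, 6)        # 0 to 10 m depth
clear = shortwave_fraction(z)
# Crude plume case: shorten the penetrative attenuation scale (assumed value)
turbid = shortwave_fraction(z, zeta2=5.0)

for zi, c, p in zip(z, clear, turbid):
    print(f"z = {zi:5.1f} m   clear = {c:.2f}   turbid plume = {p:.2f}")

Trapping more of the flux near the surface in the turbid case is qualitatively what produces the warmer surface and cooler subsurface temperatures reported above.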
19.5 Summary

We have described a series of model-based studies of circulation in the New York Bight region that utilize data from a sustained coastal ocean observing system complemented by extensive in situ observations from the LaTTE project. Observations are used to evaluate the performance of traditional forward simulations where the model formulation is treated as an initial and boundary value problem. Circulation on the New Jersey inner shelf, and especially within the NYB, is strongly locally driven, and direct forward simulations with ROMS are quite skilful—a result we attribute to the model being comprehensive and accurate in the suite of dynamical processes it represents and the numerical algorithms it employs, suitably configured in terms of bathymetric and coastline detail, and driven by meteorological, hydrological and tidal forcing with sufficient resolution and accuracy.

Using forward model simulations we have seen that the NYB circulation is particularly responsive to wind forcing, how buoyancy dynamics contribute to the retention of river source waters in the NYB apex through the formation of a persistent anti-cyclonic recirculation, and that the model can be used to quantify this residence time by incorporating an age tracer. Long simulations reveal the pathways by which Hudson River borne material is ultimately dispersed across the New Jersey shelf.

Moving beyond traditional forward simulations, we have illustrated how coastal models are now being increasingly integrated with the growing network of regional coastal ocean observing systems. The creation of variational complements to the ROMS nonlinear forward model (i.e. the ROMS adjoint and tangent linear models) has enabled the implementation of 4-dimensional variational data assimilation in coastal ocean analysis with an attendant improvement in forecast skill. Variational methods have further capabilities beyond data assimilation, through helping inform adaptive sampling strategies and observing system design targeted at improving predictive skill.
References

Avicola G, Huq P (2003) The role of outflow geometry in the formation of the recirculating bulge region in coastal buoyant outflows. J Mar Res 61:411–434
Bennett AF (2002) Inverse modeling of the ocean and atmosphere. Cambridge University Press, Cambridge, p 234
Bissett WP, Walsh JJ, Dieterle DA, Carder KL (1999) Carbon cycling in the upper waters of the Sargasso sea: I. Numerical simulation of differential carbon and nitrogen fluxes. Deep Sea Res Part I Oceanogr Res Pap 46:205–269 Booij N, Ris RC, Holthuijsen LH (1999) A third-generation wave model for coastal regions. Part I: model description and validation. J Geophys Res 104(C4):7649–7666 Bowers L (2004) The effect of sea surface temperature on sea breeze dynamics along the coast of New Jersey. M.S. thesis, Rutgers University, New Brunswick Broquet G, Edwards CA, Moore AM, Powell BS, Veneziani M, Doyle JD (2009) Application of 4D-variational data assimilation to the California current system. Dyn Atmos Oceans 48:69–92 Cahill B, Schofield O, Chant R, Wilkin J, Hunter E, Glenn S, Bissett P (2008) Dynamics of turbid buoyant plumes and the feedbacks on near-shore biogeochemistry and physics. Geophys Res Lett 35, L10605. doi:10.1029/2008GL033595 Castelao RM, Schofield O, Glenn S, Chant RJ, Kohut J (2008) Cross-shelf transport of fresh water on the New Jersey shelf. J Geophys Res 113, C07017. doi:10.1029/2007JC004241 Chant RJ (2001) Evolution of near-inertial waves during an upwelling event on the New Jersey inner shelf. J Phys Oceanogr 31:746–764 Chant RJ, Wilkin J, Zhang W, Choi B-J, Hunter E, Castelao R, Glenn S, Jurisa J, Schofield O, Houghton R, Kohut J, Frazer TK, Moline MA (2008) Dispersal of the Hudson River plume in the New York Bight: synthesis of observational and numerical studies during LaTTE. Oceanography 21(4):148–161 Chapman DC (1985) Numerical treatment of cross-shelf open boundaries in a barotropic ocean model. J Phys Oceanogr 15:1060–1075 Choi B-J, Wilkin JL (2007) The effect of wind on the dispersal of the Hudson River plume. J Phys Oceanogr 37:1878–1897 Courtier P, Thépaut J-N, Hollingsworth A (1994) A strategy for operational implementation of 4DVAR using an incremental approach. Q J R Meteorol Soc 120:1367–1388 Courtier P (1997) Dual formulation of four-dimensional variational assimilation. Q J R Meteorol Soc 123:2449–2461 Deleersnijder E, Campin J-M, Delhez EJM (2001) The concept of age in marine modelling: I. Theory and preliminary model results. J Mar Syst 28:229–267 Durski S, Glenn SM, Haidvogel D (2004) Vertical mixing schemes in the coastal ocean: comparison of the level 2.5 Mellor-Yamada scheme with an enhanced version of the K-profile parameterization. J Geophys Res 109, C01015. doi:10.1029/2002JC001702 Fairall CW, Bradley EF, Hare JE, Grachev AA, Edson J (2003) Bulk parameterization of air–sea fluxes: updates and verification for the COARE algorithm. J Climate 16:571–591 Flather RA (1976) A tidal model of the northwest European continental shelf. Memoires Soc R Sci Liege Ser 6(10):141–164 Fong DA, Geyer WR (2002) The alongshore transport of freshwater in a surface-trapped river plume. J Phys Oceanogr 32:957–972 Foti G (2007) The Hudson River plume: utilizing an ocean model and field observations to predict and analyze physical processes that affect the freshwater transport. M.S. Thesis, Rutgers University, New Brunswick Glenn SM, Schofield O (2003) Observing the oceans from the COOLroom: our history, experience, and opinions. 
Oceanography 16:37–52 Haidvogel D, Arango H, Budgell W, Cornuelle B, Curchitser E, Di Lorenzo E, Fennel K, Geyer WR, Hermann A, Lanerolle L, Levin J, McWilliams JC, Miller A, Moore AM, Powell TM, Shchepetkin AF, Sherwood C, Signell R, Warner JC, Wilkin J (2008) Ocean forecasting in terrain-following coordinates: formulation and skill assessment of the regional ocean modeling system. J Comput Phys 227:3595–3624 Higdon RL, de Szoeke RA (1997) Barotropic-baroclinic time splitting for ocean circulation modeling. J Comput Phys 135:31–53 Hill AE (1998) Buoyancy effects in coastal and shelf seas. In: Robinson AR, Brink KH (eds) The sea. The global coastal ocean, vol€10. Harvard University Press, London, pp€21–62
Horner-Devine AR, Fong DA, Monismith SG, Maxworthy T (2006) Laboratory experiments simulating a coastal river outflow. J Fluid Mech 555:203–232 Hunter E, Chant R, Bowers L, Glenn S, Kohut J (2007) Spatial and temporal variability of diurnal wind forcing in the coastal ocean. Geophys Res Lett 34, L03607. doi:10.1029/2006GL028945 Hunter E, Chant R, Wilkin J, Kohut J (2010) High-frequency forcing and sub-tidal response of the Hudson River plume. J Geophys Res 115, C07012. doi:10.1029/2009JC005620 Janjic ZL (2004) The NCEP WRF core. 20th Conference on Weather Analysis and Forecasting/16th Conference on Numerical Weather Prediction, Seattle. Am Meteorol Soc. http://ams.confex. com/ams/84Annual/techprogram/paper_70036.htm Kohut JT, Roarty HJ, Glenn SM (2006) Characterizing observed environmental variability with HF doppler radar surface current mappers and acoustic doppler current profilers: environmental variability in the coastal ocean. IEEE J Ocean Eng 31:876–884 Large WG, McWilliams JC, Doney SC (1994) A review and model with a nonlocal boundary layer parameterization. Rev Geophys 32:363–403 Lentz SJ (2008) Observations and a model of the mean circulation over the Middle Atlantic Bight continental shelf. J Phys Oceanogr 38:1203–1221 Mellor GL, Yamada T (1982) Development of a turbulence closure model for geophysical fluid problems. Rev Geophys Sp Phys 20:851–875 Mesinger F, DiMego G, Kalnay E, Mitchell K, Shafran P, Ebisuzaki W, Jovic D, Woollen J, Rogers E, Berbery E, Ek M, Fan Y, Grumbine R, Higgins W, Li H, Lin Y, Manikin G, Parrish D, Shi W (2006) North American regional reanalysis. B Am Meteorol Soc 87:343–360 Moline MA, Frazer TK, Chant R, Glenn S, Jacoby CA, Reinfelder JR, Yost J, Zhou M, Schofield O (2008) Biological responses in a dynamic Buoyant River plume. Oceanography 21(4):70–89 Moore AM, Arango HG, Di Lorenzo E, Miller AJ, Cornuelle BD (2009) An adjoint sensitivity analysis of the southern California Current circulation and ecosystem. J Phys Oceanogr 39:702–720 Mukai AY, Westerink JJ, Luettich RA, Mark D (2002) Eastcoast 2001, a tidal constituent database for the western North Atlantic, Gulf of Mexico and Caribbean Sea. Tech. Rep. ERDC/CHL TR-02-24, p€196 Münchow A, Chant RJ (2000) Kinematics of inner shelf motion during the summer stratified season off New Jersey. J Phys Oceanogr 30:247–268 Nof D, Pichevin T (2001) The ballooning of outflows. J Phys Oceanogr 31:3045–3058 Paulson CA, Simpson JJ (1977) Irradiance measurements in the upper ocean. J Phys Oceanogr 7:952–956 Powell TM, Lewis CVW, Curchister EN, Haidvogel DB, Hermann AJ, Dobbins EL (2006) Results from a three-dimensional, nested biological-physical model of the California Current system and comparisons with statistics from satellite imagery. J Geophys Res 111, C07018. doi:10.1029/2004JC002506 Powell BS, Arango HG, Moore AM, Di Lorenzo E, Milliff RF, Foley D (2008) 4DVAR data assimilation in the Intra-Americas sea with the regional ocean modeling system (ROMS). Ocean Model 25:173–188 Schofield O, Bosch J, Glenn SM, Kirkpatrick G, Kerfoot J, Moline MA, Oliver M, Bissett P (2007) Bio-optics in integrated ocean observing networks: potential for studying harmful algal blooms. In: Babin M, Roesler C, Cullen JJ (eds) Real time coastal observing systems for ecosystem dynamics and harmful algal blooms. UNESCO, Valencia, pp€85–108 Shchepetkin A, McWilliams J (2005) The regional oceanic modeling system (ROMS): a split-explicit, free-surface, topography-following-coordinate oceanic model. 
Ocean Model 9:347–404 Shchepetkin A, McWilliams J (2009a) Computational kernel algorithms for fine-scale, multi-process, long-term oceanic simulations. In: Temam R, Tribbia J (Guest eds) Computational methods for the ocean and the atmosphere. In: Ciarlet PG (ed) Handbook of numerical analysis, vol€14. Elsevier, Amsterdam, pp€119–182. doi:10.1016/S1570-8659(08)01202-0 Shchepetkin A, McWilliams J (2009b) Correction and commentary for ocean forecasting in terrain-following coordinates: formulation and skill assessment of the regional ocean modeling
system. J Comp Phys 228:8985–9000 (by Haidvogel et al., J Comp Phys 227:3595–3624). doi:10.1016/j.jcp.2009.09.002 Smolarkiewicz PK (1984) A fully multidimensional positive-definite advection transport algorithm with small implicit diffusion. J Comput Phys 54:325–362 Umlauf L, Burchard H (2003) A generic length-scale equation for geophysical turbulence models. J Mar Res 61:235–265 Warner J, Sherwood C, Arango H, Signell R (2005) Performance of four turbulence closure models implemented using a generic length scale method. Ocean Model 8:81–113 Warner JC, Sherwood CR, Signell RP, Harris CK, Arango HG (2008a) Development of a threedimensional, regional, coupled wave, current, and sediment-transport model. Comput Geosci 34:1284–1306. doi:10.1016/j.cageo.2008.02.012 Warner JC, Perlin N, Skyllingstad ED (2008b) Using the Model Coupling Toolkit to couple earth system models. Environ Model Softw 23:1240–1249 Wijesekera HW, Allen JS, Newberger PA (2003) Modeling study of turbulent mixing over the continental shelf: comparison of turbulent closure schemes. J Geophys Res 108(C3):3103 Yankovsky AE, Chapman DC (1997) A simple theory for the fate of buoyant coastal discharges. J Phys Oceanogr 27:1386–1401 Zhang W, Wilkin J, Chant R (2009a) Modeling the pathways and mean dynamics of river plume dispersal in New York Bight. J Phys Oceanogr 39:1167–1183. doi:10.1175/2008JPO4082.1 Zhang W, Wilkin J, Levin J, Arango H (2009b) An adjoint sensitivity study of buoyancy- and wind-driven circulation on the New Jersey inner shelf. J Phys Oceanogr 39:1652–1668. doi:10.1175/2009JPO4050.1 Zhang W, Wilkin J, Schofield O (2010a) Simulation of water age and residence time in the New York Bight. J Phys Oceanogr. doi:10.1175/2009JPO4249.1 Zhang W, Wilkin J, Arango H (2010b) Towards an integrated observation and modeling system in the New York Bight using variational methods. Part I: 4DVAR data assimilation. Ocean Model 35:119–133. doi:10.1016/j.ocemod.2010.08.003 Zhang W, Wilkin J, Levin J (2010c) Towards an integrated observation and modeling system in the New York Bight using variational methods. Part II: representer-based observing system design. Ocean Model 35:134–145. doi:10.1016/j.ocemod.2010.06.006
Chapter 20
Seasonal and Decadal Prediction

Oscar Alves, Debra Hudson, Magdalena Balmaseda and Li Shi
Abstract Dynamical seasonal prediction has grown rapidly over the last decade or so. At present, a number of operational centres issue routine seasonal forecasts produced with coupled ocean-atmosphere models. These require real-time knowledge of the state of the global ocean, since the potential for climate predictability at seasonal time scales resides mostly in information provided by the ocean initial conditions, in particular the upper thermal structure. The primary aim of the coupled model is to predict sea surface temperature variability and how this variability impacts regional climate through large-scale teleconnections. This paper reviews recent advances in dynamical seasonal prediction using coupled ocean-atmosphere models. It discusses the sources of predictability at seasonal time scales, the probabilistic nature of seasonal forecasts, the ensemble methods used to address this uncertainty, and the current levels of skill. The ocean initialisation receives special focus, with a discussion of initialisation strategies, ocean data assimilation methods, and the role of the observing system in seasonal forecast skill. Assimilation of observations into an ocean model forced by prescribed atmospheric fluxes is the most common practice for initialisation of the ocean component of a coupled model. Assimilation of ocean data reduces the uncertainty in the ocean estimation arising from the uncertainty in the forcing fluxes and from model errors. Although data assimilation also usually improves the skill of seasonal forecasts, its impact is often overshadowed by errors in the coupled models. The paper also briefly discusses decadal prediction, for which there is growing demand, particularly in the context of climate change adaptation. Although decadal prediction is still in its infancy, recent developments show promising results, highlighting the role of ocean initial conditions. The initialisation of the ocean for decadal predictions is a major challenge for the next decade.
O. Alves
Bureau of Meteorology, Centre for Australian Weather and Climate Research (CAWCR), GPO Box 1289, Melbourne, VIC 3001, Australia
e-mail: [email protected]

A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_20, © Springer Science+Business Media B.V. 2011
20.1 Introduction

Dynamical seasonal prediction has grown rapidly over the last decade or so. At present, multiple operational centres routinely issue seasonal forecasts produced with coupled ocean-atmosphere models (e.g., Fig. 20.1). The basis of dynamical seasonal prediction resides in variability driven by slow processes in the climate system, particularly the ocean. The El Nino Southern Oscillation (ENSO) is the most prominent mode of climate variability on seasonal to interannual timescales and is the major source of predictability. The success of dynamical seasonal prediction is therefore often related to the ability to initialise and forecast ENSO, as well as to capturing its teleconnections to regional climates. This paper focuses on dynamical seasonal prediction with coupled ocean-atmosphere models. Early efforts with dynamical prediction used atmosphere-only general circulation models, but today most operational centres use fully coupled ocean-atmosphere general circulation models.

The starting point for dynamical seasonal prediction is specifying the initial state of the climate system. Seasonal prediction is generally viewed as an ocean initial
Fig. 20.1 Sample forecasts of El Nino produced by international dynamical and statistical models and assembled by the IRI, shown as NINO3.4 SST anomaly (°C) plumes from April 2010. (http://iri.columbia.edu/climate/ENSO/currentinfo/SST_table.html)
Seasonal prediction is generally viewed as an ocean initial condition problem, but there are also benefits from realistic atmosphere (e.g., Hudson et al. 2010) and land (e.g., Koster et al. 2010) initial conditions. Data assimilation can improve forecasts by correcting the model state and/or variability, but it can also create problems such as initialisation shock. Recent studies examining the impact of ocean and atmosphere initialisation on seasonal forecast skill concluded that the most skilful initialisation scheme is the one that makes the most use of the observed data, even though initial imbalances in the coupled state are generated (Balmaseda and Anderson 2009; Hudson et al. 2010). To date, initialisation of the ocean and atmosphere is done separately, although there are emerging attempts at approaching initialisation as a coupled ocean-atmosphere problem, in which the component models are well balanced. This is not trivial, particularly given the different time scales on which the atmosphere and ocean operate.

Seasonal prediction is inherently uncertain and needs to be addressed in a probabilistic framework. Dynamical seasonal prediction aims to address these uncertainties and the chaotic nature of the atmosphere by producing an ensemble of forecasts. Perturbations to the initial state or model formulation generate forecasts that diverge, producing a range of possible future outcomes from which probabilistic forecasts can be produced. Ideally, generation of the ensemble should take into account uncertainties in the initial conditions (e.g., Vialard et al. 2005), as well as uncertainties associated with imperfect models (e.g., Murphy et al. 2004; Berner et al. 2008). New ocean assimilation schemes represent the uncertainty in the ocean state by producing an ensemble of ocean initial conditions (Balmaseda et al. 2008; Yin et al. 2011).

Coupled models are far from perfect and drift with forecast lead time towards the biased coupled model climate. A common approach is to remove the drift a posteriori (e.g., Stockdale 1997). A set of retrospective forecasts (or hindcasts) is produced to provide an estimate of how the model climatology changes with lead time, and this is then used for a posteriori calibration of the forecast results (a simple sketch of this correction is given below). Ideally the hindcasts should span as long a period as possible, but in practice most centres only produce hindcasts over a 15–30 year period. The hindcasts are also needed for skill assessment of the seasonal forecast system. Implicit in the production of a set of retrospective forecasts is the need for ocean initial conditions spanning the chosen hindcast period, equivalent to an ocean "reanalysis" of the historical data stream. The interannual variability represented by the ocean reanalysis (particularly due to changes in the ocean observing system) will have an impact on both forecast calibration and the assessment of skill.

This paper provides a review of dynamical seasonal prediction, with a focus on the initialisation of seasonal forecasts. Section 20.2 describes the primary drivers of seasonal prediction skill, Sect. 20.3 summarises current levels of skill and Sect. 20.4 provides some background behind ensemble prediction. Sections 20.5 and 20.6 focus on data assimilation and initialisation and in particular the role of ocean observations. Section 20.7 provides an example of seasonal prediction in the Australian context. Section 20.8 introduces decadal prediction, which relies heavily on the ocean initialisation. Finally, a summary is provided in Sect. 20.9.
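To make the drift correction concrete, the sketch below estimates a lead-time-dependent model climatology from a set of hindcasts started in the same calendar month and subtracts it from a real-time forecast. It is a minimal illustration under assumed array shapes and synthetic numbers, not the calibration code of any particular operational system.

```python
import numpy as np

def drift_corrected_anomalies(hindcasts, forecast):
    """Remove model drift a posteriori using a hindcast-based climatology.

    hindcasts : array (n_years, n_members, n_leads) of a forecast variable
                (e.g. NINO3.4 SST, degC) from retrospective forecasts started
                in the same calendar month as the real-time forecast.
    forecast  : array (n_members, n_leads) for the real-time case.
    Returns the forecast as anomalies relative to the model's own
    lead-time-dependent climatology, which is what absorbs the drift.
    """
    model_clim = hindcasts.mean(axis=(0, 1))   # climatology per lead time
    return forecast - model_clim               # broadcast over ensemble members

# Illustrative use: 22 hindcast years, 10 members, 9 monthly lead times,
# with a synthetic 1 degC cooling drift imposed on both hindcasts and forecast.
rng = np.random.default_rng(0)
drift = np.linspace(0.0, -1.0, 9)
hindcasts = 26.5 + drift + 0.5 * rng.standard_normal((22, 10, 9))
forecast = 27.3 + drift + 0.5 * rng.standard_normal((10, 9))
print(drift_corrected_anomalies(hindcasts, forecast).mean(axis=0))
```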
Four recent review papers provide additional, more detailed reading on the topic of this chapter:
one documenting the current status of seasonal prediction and our understanding of seasonal to interannual climate variability (produced for the Copenhagen World Climate Conference 3; Stockdale et al. 2010), two focussing on the initialisation of seasonal and decadal forecasts and the role of ocean observations (Balmaseda et al. 2010a, b) and one reviewing the status of decadal prediction (Hurrell et al. 2010).
20.2 Predictability: What is the Source of Seasonal Prediction Skill?

Predictability is a feature of the climate system and cannot be changed or improved by forecast methodologies: it represents the theoretical upper limit of our prediction skill. This maximum level of predictability has not yet been achieved in seasonal forecasting: forecast skill is limited by model error, imperfect initialisation and the fact that not all the interactions in the climate system are currently fully resolved, i.e. there may be sources of predictability that are unaccounted for (Kirtman and Pirani 2009). An understanding of climate variability and its key drivers offers insight into the processes providing predictability, as well as into how model shortcomings may be limiting forecast skill.

Climate variability occurs on all timescales. Atmospheric processes tend to vary over short timescales (less than a few days) and are a source of unpredictable noise for seasonal prediction. Processes operating over longer timescales, primarily those associated with the ocean, form the basis of seasonal predictability. Apart from the ocean, other potential sources of seasonal predictability include: the longer timescales of variability of the coupled ocean-atmosphere system, sea-ice, soil conditions, snow cover and the state of the stratosphere (Stockdale et al. 2010).

ENSO is the most prominent mode of climate variability on seasonal to interannual timescales and is the major source of predictability. Although mainly associated with coupled ocean-atmosphere variations in the tropical Pacific (Walker 1923, 1924; Bjerknes 1969), the effects of ENSO can be felt globally, with teleconnections to regional temperature and precipitation in many countries (e.g., Rasmusson and Carpenter 1983; Ropelewski and Halpert 1987). For example, El Nino events are typically associated with above average rainfall in Peru and Ecuador, northern Argentina, East Africa and California, and drier than normal conditions over Australia, southern Africa and parts of the Amazon basin. Figure 20.2 shows the sea surface temperature (SST) anomaly during December 1997, near the peak of the 1997/1998 El Nino. This was the largest El Nino of the century, with SST anomalies peaking over 4°C in the eastern Pacific. For reviews of our understanding of ENSO and the mechanisms involved, see, for example, Neelin et al. (1998), Philander (2004) and Chang et al. (2006). The first successful prediction of ENSO with a simple coupled ocean-atmosphere dynamical model was produced by Zebiak and Cane (1987). Since then, increasingly complex and comprehensive coupled ocean-atmosphere models have been developed, and dynamical prediction of ENSO is now commonplace in major operational centres.
Fig. 20.2 Sea surface temperature anomalies during December 1997
Low-frequency coupled ocean-atmosphere variations in the Indian and Atlantic Oceans, although less dominant than those in the Pacific, can also drive temperature and precipitation anomalies on seasonal timescales across the globe (e.g., Goddard and Graham 1999; Folland et al. 2001; Rodwell and Folland 2002; Saji and Yamagata 2003; Kushnir et al. 2006; Ummenhofer et al. 2009). The Indian Ocean Dipole (IOD) has been identified as a low frequency coupled mode of variability in the tropical Indian Ocean (Saji et al. 1999; Webster et al. 1999). In Fig. 20.2 an IOD event can be seen in the Indian Ocean, with negative SST anomalies in the east off the Java-Sumatra coast and positive anomalies in the west. IOD events, like the one in Fig. 20.2, are often triggered by easterly wind anomalies arising from the atmospheric response to the development of El Nino. The IOD is much less predictable (practically and theoretically) than ENSO (e.g., Luo et al. 2007; Wajsowicz 2007; Zhao and Hendon 2009), largely due to weaker surface-subsurface ocean coupling, strong interactions with the Australian-Asian monsoon and intraseasonal oscillations causing chaotic forcing in both the ocean and atmosphere (Zhao and Hendon 2009). Although the IOD is a measure of the difference between the western and eastern parts of the equatorial Indian Ocean, these two components are not always related and the skill for each component can differ. The limited skill for the IOD is mainly due to a lack of skill in predicting the SST in its eastern component.

Other modes of atmospheric variability (not necessarily related to oceanic forcing) that may provide predictive skill on seasonal timescales include the Northern Annular and Southern Annular modes (NAM and SAM), the Pacific North American (PNA) pattern and the North Atlantic Oscillation (NAO) (Stockdale et al. 2010).

The land surface is a potential source of seasonal predictability, primarily associated with soil moisture memory in the earth-atmosphere system (e.g., Fennessy and Shukla 1999; Koster and Suarez 2003; Seneviratne et al. 2006; Koster et al. 2004, 2010), although anomalous snow cover/amount may also be important (e.g., Fletcher et al. 2009). The coordinated approach of the Global Land-Atmosphere Coupling Experiment (GLACE; Koster et al. 2006, 2010), using a variety of state-of-the-art seasonal forecasting systems, has significantly improved our understanding of the role of land surface processes in seasonal prediction.
There have also been suggestions that the stratosphere could make a contribution to seasonal prediction skill in the troposphere, particularly in the Northern Hemisphere (e.g., Baldwin and Dunkerton 2001; Ineson and Scaife 2008; Bell et al. 2009; Cagnazzo and Manzini 2009). However, most contemporary seasonal prediction models have a poorly resolved stratosphere and do not give a realistic representation of stratospheric circulation (Maycock et al. 2009). A recent study by Marshall and Scaife (2009) suggests that improving the resolution of the Quasi-Biennial Oscillation (QBO), a dominant mode of variability in the tropical stratosphere, could improve seasonal prediction of QBO-induced surface anomalies over Europe.
20.3 Forecast Skill

As mentioned in Sect. 20.2, ENSO is the most predictable large-scale phenomenon on seasonal to interannual timescales, and is the major source of predictability. Successful predictions with a coupled seasonal forecast model are, therefore, often related to a model's ability to reproduce the slow coupled dynamics of ENSO and accurately forecast its amplitude, spatial pattern and detailed temporal evolution (Wang et al. 2008a). The skill of forecasting ENSO varies depending on the season, as well as on the phase and intensity of ENSO. For example, there is usually greater skill in predicting ENSO events compared to neutral conditions, and in predicting the growth phases of warm and cold events compared to the corresponding decaying phases (e.g., Jin et al. 2008). In terms of season, many seasonal forecast systems experience a decline in skill during the boreal spring, often referred to as the "spring predictability barrier". At this time of year, SST anomalies are particularly variable and, although dynamical forecast models may have reduced skill, their advantage over persistence forecasts is at a maximum (e.g., van Oldenborgh et al. 2005; Jin et al. 2008; Wang et al. 2008a).

Large multi-model projects, such as DEMETER (Palmer et al. 2004), ENSEMBLES (Weisheimer et al. 2009) and APCC/CliPAS (Wang et al. 2008a), have provided a basis for intercomparing the skill and errors from coupled models, benchmarking seasonal prediction skill and assessing progress. Weisheimer et al. (2009) report that results from the European ENSEMBLES project (using 5 European coupled models) have shown a significant reduction in the systematic SST errors (SST drift over the Pacific as the forecast progresses) compared to the previous generation project, DEMETER. For the NINO3 region (5°S–5°N, 150°W–90°W) the SST drift in DEMETER varied between +2°C and −7°C for up to 6 months lead, whereas the drift from the ENSEMBLES models was less than ±1.5°C (Weisheimer et al. 2009). They conclude that since DEMETER, the coupled models have improved significantly in terms of their physical parameterisations, resolution and initialisation. They also show that although probabilistic skill scores suggested increases in SST prediction skill in the 4–6 month forecast range in the ENSEMBLES multi-model ensemble (MME) compared to the DEMETER MME, the increases were not statistically significant, suggesting that substantially better models (perhaps with a higher resolution than
available now) are required to improve upon the current skill of forecasting tropical Pacific SSTs.

As an example of current skill levels, the anomaly correlation skill in predicting NINO3.4 SST anomalies (an area average over 5°N–5°S, 170°–120°W) from an ensemble of 10 coupled seasonal forecast models (for hindcasts performed over 1980–2001) is 0.86 after 6 months of the forecast (Jin et al. 2008). This level of skill from the MME is greater than that from any single model, but at this lead time all models have skill greater than persistence and many of the models have anomaly correlation skills exceeding 0.8 (Jin et al. 2008).

Skill in predicting Indian Ocean SST anomalies is lower than over the Pacific. This is clear from Fig. 20.3, which shows the anomaly correlation skill of predicting SST anomalies at 6 months lead time from the POAMA (Predictive Ocean Atmosphere Model for Australia) seasonal forecast model and is very typical of most seasonal forecast models. Prediction of the IOD is currently limited to about one season, with a strong boreal winter-spring predictability barrier (partly because the IOD is not well defined prior to June) (e.g., Luo et al. 2007; Wajsowicz 2007; Zhao and Hendon 2009). In terms of tropical Atlantic SST anomalies, current seasonal prediction models show very little skill beyond one or two months of the forecast and skill is often no better than persistence (e.g., Stockdale et al. 2006, 2011).

The forecast skill of regional surface air temperature and precipitation anomalies is strongly dependent on season and region. Skill is highest in the tropics and decreases towards middle and high latitudes, and is usually higher for temperature than for precipitation (e.g., Wang et al. 2008a; Doblas-Reyes et al. 2009). At 1-month lead there is very little skill in predicting seasonal mean temperature and precipitation anomalies over land in extra-tropical regions (e.g., Wang et al. 2008a; Doblas-Reyes et al. 2009). Those extra-tropical land regions that do exhibit some skill (e.g., southern Africa and the southern United States for precipitation in DJF) usually owe it to the models capturing the atmospheric teleconnections from ENSO. Consequently, model bias and drift in the simulation of ENSO may degrade global teleconnections to regional rainfall and temperature. For example, most models exhibit a cold bias in the central equatorial Pacific and a westward drift of maximum SST variability away from the eastern Pacific with increasing lead time (e.g., Jin et al. 2008). In the POAMA seasonal forecast model, after about a season, these biases hinder the model's ability to discern between different types of ENSO events (e.g., classical east Pacific versus central Pacific events) and the teleconnection between ENSO and Australian climate is adversely affected (Hendon et al. 2009; Lim et al. 2009).
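As a concrete illustration of the quantities quoted in this section, the short sketch below computes a NINO3.4 index (area-average SST anomaly over 5°N–5°S, 170°–120°W) from a gridded anomaly field and the centred anomaly correlation between forecast and observed index values. Grid conventions, array layouts and names are assumptions made only for this example.

```python
import numpy as np

def nino34_index(sst_anom, lat, lon):
    """Area-weighted SST anomaly average over the NINO3.4 box (5S-5N, 170W-120W).

    sst_anom : array (..., nlat, nlon) of SST anomalies (degC)
    lat, lon : 1-D coordinates in degrees; lon assumed to run 0-360 east,
               so 170W-120W corresponds to 190-240.
    """
    in_box = (np.abs(lat) <= 5.0)[:, None] & ((lon >= 190.0) & (lon <= 240.0))[None, :]
    weights = np.where(in_box, np.cos(np.deg2rad(lat))[:, None], 0.0)
    return (sst_anom * weights).sum(axis=(-2, -1)) / weights.sum()

def anomaly_correlation(forecast, observed):
    """Centred anomaly correlation between two 1-D series of index anomalies."""
    f = forecast - forecast.mean()
    o = observed - observed.mean()
    return float((f * o).sum() / np.sqrt((f ** 2).sum() * (o ** 2).sum()))
```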
20.4 Ensemble Prediction: Representing Uncertainty

There is considerable uncertainty inherent in seasonal predictions, some natural and some due to deficiencies in the forecasting systems. Figure 20.4 shows 90 forecasts of the onset of the 1997/1998 El Nino, each produced using the POAMA-1 model (Alves et al. 2003).
Fig. 20.3 SST anomaly correlation at 6 month lead time from POAMA-1.5 forecasts (left) and persistence (right). (From Wang et al. 2008b)
Fig. 20.4 Forecasts of the NINO3.4 SST anomaly during the onset of the 1997/1998 El Nino. A 90-member ensemble, where each ensemble member is generated by applying a 0.001°C random perturbation to the initial SST. (From Shi et al. 2009)
The ensemble was generated by making 0.001°C changes to the initial SST. These changes are physically insignificant, but because the climate system, in particular the atmosphere, is chaotic, the ensemble members can spread rapidly with time. The plot shows that while all of the forecasts were for El Nino conditions, they range from a very weak El Nino, with NINO3.4 SST anomalies of around 0.5°C, to very strong El Nino conditions, with NINO3.4 anomalies greater than 2.5°C by August. The spread in the forecasts indicates the stochastic component of the climate model, i.e. natural uncertainty and therefore the limits to predictability. In a seasonal forecast system the ensemble spread should be commensurate with the uncertainty arising from natural stochastic processes, but this is not always the case due to errors in the forecast system.

For practical reasons, the uncertainty is classified into that arising from an imperfect initial state (initial condition uncertainty) and that arising from imperfect models (model data sampling uncertainty, model parametric uncertainty, model structural uncertainty). In dynamical seasonal prediction, ensembles are used to quantify the uncertainty (e.g., Stephenson 2008; Doblas-Reyes et al. 2009). Uncertainties in the initial conditions are taken into account by generating an ensemble from slightly different atmospheric and/or ocean
analyses, where the differences are intended to reflect the uncertainty in these conditions (e.g., Vialard et al. 2005). Uncertainties in model formulation have been addressed using ensembles based on stochastic physics (Jin et al. 2007; Berner et al. 2008), perturbed parameters (Murphy et al. 2004; Stainforth et al. 2005; Collins et al. 2006) and multi-model approaches (Palmer et al. 2004; Weisheimer et al. 2009). Doblas-Reyes et al. (2009) assessed the relative merits of these three approaches using sets of seasonal and decadal hindcasts (done under the auspices of the European ENSEMBLES project; see van der Linden and Mitchell 2009). In general, they concluded that the three methods had comparable overall skill (the multi-model was slightly better for lead times up to 4 months, and the perturbed physics slightly better at longer leads). The perturbed-parameter and stochastic-physics methods are promising ways of sampling model uncertainty within a single-model system.

Probabilistic forecasts are produced from dynamical seasonal forecasting systems by using the aforementioned ensemble of forecasts. The forecasts follow different evolutions because they are produced from perturbed initial conditions or model formulations. After the first week, the ensemble spread is large and the forecast needs to be delivered and assessed in a probabilistic fashion. Good reviews of probability forecasting in a seasonal context, including basic concepts, recalibration and verification, are provided by Stephenson (2008) and Mason and Stephenson (2008). The distribution of the ensemble members should indicate the uncertainty in the forecast: if the forecasts from the ensemble members differ widely, the inferred probability distribution is also wide and the forecast is uncertain, whereas if the ensemble members are in close agreement it might suggest less uncertainty. However, in practice, forecasts from dynamical seasonal forecast models tend to be overconfident, i.e. their spread is too narrow to match the range of observed outcomes, and there is often little relationship between ensemble spread and the error in the forecast. The prime reason for this is believed to be model error (Vialard et al. 2005; Stockdale et al. 2010).

Multi-model approaches, where ensembles from different state-of-the-art models are combined, thereby implicitly averaging out some of the model errors, generally produce more skilful forecasts than the results from a single model (Palmer et al. 2004; Wang et al. 2008a; Weisheimer et al. 2009). A counter-example illustrating the limitation of the multi-model approach is provided by Balmaseda et al. (2010b), showing that for a given SST index the skill of a single model can be superior to that of the multi-model product. But this is not yet the case for useful atmospheric variables such as precipitation, where reliable seasonal forecasts benefit from the multi-model approach. Multi-model forecast systems are becoming increasingly common in operational seasonal forecasting. For example, the APEC Climate Center (APCC) produces real-time operational climate predictions based on a well-validated multi-model multi-institute ensemble system (http://www.apcc21.org) and ECMWF has collaborated with France and the United Kingdom to produce an operational multi-model seasonal forecast system known as EUROSIP (http://www.ecmwf.int/products/forecasts/seasonal/documentation/eurosip/).
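The two steps described in this section, generating an ensemble and turning it into probabilities, can be sketched in a few lines. The example below perturbs an initial SST field with O(0.001°C) noise, as in the POAMA example of Fig. 20.4, and converts an ensemble of NINO3.4 forecasts into a probability of exceeding a threshold; the stand-in numbers for the model-evolved ensemble are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def perturbed_initial_states(sst0, n_members=90, amplitude=0.001):
    """Ensemble of initial SST fields made by adding tiny random perturbations
    (degC): physically insignificant, but the chaotic coupled system spreads
    the members apart over the forecast."""
    noise = rng.standard_normal((n_members,) + sst0.shape)
    return sst0[None, ...] + amplitude * noise

def exceedance_probability(ensemble_values, threshold=0.5):
    """Simple probabilistic forecast: fraction of members above `threshold`,
    e.g. P(NINO3.4 anomaly > 0.5 degC) at a given lead time."""
    return float(np.mean(ensemble_values > threshold))

# Toy illustration (a real system would evolve each member with the coupled model):
initial_sst = np.zeros((180, 360))                 # placeholder anomaly field
members = perturbed_initial_states(initial_sst)
nino34_at_lead = 1.0 + 0.8 * rng.standard_normal(len(members))  # stand-in forecasts
print(exceedance_probability(nino34_at_lead))
```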
20.5 Data Assimilation and Initialization

Dynamical seasonal prediction is essentially an initial value problem, where predictive skill comes from information contained in the initial states of the coupled system: ocean, atmosphere, land and sea-ice. Most of the skill comes from the initial conditions of the upper ocean, particularly those associated with large scale patterns of variability such as ENSO and the IOD. Assimilation of ocean observations for ocean initialisation in seasonal forecasts has become a common practice, with several institutions around the world producing routine ocean re-analyses to initialise their operational seasonal forecasts. Table 20.1, from Balmaseda et al. (2009), provides a summary of the ocean analyses used for initialisation of operational or quasi-operational seasonal forecast systems. In all these systems, the initialisation of the ocean and atmosphere is done separately, aiming at generating the best analyses of the atmosphere and ocean through comprehensive data assimilation schemes.

Table 20.1 Summary of different ocean assimilation systems used in the initialisation of operational and quasi-operational seasonal forecasts. (Based on Balmaseda et al. 2009)
MRI-JMA: Multivariate three-dimensional variational analysis (3D-VAR); Usui et al. 2006. http://ds.data.jma.go.jp/tcc/tcc/products/elnino/index.html
ORA-S3 (ECMWF System 3): Multivariate Optimum Interpolation (OI); Balmaseda et al. 2008. http://www.ecmwf.int/products/forecasts/d/charts/ocean/real_time/
POAMA-PEODAS (CAWCR, Melbourne): Multivariate ensemble OI; Yin et al. 2011. http://poama.bom.gov.au/research/assim/index.htm
GODAS (NCEP): 3D-VAR; Behringer 2007. http://www.cpc.ncep.noaa.gov/products/GODAS/
MERCATOR (Meteo France): Multivariate reduced-order Kalman filter; Pham et al. 1998. http://bulletin.mercator-ocean.fr/html/welcome_en.jsp
MO (MetOffice): Multivariate OI; Martin et al. 2007. http://www.metoffice.gov.uk/research/seasonal/
GMAO ODAS-1: OI and Ensemble Kalman Filter; Keppenne et al. 2008. http://gmao.gsfc.nasa.gov/research/oceanassim/ODA_vis.php (seasonal forecasts: http://gmao.gsfc.nasa.gov/cgi-bin/products/climateforecasts/index.cgi)

The simplest way to initialise the tropical ocean is to run an ocean model forced with atmospheric fluxes and with a strong relaxation of the model SST to observations. Inter-annual variability in the tropical ocean is to a large extent driven by variability in the surface wind field. This technique would be satisfactory if errors in the forcing fields and ocean model were small. However, surface flux products and ocean models are both known to have significant errors. Assimilation of ocean observations is then used to constrain the estimation of the ocean state.

In ocean assimilation, ocean sub-surface observations are ingested into an ocean model forced by prescribed atmospheric fluxes. The emphasis is on the initialisation of the upper ocean thermal structure, particularly in the tropics, where SST anomalies have a strong influence on the atmospheric circulation. Most of the initialisation systems use observed subsurface temperature (from XBTs, TAO/TRITON/PIRATA and Argo). Some of the more recent systems also use salinity (mainly from
Argo), and altimeter-derived sea-level anomalies. The latter usually need the prescription of an external mean dynamic topography, which can be a problem, and is usually taken from a model integration rather than from observations. In the longer term it is hoped that it can be derived indirectly from gravity missions such as GRACE and GOCE.

Several studies have demonstrated the benefit of assimilating ocean data for the prediction of ENSO (e.g., Alves et al. 2004; Dommenget et al. 2004; Cazes-Boezio et al. 2008; Stockdale et al. 2011). The benefits are less clear in other areas, such as the equatorial Atlantic, where model errors are large. Balmaseda and Anderson (2009) evaluated three different initialisation strategies, each of which used different observational information. They showed that the ocean initialisation has a significant impact on the mean state, variability and skill of coupled forecasts at the seasonal time scale. They also showed that, using their model, the initialisation strategy that makes the most comprehensive use of the available observations leads to the best skill.

Since ocean assimilation is important for seasonal prediction, an interesting question is: how accurate are ocean analyses from ocean assimilation systems? Figure 20.5 shows the composite El Nino evolution of heat content along the equator in the Pacific and Indian Oceans. The composite plots consist of 30 months spanning each El Nino event, from −9 months (the year prior to the warm event), through the 12 months of the warm event, to +9 months (the year after the warm event); these are denoted Year −1, Year 0 and Year +1 respectively. El Niño/La Niña events are selected where the monthly Niño3 SST anomaly reaches or exceeds ±0.5°C for at least 5 consecutive months over the period 1982 to 2006 (a simple sketch of this selection rule is given below). Composites from two state-of-the-art international analyses are shown to illustrate how they differ and to give an indication of the level of error in the analyses. The assimilation systems used to generate each analysis are quite different, and so are the forcing fields used to drive the ocean model during the re-analysis phase.

The composite El Nino evolution shows El Nino peaking at the end of the year, with maximum heat content anomalies in the eastern Pacific. At the same time there are heat content anomalies in the western Pacific, forming a strong gradient between the east and west Pacific, which is driven by anomalous westerly winds (not shown). Normally during the peak of El Nino there are also easterly winds in the Indian Ocean, which lead to an east-west pattern that is the reverse of the pattern in the Pacific. The composites also show the evolution of positive heat content anomalies from the western Pacific at the beginning of the year in which El Nino develops, towards the eastern Pacific through the action of equatorial Kelvin waves. There is considerable agreement between the two re-analyses, likely due to a reasonable observing network, particularly in the Pacific with the TOGA-TAO array and, in this decade, with Argo.

The same is not true of salt content. Figure 20.6 compares the evolution of salt content along the equator for the same El Nino composite. One re-analysis shows significant salt anomalies throughout the equatorial Pacific during El Nino, while the other shows weaker anomalies.
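The event-selection rule used for these composites (monthly Niño3 SST anomaly at or beyond ±0.5°C for at least five consecutive months) can be coded directly; the sketch below is a plain illustration of that rule, with variable names chosen for this example.

```python
import numpy as np

def enso_event_months(nino3_anom, threshold=0.5, min_run=5):
    """Flag months belonging to El Nino (+1) or La Nina (-1) events, defined as
    runs of at least `min_run` consecutive months in which the monthly Nino3
    SST anomaly reaches or exceeds +/- `threshold` degC.

    nino3_anom : 1-D array of monthly Nino3 SST anomalies (degC).
    Returns an integer array of the same length (0 = no event).
    """
    flags = np.zeros(nino3_anom.shape, dtype=int)
    for sign in (+1, -1):
        exceed = sign * nino3_anom >= threshold
        run_start = None
        # append a sentinel False so a run reaching the end of the record is closed
        for i, hit in enumerate(np.append(exceed, False)):
            if hit and run_start is None:
                run_start = i
            elif not hit and run_start is not None:
                if i - run_start >= min_run:
                    flags[run_start:i] = sign
                run_start = None
    return flags
```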
Fig. 20.5 Evolution of a composite El Nino in two different state-of-the-art ocean re-analyses (upper-300 m heat content anomaly, T300, averaged over 5°S–5°N, from the GODAS and PEODAS analyses). Each panel shows the evolution of heat content anomalies along the equator for a composite El Nino, covering the period from April of the year prior to El Nino developing (Year −1) to September of the year after El Nino develops (Year +1). The same El Nino events were included in both composites
Fig. 20.6 As for Fig. 20.5 except for salt content (S300) of the upper 300 m
For example, the first re-analysis shows strong freshening in the central/west Pacific at the peak of El Nino of just over 0.1 ppt, presumably due to eastward advection of fresh water associated with the anomalous westerlies. However, the second re-analysis does not show such strong anomalies, generally less than 0.04 ppt. This clearly indicates that, at least at present, there are significant differences in how state-of-the-art ocean re-analyses represent the interannual variability of salinity. It has been shown (Balmaseda and Weaver 2006) that in the absence of salinity data, the assimilation of temperature observations can increase the uncertainty in the salinity field. The salinity field can influence seasonal forecasts through the barrier layer, which acts as a reservoir of warm water (above 28°C) and can be instrumental in the development of El Niño when propagated eastward by westerly winds (Fujii et al. 2011). Interestingly, both re-analyses show similar salt content patterns in the Indian Ocean. This is probably due to the lack of salinity and temperature data in the Indian Ocean, at least before Argo. Without much temperature and salinity data, the re-analyses are simply ocean simulations driven by surface forcing, which is likely to lead to similar patterns.

There are three main ways of evaluating ocean analyses produced by data assimilation systems: (1) how well the analysis fits the assimilated observations, (2) how well the analysis fits independent observations and (3) whether the analysis leads to improved forecasts. Way (3) may not be a reliable method because, if the models have significant errors, a better initial state could potentially lead to a worse forecast. Way (1) is also not entirely satisfactory since it simply reflects how well the analysis fits the observations, which is mostly a function of the background and observation error variances. Way (2) is the most desirable, but it can be difficult since usually all temperature and salinity observations are used in the assimilation. To date no assimilation system utilises ocean current data. This is one source of independent data, and Fig. 20.7 illustrates the use of ocean current data to evaluate different re-analyses.

Figure 20.7 shows the correlation between re-analyses and pseudo-observed ocean surface currents (the currents are derived from altimeter data: OSCAR; Bonjean and Lagerloef 2002). Three re-analyses are assessed against the OSCAR current data. Figure 20.7a uses the PEODAS re-analysis (Yin et al. 2011), which is representative of a current generation ocean re-analysis. It makes dynamically balanced corrections to the currents based on the temperature and salinity corrections; the current corrections are based on the cross-covariances derived from a time-evolving ensemble (see Yin et al. 2011 for more details). Figure 20.7b shows the correlation from a control re-analysis, i.e. the same as PEODAS except that no observations are assimilated. This is essentially an ocean model forced with reanalysis surface fluxes and will do a reasonable job of representing the inter-annual variability, at least as far as it is represented in the forcing fields. Figure 20.7c uses a re-analysis from an older generation ocean assimilation system, in this case from the POAMA-1 seasonal prediction system (Alves et al. 2003). Typical of this generation, only temperature observations (and not salinity) are assimilated. However, corrections to currents are made based on the temperature corrections by assuming geostrophic balance, as in Burgers et al. (2002). In all three re-analyses no altimeter data are assimilated.
The figures show that the PEODAS re-analysis produces the best correlation with the observed data in both the tropical Pacific and Indian Oceans.
Fig. 20.7 Correlations between the zonal surface velocity from OSCAR and (a) PEODAS, (b) the control, and (c) POAMA-1. Note the non-linear correlation scale. (From Yin et al. 2011)
Interestingly, the older generation POAMA-1 system produces the worst comparison with the observations, even worse than the control, which uses no data. This is likely for two reasons. Firstly, salinity data are not assimilated in POAMA-1, which can lead to incorrect density profiles since density corrections are based only on temperature, which in turn can lead to wrong current increments when using the geostrophic relation. Secondly, the geostrophic relation may not be appropriate, especially for the surface current, which has a significant Ekman component. While the control re-analysis does not use any observations, it does maintain a surface current that is in dynamical balance with the surface forcing and the pressure fields. These results illustrate the progress over the last decade that has led to the current state of the art in ocean data assimilation.

Ensemble-based data assimilation schemes, such as Ensemble Kalman Filters, provide an ensemble of analyses. The spread of the ensemble members represents the uncertainty in the estimated ocean state, and the standard deviation of the ensemble about the ensemble mean can be considered a measure of the analysis error. Ensemble spread from the PEODAS ocean assimilation scheme (Yin et al. 2011) is shown in Fig. 20.8. The highest spread in SST (Fig. 20.8a) occurs in the eastern equatorial Pacific and along the western boundary currents, as one might expect, as these are the regions of highest variability. The highest spread in surface salinity (Fig. 20.8b) occurs in regions of highest rainfall, such as along the Inter-Tropical Convergence Zone, the South Pacific Convergence Zone and the high rainfall regions of the West Pacific warm pool. Figure 20.8c shows the temperature ensemble spread at depth along the equator. Maximum spread occurs along the thermocline, the region of maximum temperature variability. Maximum salinity spread (Fig. 20.8d) occurs at the surface.
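A heavily simplified sketch of the kind of ensemble-based update described above is given below: cross-covariances estimated from an ensemble of ocean states spread a temperature observation's influence onto other variables (salinity, currents) in a dynamically consistent way. This is the generic ensemble optimal interpolation formula with a small, dense state vector for clarity; it is not the PEODAS code, and all names and shapes are illustrative.

```python
import numpy as np

def enoi_update(background, ensemble, H, obs, obs_err_var):
    """One ensemble-OI analysis step, x_a = x_b + K (y - H x_b).

    background  : (n,) background state vector (e.g. stacked T, S, u, v values)
    ensemble    : (n, m) ensemble of states used to estimate covariances
    H           : (p, n) observation operator (selection/interpolation)
    obs         : (p,) observed values (e.g. subsurface temperatures)
    obs_err_var : (p,) observation error variances
    """
    m = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # ensemble anomalies (n, m)
    B = A @ A.T / (m - 1)                                  # background covariance (n, n)
    R = np.diag(obs_err_var)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)           # gain matrix (n, p)
    return background + K @ (obs - H @ background)

# Because B contains temperature-salinity and temperature-current cross-covariances,
# assimilating only temperature still increments salinity and velocity, which is the
# multivariate, balanced behaviour discussed above (real systems avoid forming the
# full n x n covariance matrix explicitly).
```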
20.6 The Impact of Ocean Observations

The ocean observing system has undergone major changes over the last couple of decades. In the early 1990s the TOGA-TAO array in the tropical Pacific was introduced, allowing the heat content of the equatorial upper ocean to be monitored on a daily basis. Also in the early 1990s, sea level measurements from satellite altimeters became routine, although not all operational ocean data assimilation systems ingest altimeter data. During the 2000s Argo floats were introduced, and this was perhaps the biggest revolution in ocean observations for climate. Large areas of the ocean that were previously unobserved were now covered with autonomous Argo floats.

Figure 20.9a shows the temperature observation density in the Indian Ocean before Argo. Observations were mainly taken along the main shipping lanes as part of the Ship of Opportunity Program (SOOP), and large gaps remained throughout the Indian Ocean. During the Argo period (Fig. 20.9c) the temperature distribution changed dramatically, with almost every grid square containing at least one observation. Perhaps the biggest impact of Argo is that it also measures salinity. For the first time there were enough salinity profiles to perform assimilation of salinity data.
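Counting profiles per 1°×1° square per year, as in Fig. 20.9, amounts to a two-dimensional histogram of profile positions. A minimal sketch under assumed input arrays (profile latitudes, longitudes in 0–360°E and calendar years) follows.

```python
import numpy as np

def obs_density_per_year(lats, lons, years):
    """Number of profiles per 1x1 degree box per year.

    lats, lons, years : 1-D arrays, one entry per profile (lon in 0-360 deg east).
    Returns an array of shape (180, 360): counts divided by the number of
    calendar years spanned by the input.
    """
    lat_edges = np.arange(-90.0, 91.0, 1.0)
    lon_edges = np.arange(0.0, 361.0, 1.0)
    counts, _, _ = np.histogram2d(lats, lons, bins=[lat_edges, lon_edges])
    n_years = int(years.max()) - int(years.min()) + 1
    return counts / n_years
```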
Fig. 20.8 Spread of the ensemble (before assimilation) over the re-analysis period showing fields of (a) SST (°C), (b) temperature section along the equator (°C), (c) sea surface salinity (psu) and (d) salinity section along the equator (psu). Ensemble spread is calculated relative to a central analysis. (From Yin et al. 2011; see that paper for full details)
Fig. 20.9 The density of ocean sub-surface observations per 1×1 degree square per year: (a) and (c) temperature; (b) and (d) salinity. (a) and (b) are pre-Argo; (c) and (d) are during the Argo period
Fig. 20.10 The impact of TAO/TRITON and Argo data on seasonal forecast skill. Bars show the relative increase (%) in root mean square errors of the 1–7 month forecasts of monthly SST anomalies resulting from withholding TAO/TRITON and Argo data in the initialisation of JMA seasonal forecasts for different ocean areas (NINO12, NINO3, NINO34, NINO4, NINOW and WTIO). (From Fujii et al. 2008, where the areas are defined)
Figure 20.9b shows the salinity observation density before Argo and Fig. 20.9d during Argo. The change is dramatic: before Argo most of the Indian Ocean was unobserved, whereas during Argo the salinity observation density is similar to that for temperature. The importance of salinity observations is discussed in Fujii et al. (2011). The results of Usui et al. (2006) indicate that only when salinity observations are assimilated is it possible to represent the strong meridional salinity gradient in the western equatorial Pacific, with low salinity waters north of the equator. Results also show that without the balance relationship between temperature and salinity it is not possible to represent the high salinity of the South Pacific Tropical Water, leading to the erosion of the vertical stratification and eventual degradation of the barrier layer.

The seasonal forecast skill can also be used to evaluate the ocean observing system. Fujii et al. (2011) evaluate the impact of the TAO/TRITON array and Argo float data on the JMA seasonal forecasting system by conducting data retention experiments. Their results (Fig. 20.10) show that TAO/TRITON data improve the forecast of SST in the eastern equatorial Pacific (NINO3, NINO4), and that Argo floats are essential observations for the prediction of SST in the tropical Pacific and Indian Oceans. Similar results have been obtained with the European Centre for Medium-range Weather Forecasts (ECMWF) seasonal forecasting system (Balmaseda et al. 2007, 2009).
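The metric plotted in Fig. 20.10 is the relative increase in root mean square error when a data stream is withheld from the ocean initialisation. A minimal sketch of that comparison, assuming hypothetical arrays of forecast and observed monthly SST anomalies for one region and lead-time range, is given below.

```python
import numpy as np

def rmse(forecast, observed):
    return float(np.sqrt(np.mean((forecast - observed) ** 2)))

def relative_rmse_increase(fcst_all_obs, fcst_withheld, observed):
    """Percentage increase in RMSE when an observation type (e.g. TAO/TRITON or
    Argo) is withheld from the initialisation.

    fcst_all_obs  : forecasts initialised with the full observing system
    fcst_withheld : forecasts initialised with the data stream withheld
    observed      : verifying monthly SST anomalies
    All three are 1-D arrays over the verification sample (region, start dates
    and 1-7 month lead times pooled together, as in the figure).
    """
    return 100.0 * (rmse(fcst_withheld, observed) / rmse(fcst_all_obs, observed) - 1.0)
```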
20.7 Seasonal Prediction in Australia

The Australian Bureau of Meteorology has produced seasonal outlooks since the late 1980s. Currently a seasonal rainfall and temperature outlook for Australia is produced operationally based on statistical links between tropical SSTs and local climate (Chambers and Drosdowsky 1999).
Fig. 20.11 POAMA monthly GBR Index (area-average SST anomalies for the red box shown in the map inset) for December 2009 to May 2010 in the official outlook issued on 1 December 2009, with the distribution by quartiles of the ensemble composed of the last 30 daily forecasts. Overlaid is the ensemble mean (black). The shading indicates upper and lower climatological terciles from the POAMA v1.5 hindcasts. (http://www.bom.gov.au/oceanography/oceantemp/GBR_SST.shtml)
However, it is felt that statistical approaches have essentially reached the limits of their predictive ability, particularly as climate change is invalidating the assumption of stationarity that is fundamental to statistical approaches. The Bureau, in collaboration with CSIRO, has been developing successive versions of a dynamical coupled modelling system called POAMA (Predictive Ocean Atmosphere Model for Australia; http://poama.bom.gov.au). The first version was implemented in Bureau operations in 2002 and generated forecasts of ENSO-related SST indices. The POAMA system was upgraded in 2007 with version 1.5, and the operational products were extended to include forecasts of SST in the equatorial Indian Ocean (Zhao and Hendon 2009). More recently the products have been extended to give warnings of potential bleaching of coral in the Great Barrier Reef in the season ahead (e.g., Fig. 20.11; Spillman and Alves 2009).

POAMA-1.5 has been shown to have high skill in predicting not only ENSO and the IOD, but also the "flavour of ENSO", i.e. classical versus Modoki modes (Hendon et al. 2009; Lim et al. 2009). POAMA can skilfully predict tropical SST anomalies associated with ENSO two to three seasons in advance (Wang et al. 2008b) and can depict the teleconnection to Australian rainfall (Lim et al. 2009). POAMA can predict the peak phase of the IOD in austral spring (SON) with about four months lead time (Zhao and Hendon 2009). The most skilful season for POAMA in predicting rainfall over Australia is spring (SON), when the relationship between ENSO and Australian rainfall is strong. Figure 20.12 shows that the skill (proportion correct) of predicting above-median rainfall is high over south-eastern Australia and better than climatology over most of
Fig. 20.12 Proportion of ensemble members correctly predicting above-median rainfall with (a) POAMA at LT0, (b) POAMA at LT3 and (c) the current operational statistical model (NCC model). The contour interval is 10%. Proportions correct greater than 60% are shaded. (From Lim et al. 2009)
the country at lead time 0 (LT0, i.e. forecasts initialised at the start of September and verified in SON, over the period 1980–2006) (Lim et al. 2009). This region of skill is where the teleconnection between rainfall and tropical SST is strong (Lim et al. 2009). However, operational regional rainfall and temperature forecasts at the Bureau are still based on the statistical system rather than POAMA at this point in time. Experimental rainfall products from POAMA, such as probabilities of above-median rainfall, have been shown to be more skilful than those based on the statistical system according to skill measures such as the ROC score or hit rates (e.g., Fig. 20.12), but the forecast reliability is low, i.e. the forecasts are too emphatic (over-confident), often showing probabilities in excess of 90%. Work is in progress to address this reliability issue so that POAMA rainfall can form the basis for the Bureau's seasonal climate outlooks, including a pragmatic statistical correction and recalibration in the short term and investigating methods to increase ensemble spread in the long term.

A new version, POAMA-2, has been developed with improved physics and a new ocean data assimilation system, the POAMA Ensemble Ocean Data Assimilation System (PEODAS), mentioned in Sect. 20.5. A comprehensive set of hindcasts is currently being generated and the system is due to be implemented operationally towards the end of 2010. Preliminary results show a significant increase in SST skill in the Pacific Ocean in POAMA-2 compared to POAMA-1.5. Development of the POAMA-3 system is also underway; it includes a new coupled model based on the UKMO Unified Atmospheric model and the GFDL MOM4, to be run at a higher resolution than the current system. The ocean data assimilation system is also being extended to include the atmosphere and land surface, which will result in a multivariate ensemble coupled assimilation system.

The new ocean data assimilation system, PEODAS (Yin et al. 2011), is a major new development in POAMA. The system is based on multivariate ensemble optimum interpolation (Oke et al. 2005), where the background error covariance is calculated from an ensemble of ocean states. However, unlike Oke et al. (2005), which uses a static ensemble, PEODAS uses a time-evolving ensemble to calculate a time-dependent multivariate error covariance matrix. The ensemble is run in parallel to the main analysis by perturbing the ocean model forcing about the main analysis run, using a method developed by Alves and Robert (2005). An ocean reanalysis has been conducted from 1977 to 2007, assimilating temperature and salinity observations from the ENACT/ENSEMBLE project. During the assimilation, temperature and salinity were relaxed to monthly climatology through the water column with an e-folding time scale of 2 years, and the model SST was strongly nudged to the SST product from the NCEP reanalysis with a 1-day time scale (a sketch of this kind of relaxation is given at the end of this section).

In Sect. 20.5 it was shown that the PEODAS ocean reanalysis is an improvement with respect to the previous POAMA version. Preliminary results also suggest that these improvements lead to better forecast skill of SST at seasonal time scales. For each reanalysis a set of hindcasts starting each month from 1980 to 2001 was produced. For the PEODAS reanalysis a 10-member ensemble was generated using the main PEODAS reanalysis. For the old POAMA reanalysis a 10-member ensemble was also generated, but this time by using the same ocean initial conditions
Fig. 20.13 NINO3.4 SST anomaly correlation skill as a function of lead time (months). Red: POAMA-2 initialised from PEODAS; black: POAMA-1.5 initialised using the old POAMA assimilation; black dashed: persistence
(since perturbed states were not available) and taking atmospheric initial conditions six hours apart. Figure 20.13 shows the NINO3.4 forecast skill with lead time for forecasts from each set of reanalyses, based on the 10-member ensemble means. Forecasts using PEODAS initial conditions show significantly more skill than those using the old POAMA assimilation initial conditions. While the old reanalysis had a similar fit to observed temperature as the new reanalysis, the old reanalysis showed a considerably worse fit for salinity and zonal current. This result can be taken as an indication that, for the assimilation to improve forecast skill, it is important to keep the dynamical and physical balance among variables, and therefore all variables, not just those directly constrained by observations, should show consistent improvement.
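The relaxation used in the PEODAS reanalysis described above is simple Newtonian nudging towards a target field. The sketch below applies one explicit time step with the e-folding time scales quoted in the text (2 years for subsurface temperature and salinity towards monthly climatology, 1 day for SST towards the analysed SST product); the time step and field names are illustrative assumptions.

```python
SECONDS_PER_DAY = 86400.0

def nudge(field, target, timescale_days, dt_seconds):
    """One explicit time step of Newtonian relaxation towards `target`:
        dX/dt = -(X - X_target) / tau
    where tau (the e-folding time) is given in days. Works on scalars or
    NumPy arrays alike."""
    tau = timescale_days * SECONDS_PER_DAY
    return field + dt_seconds * (target - field) / tau

# Per the configuration described above, with an illustrative 1-hour model step:
# temp = nudge(temp, temp_monthly_climatology, timescale_days=2 * 365.0, dt_seconds=3600.0)
# salt = nudge(salt, salt_monthly_climatology, timescale_days=2 * 365.0, dt_seconds=3600.0)
# sst  = nudge(sst,  ncep_sst_analysis,        timescale_days=1.0,       dt_seconds=3600.0)
```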
20.8 Decadal Prediction

Decadal climate prediction is very much in its infancy, but has the potential to provide information enabling better adaptation to climate change. Anthropogenic climate change signals are strongly modulated by natural climate variability,
particularly variability driven by slow processes in the ocean on decadal time-scales (Hurrell et al. 2010). There is growing evidence that, like seasonal prediction, decadal prediction is an initial-value problem, with recent results from the ENSEMBLES project (Smith et al. 2007; van der Linden and Mitchell 2009) showing that initialised decadal forecasts have the potential to provide improved information compared with traditional climate change projections.

Decadal predictability originates primarily from changes in radiative forcing, including anthropogenic greenhouse gases and aerosols, and from long-lived variations in the ocean. Examples of the latter include variations associated with the Pacific Decadal Oscillation (PDO; e.g., Mantua et al. 1997), the Inter-decadal Pacific Oscillation (IPO; e.g., Power et al. 1999) and the Atlantic Multidecadal Oscillation (AMO; e.g., Knight et al. 2005). The ability to predict these long-term climate variations therefore depends, in part, on accurate ocean initial conditions. However, compared to seasonal prediction, decadal prediction relies on the less well observed deeper ocean. Recent improvements in the ocean observing system, in particular the advent of Argo data, offer potential for increased skill of decadal forecasts (Balmaseda et al. 2010a). The Argo data (available since 2003) are likely to be critical, for example, for making skilful predictions of the Atlantic Meridional Overturning Circulation (MOC) (Balmaseda et al. 2010a).

A major challenge for decadal prediction, however, is how to evaluate the hindcasts and forecasts, particularly in view of sparse historical ocean observations (Balmaseda et al. 2010a; Hurrell et al. 2010). In addition, as a result of our short observational record, the mechanisms of decadal variations are not well understood and the representation of this variability differs considerably among models (Hurrell et al. 2010). This means that the theoretical upper limit of our prediction skill on the decadal time scale is also not well established (Hurrell et al. 2010).

Another challenge facing decadal prediction is how to initialise the forecasts. Current systems (Smith et al. 2007; Keenlyside et al. 2008; Pohlmann et al. 2009) use anomaly initialisation rather than full initialisation, such that models are initialised with observed anomalies added to the model climate. This method is a way of dealing with model bias and reducing initialisation shock. However, the best approach for initialising decadal forecasts remains unclear (Hurrell et al. 2010).
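The anomaly initialisation described above can be written in one line: the model starts from its own climatology plus the observed anomaly rather than from the full observed state. A minimal sketch, with placeholder names and scalar stand-ins for gridded ocean fields, follows.

```python
def anomaly_initialisation(obs_state, obs_climatology, model_climatology):
    """Initial condition built by adding the observed anomaly to the model's own
    climatology (anomaly initialisation), which limits the drift and
    initialisation shock that model bias causes with full-field initialisation."""
    return model_climatology + (obs_state - obs_climatology)

# Scalar stand-in for a gridded ocean temperature field:
# observed 15.3 degC, observed climatology 15.0 degC, model climatology 14.2 degC
print(anomaly_initialisation(15.3, 15.0, 14.2))   # -> 14.5: model climate + 0.3 anomaly
```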
20.9 Summary

Today's sophisticated operational seasonal forecast systems rely on a number of interrelated components: data assimilation and initialisation, a coupled ocean-atmosphere general circulation model, ensemble generation and forecast calibration. The ocean plays a key role in each component. Predictive skill in seasonal forecasting comes from the initial state of the coupled system, particularly the upper ocean. Correctly initialising the important modes of seasonal and interannual variability, such as ENSO and the IOD, is vital. Real-time estimates of the ocean initial state have improved dramatically over the last two decades with improvements to the ocean observing network, especially from the TAO/TRITON array and Argo floats.
However, seasonal forecasting requires an ocean reanalysis going back in time in order to initialise the retrospective forecasts required for skill assessment of the forecast system and calibration of the forecasts. The non-stationarity of the ocean observing system poses huge challenges for the initialisation and verification of seasonal, as well as decadal, hindcasts and forecasts.

Results have shown that the method of ocean initialisation has a significant impact on the mean state, variability and skill of the forecasts (Balmaseda and Anderson 2009). Because of deficiencies in the coupled model, the aim of producing the best initial state, closest to observed, may not produce the best forecasts; there may be long-term effects of model spin-up or initialisation shock when using observed initial conditions. Recent research suggests that the initialisation scheme that makes the most use of the observed data will produce the most skilful forecasts, even though initial imbalances in the coupled state are generated (Balmaseda and Anderson 2009). Clearly, however, the impact of the initialisation scheme is very dependent on the quality of the coupled model. Current research is addressing the prospect of "coupled assimilation", where data assimilation for the atmosphere and ocean is done by the coupled model, leading to a well-balanced initial state.

Seasonal prediction is a complex and challenging field of research and application. This paper has addressed dynamical seasonal prediction using coupled ocean-atmosphere models, with particular focus on data assimilation and initialisation. The delivery, value and use of seasonal forecasts have not been discussed; it is the latter that will continue to drive future advances in coupled models, data assimilation, ensemble techniques and the ocean observing system.

Acknowledgements  The authors would like to acknowledge Eun-Pa Lim, Claire Spillman, Guomin Wang and Yonghong Yin for providing some of the figures used in this paper.
References

Alves O, Robert C (2005) Tropical Pacific Ocean model error covariances from Monte Carlo simulations. Quart J Roy Meteor Soc 131:3643–3658
Alves O, Wang O, Zhong A, Smith N, Tseitkin F, Warren G, Schiller A, Godfrey JS, Meyers G (2003) POAMA: Bureau of Meteorology operational coupled model forecast system. National Drought Forum, Brisbane, 15–16 April
Alves O, Balmaseda M, Anderson D, Stockdale T (2004) Sensitivity of dynamical seasonal forecasts to ocean initial conditions. Quart J Roy Meteor Soc 130:647–668
Baldwin MP, Dunkerton TJ (2001) Stratospheric harbingers of anomalous weather regimes. Science 294:581. doi:10.1126/science.1063315
Balmaseda MA, Weaver A (2006) Temperature, salinity, and sea-level changes: climate variability from ocean reanalyses. Paper presented at the CLIVAR/GODAE meeting on ocean synthesis evaluation, 31 August–1 September 2006, ECMWF, Reading, UK. http://www.clivar.org/organization/gsop/synthesis/groups/Items3_4.ppt. Accessed 26 May 2009
Balmaseda MA, Anderson D (2009) Impact of initialisation strategies and observations on seasonal forecast skill. Geophys Res Lett 36:L01701. doi:10.1029/2008GL035561
Balmaseda MA, Anderson DLT, Vidard A (2007) Impact of Argo on analyses of the global ocean. Geophys Res Lett 34:L16605. doi:10.1029/2007GL030452
Balmaseda MA, Vidard A, Anderson D (2008) The ECMWF ORA-S3 ocean analysis system. Mon Wea Rev 136:3018–3034
Balmaseda MA, Alves O, Arribas A, Awaji T, Behringer D, Ferry N, Fujii Y, Lee T, Rienecker M, Rosati T, Stammer D (2009) Ocean initialisation for seasonal forecasts. Oceanography 22:154–159
Balmaseda MA, Fujii Y, Alves O et al (2010a) Initialisation for seasonal and decadal forecasts. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs'09: sustained ocean observations and information for society, vol 2. ESA Publication WPP-306, Venice, 21–25 September 2009
Balmaseda MA, Fujii Y, Alves O et al (2010b) Role of the ocean observing system in an end-to-end seasonal forecasting system. Plenary paper, OceanObs'09, Venice, 21–25 September 2009. http://www.oceanobs09.net/plenary/index.php
Behringer DW (2007) The Global Ocean Data Assimilation System at NCEP. 11th symposium on integrated observing and assimilation systems for atmosphere, oceans, and land surface, AMS 87th Annual Meeting, San Antonio, pp 12
Bell CJ, Gray LJ, Charlton-Perez AJ, Scaife AA (2009) Stratospheric communication of ENSO teleconnections to European winter. J Clim 22:4083–4096
Berner J, Doblas-Reyes FJ, Palmer TN, Shutts G, Weisheimer A (2008) Impact of a quasi-stochastic cellular automaton backscatter scheme on the systematic error and seasonal prediction skill of a global climate model. Philos Trans R Soc A 366:2561–2579
Bjerknes J (1969) Atmospheric teleconnections from the equatorial Pacific. Mon Wea Rev 97:163–172
Bonjean F, Lagerloef GSE (2002) Diagnostic model and analysis of surface currents in the tropical Pacific Ocean. J Phys Oceanogr 32:2938–2954
Burgers G, Balmaseda MA, Vossepoel FC, van Oldenborgh GJ, van Leeuwen PJ (2002) Balanced ocean-data assimilation near the equator. J Phys Oceanogr 32:2509–2519
Cagnazzo C, Manzini E (2009) Impact of the stratosphere on the winter tropospheric teleconnections between ENSO and the North Atlantic and European region. J Clim 22:1223–1238
Cazes-Boezio G, Menemenlis D, Mechoso CR (2008) Impact of ECCO ocean-state estimates on the initialisation of seasonal climate forecasts. J Clim 21:1929–1947
Chambers LE, Drosdowsky W (1999) Australian seasonal rainfall prediction using near global sea surface temperatures. AMOS Bull 12(3):51–55
Chang P, Yamagata T, Schopf P, Behera SK, Carton J, Kessler WS, Meyers G, Qu T, Schott F, Shetye S, Xie S-P (2006) Climate fluctuations of tropical coupled systems – the role of ocean dynamics. J Clim 19:5122–5174
Collins M, Booth BBB, Harris GR, Murphy JM, Sexton DMH, Webb MJ (2006) Towards quantifying uncertainty in transient climate change. Clim Dyn 27:127–147
Doblas-Reyes FJ, Weisheimer A, Deque M et al (2009) Addressing model uncertainty in seasonal and annual dynamical ensemble forecasts. Quart J R Meteor Soc 135:1538–1559
Dommenget D, Stammer D (2004) Assessing ENSO simulations and predictions using adjoint ocean state estimation. J Clim 17:4301–4315
Fennessy MJ, Shukla J (1999) Impact of initial soil wetness on seasonal atmospheric prediction. J Clim 12(11):3167–3180
Fletcher CG, Hardiman SC, Kushner PJ, Cohen J (2009) The dynamical response to snow cover perturbations in a large ensemble of atmospheric GCM integrations. J Clim 22:1208–1222
Folland CK, Colman AW, Rowell DP, Davey MK (2001) Predictability of Northeast Brazil rainfall and real-time forecast skill, 1987–98. J Clim 14:1937–1958
Fujii Y, Matsumoto S, Kamachi M, Ishizaki S (2011) Estimation of the equatorial Pacific salinity field using ocean data assimilation system. Adv Geosci (in press)
Goddard L, Graham NE (1999) The importance of the Indian Ocean for simulating rainfall anomalies over eastern and southern Africa. J Geophys Res 104:19099–19116
Hendon HH, Lim E, Wang G, Alves O, Hudson D (2009) Prospects for predicting two flavors of El Niño. Geophys Res Lett. doi:10.1029/2009GL040100
540
O. Alves et al.
Hudson D, Alves O, Hendon HH, Wang G (2010) The impact of atmospheric initialisation on seasonal prediction of tropical Pacific SST. Clim Dyn. doi:10.1007/s00382-010-0763-9 Hurrell J, Delworth TL, Danabasoglu G et€al (2010) Decadal climate prediction: opportunities and challenges. In: Hall J, Harrison DE, Stammer D (eds) Proceedings of OceanObs’09: sustained ocean observations and information for society, vol€ 2. ESA Publication WPP-306, Venice, 21–25 September 2009 Ineson S, Scaife AA (2008) The role of the stratosphere in the European climate response to El Nino. Nat Geosci 2:32–36 Jin F-F, Lin L, Timmermann A, Zhao J (2007) Ensemblemean dynamics of the ENSO recharge oscillator under statedependent stochastic forcing. Geophys Res Lett 34:L03807. doi:10.1029/2006GL027372 Jin EK, Kinter JL III, Wang B et€ al (2008) Current status of ENSO prediction skill in coupled ocean-atmosphere models. Clim Dyn 31:647–664. doi:10.1007/s00382-008-0397-3 Keenlyside N, Latif M, Jungclaus J, Kornblueh L, Roeckner E (2008) Advancing decadal-scale climate prediction in the North Atlantic Sector. Nature 453:84–88 Keppenne CL, Rienecker MM, Jacob JP, Kovach R (2008) Error covariance modeling in the GMAO ocean ensemble kalman filter. Mon Wea Rev 136:2964–2982. doi:10.1175/2007MWR2243.1 Kirtman BP, Pirani A (2009) The state of the art of seasonal prediction: outcomes and recommendations from the first world climate research program workshop on seasonal prediction. Bull Am Meteor Soc 90:455–458 Knight JR, Allan RJ, Folland CK et€al (2005) A signature of persistent natural thermohaline circulation cycles in observed climate. Geophys Res Lett 32:L20708. doi:1029/2005GL024233 Koster RD, Suarez MJ (2003) Impact of land surface initialisation on seasonal precipitation and temperature prediction. J Hydrometeor 4:408–423 Koster RD, Suarez MJ, Liu P et€al (2004) Realistic initialisation of land surface states: impacts on subseasonal forecast skill. J Hydrometeor 5:1049–1063 Koster RD,Guo Z, Dirmeyer PA et€al (2006) GLACE: The global land-atmosphere coupling experiment. Part I: overview. J Hydrometeor 7:590–610 Koster RD, Mahanama SPP, Yamada TJ et€ al (2010) Contribution of land surface initialisation to subseasonal forecast skill: first results from a multi-model experiment. Geophys Res Lett 37:L02402. doi:10.1029/2009GL041677 Kushnir Y, Robinson WA, Chang P, Robertson AW (2006) The physical basis for predicting Atlantic sector seasonal-to-interannual climate variability. J Clim 19:5949–5970 Lim E-P, Hendon HH, Hudson H, Wang G, Alves O (2009) Dynamical forecasts of inter-El Niño variations of tropical SST and Australian spring rainfall. Mon Wea Rev 137:3796–3810 Luo JJ, Masson S, Behera S, Yamagata T (2007) Experimental forecasts of the Indian ocean dipole using a coupled OAGCM. J Clim 20:2178–2190 Mantua NM, Hare SR, Zhang Y, Wallace JM, Francis RC (1997) A Pacific interdecadal climate oscillation with impacts on salmon production. Bull Am Meteor Soc 78:1069–1079 Marshall AG, Scaife AA (2009) Impact of the QBO on surface winter climate. J Geophys Res 114:D18110. doi:10.1029/2009JD011737 Mason SJ, Stephenson D (2008) How do we know whether seasonal climate forecasts are any good? In: Troccoli A, Harrison M, Anderson DLT, Mason SJ (eds) Seasonal climate: forecasting and managing risk. NATO Science Series. Springer, Dordrecht, pp€467 Martin MJ, Hines A, Bell MJ (2007) Data assimilation in the FOAM operational short-range ocean forecasting system: a description of the scheme and its impact. 
Quart J R Meteor Soc 133:981–995 Maycock AC, Keeley SPE, Charlton-Perez AJ, Doblas-Reyes FJ (2009) Stratospheric circulation in seasonal forecasting models: implications for seasonal prediction. Clim Dyn. doi:10.1007/ s00382-009-0665-x Murphy JM, Sexton DMH, Barnett DN, Jones GS, Webb MJ,Collins M, Stainforth DA (2004) Quantification of modelling uncertainties in a large ensemble of climate change simulations. Nature 430:768–772
20â•… Seasonal and Decadal Prediction
541
Neelin D, Battisti DS, Hirst AC, Jin F-F, Wakata Y, Yamagata T, Zebiak S (1998) ENSO theory. J Geophys Res 103:14261–14290 Oke PR, Schiller A, Griffin DA, Brassington GB (2005) Ensemble data assimilation for an eddyresolving ocean model of the Australian region. Quart J R Meteor Soc 131:3301–3311 Oldenborgh GJ van, Balmaseda MA, Ferranti L, Stockdale TN, Anderson DLT (2005) Did the ECMWF seasonal forecast model outperform a statistical model over the last 15 years? J Clim 18:2960–2969 Palmer TN, Alessandri A, Andersen U et€al (2004) Development of a European multimodel ensemble system for seasonal-to-interannual prediction (DEMETER). Bull Am Meteor Soc 85:853–872 Pham DT, Verron J, Roubaud MC (1998) A singular evolutive extended Kalman filter for data assimilation in oceanography. J Mar Syst 16:323–340 Philander SG (2004) Our affair with El nino. Princeton University Press, Princeton, pp€275 Pohlmann H, Jungclaus J, Marotzke J, Köhl A, Stammer D (2009) Improving predictability through the initialization of a coupled climate model with global oceanic reanalysis. J Clim 22:3926–3938 Power S, Casey T, Folland C, Colman A, Mehta V (1999) Inter-decadal modulation of the impact of ENSO on Australia. Clim Dyn 15:319–324 Rasmusson EM, Carpenter TH (1983) The relationship between eastern equatorial Pacific SSTs and rainfall over India and Sri Lanka. Mon Wea Rev 111:517–528 Rodwell MJ, Folland CK (2002) Atlantic air-sea interaction and seasonal predictability. Quart J R Meteor Soc 128:1413–1443 Ropelewski CF, Halpert MS (1987) Global and Regional Scale Precipitation Patterns Associated with the El Niño/Southern Oscillation. Mon Wea Rev 115:1606–1626 Saji NH, Yamagata T (2003) Possible impacts of Indian Ocean Dipole mode events on global climate. Clim Res 25:151–169 Saji, NH, Goswami BN, Vinayachandran PN, Yamagata T (1999) A dipole mode in the tropical Indian Ocean. Nature 401:360–363 Seneviratne SI, Koster RD, Guo Z et€al (2006) Soil moisture memory in agcm simulations: analysis of global land-atmosphere coupling experiment (GLACE) data. J Hydrometeor 7:1090–1112 Shi L, Alves O, Hendon HH, Wang G, Anderson D (2009) The role of stochastic forcing in ensemble forecasts of the 1997/98 El Niño. J Clim 22:2526–2540 Smith D, Cusack S, Colman A, Folland C, Harris G, Murphy J (2007) Improved surface temperature prediction for the coming decade from a global circulation model. Science 317:796–799 Spillman CM, Alves O (2009) Dynamical seasonal prediction of summer sea surface temperatures in the Great Barrier Reef. Coral Reefs. doi:10.1007/s00338-008-0438-8 Stainforth DA, Aina T, Christensen C, Collins M, Faull N, Frame DJ, Kettleborough JA, Knight S, Martin A, Murphy JM, Piani C, Sexton D, Smith LA, Spicer RA, Thorpe AJ, Allen MR (2005) Uncertainty in predictions of the climate response to rising levels of greenhouse gases. Nature 433:403–406 Stephenson D (2008) An Introduction to Probability Forecasting. In: Troccoli A, Harrison M, Anderson DLT and Mason SJ (eds) Seasonal climate: forecasting and managing risk. NATO Science Series. Springer, Dordrecht, pp€467 Stockdale TN (1997) Coupled ocean–atmosphere forecasts in the presence of climate drift. Mon Wea Rev 125:809–818 Stockdale TN, Balmaseda MA, Vidard A (2006) Tropical Atlantic SST prediction with coupled ocean-atmosphere GCMS. J Clim 19:6047–6061 Stockdale TN, Alves O, Boer G et€al (2010) Understanding and predicting seasonal to interannual climate variability—the producer perspective. White Paper for WCC3. Draft. http://www. 
wcc3.org/sessions.php?session_list=WS-3 Stockdale TN, Anderson DLT, Balmaseda MA, Doblas-Reyes F, Ferranti L, Mogensen K, Palmer TN, Molteni F, Vitart F (2011). ECMWF Seasonal forecast system 3 and its prediction of sea surface temperature. Clim Dyn (In Press)
542
O. Alves et al.
Ummenhofer CC, England MH, McIntosh PC, Meyers GA, Pook MJ, Risbey JS, Gupta AS, Taschetto AS (2009) What causes southeast Australia’s worst droughts? Geophys Res Lett. doi:10.1029/2008GL036801 Usui N, Ishizaki S, Fujii Y, Tsujino H, Yasuda T, Kamachi M (2006) Meteorological research institute multivariate ocean variational estimation (MOVE) system: some early results. Adv Space Res 37:806–822 van der Linden P, Mitchell JFB (eds) (2009) ENSEMBLES: Climate change and its impacts: summary of research and results from the ENSEMBLES project. Met Office Hadley Centre, Exeter, pp€160 Vialard J, Vitart F, Balmaseda M, Stockdale T, Anderson D (2005) An ensemble generation method for seasonal forecasting with an ocean-atmosphere coupled model. Mon Wea Rev 133:441–453 Wajsowicz RC (2007) Seasonal-to-interannual forecasting of tropical Indian Ocean sea surface temperature anomalies: potential predictability and barriers. J Clim 20:3320–3343 Walker G (1923) Correlation in seasonal variations of weather VIII. A preliminary study of world weather. Mem Indian Meteorol Dept 24(4):75–131 Walker GT (1924) Correlation in seasonal variations of weather IX. Mem Indian Meteorol Dept 24(9):275–332 Wang B, Lee J-Y, Kang I-S, et€al (2008a) Advance and prospectus of seasonal prediction: assessment of the APCC/CliPAS 14-model ensemble retrospective seasonal prediction (1980–2004). Clim Dyn. doi:10.1007/s00382-008-0460-0 Wang G, Alves O, Hudson D, Hendon H, Liu G, Tseitkin F (2008b) SST skill assessment from the new POAMA-1.5 System. BMRC Res Lett 8:2–6 (Bureau of Meteorology, Australia) Webster PJ, Moore AM, Loschnigg JP, Leben RR (1999) Coupled ocean–atmosphere dynamics in the Indian Ocean during 1997–1998. Nature 401:356–360 Weisheimer A, Doblas-Reyes FJ, Palmer TN et€al (2009) ENSEMBLES: a new multi-model ensemble for seasonal-to-annual predictions—Skill and progress beyond DEMETER in forecasting tropical Pacific SSTs. Geophys Res Lett 36(21):L21711 Yin Y, Alves O, Oke PR (2011) An ensemble ocean data assimilation system for seasonal prediction. Mon Wea Rev. doi:10.1175/2010MWR3419.1 Zebiak SE, Cane MA (1987) A model El nino-southern oscillation. Mon Wea Rev 115:2262–2278 Zhao M, Hendon HH (2009) Representation and prediction of the Indian Ocean dipole in the POAMA seasonal forecast model. Quart J R Meteor Soc 135(639):337–352
Part VII
Evaluation
Chapter 21
Dynamical Evaluation of Ocean Models Using the Gulf Stream as an Example

Harley E. Hurlburt, E. Joseph Metzger, James G. Richman, Eric P. Chassignet, Yann Drillet, Matthew W. Hecht, Olivier Le Galloudec, Jay F. Shriver, Xiaobiao Xu and Luis Zamudio
Abstract  The Gulf Stream is the focus of an effort aimed at dynamical understanding and evaluation of current systems simulated by eddy-resolving Ocean General Circulation Models (OGCMs), including examples with and without data assimilation and results from four OGCMs (HYCOM, MICOM, NEMO, and POP), the first two using Lagrangian isopycnal coordinates in the vertical and the last two using fixed-depth coordinates. The Gulf Stream has been challenging to simulate and understand. While different non-assimilative models have at times simulated a realistic Gulf Stream pathway, the simulations are very sensitive to small changes, such as subgrid-scale parameterizations and parameter values. Thus it is difficult to obtain consistent results, and serious flaws are often simulated upstream and downstream of Gulf Stream separation from the coast at Cape Hatteras. In realistic simulations, steering by a key abyssal current and a Gulf Stream feedback mechanism constrain the latitude of the Gulf Stream near 68.5°W. Additionally, the Gulf Stream follows a constant absolute vorticity (CAV) trajectory from Cape Hatteras to ~70°W, but without the latitudinal constraint near 68.5°W the pathway typically develops a northern or southern bias. A shallow bias in the southward abyssal flow of the Atlantic Meridional Overturning Circulation (AMOC) creates a serious problem in many simulations because it results in abyssal currents along isobaths too shallow to feed into the key abyssal current or other abyssal currents that provide a similar pathway constraint. Pathways with a southern bias are driven by a combination of abyssal currents crossing under the Gulf Stream near the separation point and the increased opportunity for strong flow instabilities along the more southern route. The associated eddy-driven mean abyssal currents constrain the mean pathway to the east. Due to sloping topography, flow instabilities are inhibited along the more northern routes west of ~69°W, especially for pathways with a northern bias. The northern bias occurs when the abyssal current steering constraint needed for a realistic pathway is missing or too weak and the simulation succumbs to the demands of linear dynamics for an overshoot pathway. Both the wind forcing and the upper ocean branch of the AMOC contribute to those demands. Simulations with a northern pathway bias were all forced by a wind product particularly conducive to that result, and they have a strong or typical AMOC transport with a shallow bias in the southward flow. Simulations forced by the same wind product (or other wind products) that have a weak AMOC with a shallow bias in the southward limb exhibit Gulf Stream pathways with a southern bias. Data assimilation has a very positive impact on the model dynamics by increasing the strength of a previously weak AMOC and by increasing the depth range of the deep southward branch. The increased depth range of the southward branch generates more realistic abyssal currents along the continental slope. This result, in combination with vortex stretching and compression generated by the data-assimilative approximation to meanders in the Gulf Stream and related eddies in the upper ocean, yields a model response that simulates the Gulf Stream-relevant abyssal current features seen in historical in situ observations, including the key abyssal current near 68.5°W, a current not observed in the assimilated data set or in corresponding simulations without data assimilation. In addition, the model maintains these abyssal currents in a mean of 48 14-day forecasts, but does not maintain the strength of the Gulf Stream east of the western boundary.

H. E. Hurlburt ()
Oceanography Division, Naval Research Laboratory, Stennis Space Center, MS, USA
e-mail: [email protected]
21.1 Introduction

Ocean models run with atmospheric forcing but without ocean data assimilation are useful in studies of ocean model dynamics and simulation skill. Models that give realistic simulations with accurate dynamics when run without data assimilation are essential for eddy-resolving ocean prediction because of the multiple roles that ocean models must play in ocean nowcasting and forecasting, including dynamical interpolation during data assimilation, representing sparsely observed subsurface ocean features from the mixed layer depth to abyssal currents, converting atmospheric forcing into ocean responses, imposing topographic and geometric constraints, performing ocean forecasts, providing boundary and initial conditions to nested regional and coastal models, and providing forecast surface temperature to coupled atmosphere and sea ice models. A wide range of ocean dynamics contribute to these different roles. Here we focus on evaluating and understanding the dynamics of mid-latitude ocean currents simulated by state-of-the-art, eddy-resolving ocean general circulation models (OGCMs), using the Gulf Stream as an example. Dynamical understanding and evaluation of current systems simulated by OGCMs has been a challenge because of the complexity of the models and the current systems, a topic discussed in recent reviews by Chassignet and Marshall (2008) and Hecht and Smith (2008) in relation to the Gulf Stream and North Atlantic. In some regions greater progress has been made. Tsujino et al. (2006) investigated the dynamics of large-amplitude Kuroshio meanders south of Japan. Usui et al. (2006) used the same model to make Kuroshio forecasts from a data-assimilative initial state, typically demonstrating 40- to 60-day forecast skill south of Japan. Usui et al. (2008a, b) also used the model in dynamical studies of a 1993–2004 data-assimilative hindcast. Hurlburt et al. (2008b) examined OGCM dynamics and their relation
to the underlying topography in studying mean Kuroshio meanders east of Japan and mean currents in the southern half of the Japan/East Sea. The simulations were consistent with observations and with dynamics found in purely hydrodynamic models with lower vertical resolution and vertically-compressed but otherwise realistic topography confined to the lowest layer. Consistent with observations (Gordon et€al. 2002), the same Japan/East Sea OGCM simulation modeled the dynamics of intrathermocline eddy formation in that region, as discussed in Hogan and Hurlburt (2006). These are dynamics that could not be simulated by the purely hydrodynamic model. Hurlburt et€al. (2008b) also investigated OGCM dynamics in simulating the Southland Current system east of South Island, New Zealand, where the topography of the Campbell Plateau and the Chatham Rise intrude well into the stratified ocean so that the design of the low vertical resolution model did not apply. In that case an alternative approach was used to investigate the dynamics. Recent observational evidence was sufficient to provide strong support for the results of the study. In dynamical evaluation of the Gulf Stream simulations by eddy-resolving global and basin-scale OGCMs, we adopt an augmented version of the approach used by Hurlburt et€al. (2008b) for OGCM simulations of the Kuroshio and Japan/East Sea. Thus we build from an explanation of Gulf Stream separation from the western boundary and its pathway to the east in Hurlburt and Hogan (2008). This explanation was derived using results from a 5-layer hydrodynamic isopycnal model with vertically-compressed but otherwise realistic topography confined to the lowest layer. It was tested versus observational evidence and theory, parts of the latter contributing directly to the explanation. In Sect.€21.2 we discuss the explanation and related 5-layer model results, theory, and observational evidence. In Sect.€21.3 we evaluate Gulf Stream dynamics in eddy-resolving OGCM simulations by the HYbrid Coordinate Ocean Model (HYCOM) (Bleck 2002), the Miami Isopycnic Coordinate Ocean Model (MICOM) (Bleck and Smith 1990), the Nucleus for European Modelling of the Ocean (NEMO) (Madec 2008), as used in the French Mercator ocean prediction effort, and the Parallel Ocean Program (POP) (Smith et€al. 2000). Both simulations with a realistic Gulf Stream and those with a variety of unrealistic features are assessed and specific deficiencies are identified. In Sect.€21.4 we assess the impacts of data assimilation on variables relevant to Gulf Stream dynamics that are sparsely observed, in some cases not observed at all in real time. Are realistic model dynamics maintained in data-assimilative models? Are unrealistic dynamics improved? What are the impacts of dynamics on Gulf Stream forecast skill?
21.2 Dynamics of Gulf Stream Boundary Separation and Its Pathway to the East

21.2.1 Linear Model Simulation of the Gulf Stream

As an initial step, we examine a linear equivalent barotropic solution with the same wind forcing and upper ocean transport for the Atlantic meridional overturning cir-
culation (AMOC) as the nonlinear solutions discussed in Sect. 21.2. The model boundary is located at the shelf break, and the resolution is comparable to that used in the nonlinear solutions discussed later in this chapter. The spun-up mean solution has a Sverdrup (1947) interior and Munk (1950) western boundary currents and is consistent with the Godfrey (1989) island rule, except that, unlike Munk (1950), the solution is obtained by running a numerical model with horizontal friction applied everywhere. Figure 21.1 depicts the mass transport streamfunction from a 1/16°, 1.5-layer linear reduced-gravity simulation (with the lower layer infinitely deep and at rest) forced by the smoothed Hellerman and Rosenstein (1983) wind stress climatology plus the northward upper ocean flow of a 14 Sv AMOC. In comparison to the overlaid mean IR northwall pathway that lies along the northern edge of the Gulf Stream, the linear solution gives two unrealistic pathways: a broad one centered near the observed separation latitude (35.5°N) that extends eastward and a second one with nearly the same transport extending northward along the western boundary. The eastward pathway is wind-driven (~22 Sv) and the northward pathway has a 14 Sv AMOC component plus an 8 Sv wind-driven component, but both pathways contribute to a situation where ~31 Sv out of 44 Sv (~70%) separate from the western boundary north of the observed separation latitude. From Fig. 21.1 it is easy to appreciate the challenge of simulating an accurate nonlinear Gulf Stream pathway in an ocean model.
Fig. 21.1 Mean transport streamfunction (Ψ) from a 1/16°, 1.5-layer linear reduced-gravity simulation forced by the smoothed Hellerman and Rosenstein (1983) wind stress climatology and the northward upper ocean flow (14 Sv) of the Atlantic meridional overturning circulation (AMOC), forcing used for all of the simulations in Sect. 21.2. The contour interval is 2 Sv. A 15-year mean (1982–1996) Gulf Stream IR northwall pathway ±1σ by Cornillon and Sirkes (unpublished) is overlaid. This pathway has 0.1° longitudinal resolution and is based on an average of 674 data points per 0.1° increment between 76° and 55°W. An earlier analysis of this frontal pathway and its variability (based on data from 1982–1989) is discussed in Lee and Cornillon (1996). The streamfunction shown here covers the 9–47°N model domain used by all the nonlinear simulations discussed in Sect. 21.2. (From Hurlburt and Hogan 2008, as adapted from Townsend et al. 2000)
See Townsend et al. (2000) for linear solutions from 11 different wind stress climatologies.
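The linear dynamics summarized above can be sketched numerically. The short Python example below computes a Sverdrup-interior transport streamfunction by integrating the wind stress curl westward from the eastern boundary; the idealized zonal wind profile, grid, and domain are assumptions standing in for the smoothed Hellerman and Rosenstein (1983) climatology and the model configuration, and the Munk boundary layer and AMOC inflow are omitted, so the resulting transports are illustrative only.

```python
import numpy as np

# Minimal sketch of the Sverdrup interior that underlies the linear solution
# in Fig. 21.1.  The zonal wind stress profile below is an idealized
# stand-in for the smoothed Hellerman and Rosenstein (1983) climatology.
rho0 = 1025.0            # reference density (kg m^-3)
omega = 7.292e-5         # Earth's rotation rate (s^-1)
a_earth = 6.371e6        # Earth radius (m)

# 9-47N, 80W-0 domain at 1/4 degree (only the smooth interior is wanted here)
lats = np.arange(9.0, 47.0 + 0.25, 0.25)
lons = np.arange(-80.0, 0.0 + 0.25, 0.25)
lat2d, _ = np.meshgrid(lats, lons, indexing="ij")

beta = 2.0 * omega * np.cos(np.deg2rad(lat2d)) / a_earth
dy = a_earth * np.deg2rad(0.25)
dx = a_earth * np.cos(np.deg2rad(lat2d)) * np.deg2rad(0.25)

# Idealized zonal wind stress (N m^-2): trade easterlies in the south,
# mid-latitude westerlies in the middle of the domain.
taux = -0.1 * np.cos(2.0 * np.pi * (lat2d - 9.0) / 38.0)
tauy = np.zeros_like(taux)

# Wind stress curl and the Sverdrup balance beta * V = curl(tau) / rho0,
# where V is the depth-integrated meridional transport per unit width.
curl = np.gradient(tauy, axis=1) / dx - np.gradient(taux, axis=0) / dy
V = curl / (rho0 * beta)

# Transport streamfunction (Sv): integrate V westward from the eastern
# boundary, where psi is taken to be zero.
psi = -np.cumsum((V * dx)[:, ::-1], axis=1)[:, ::-1] / 1.0e6
print("peak interior gyre transport ~ %.1f Sv" % np.abs(psi).max())
```

Even a crude interior calculation of this kind makes the point of Fig. 21.1: the linear balance by itself places the return flow and its separation latitude, and any realistic Gulf Stream simulation must overcome that constraint.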
21.2.2 Impacts of the Eddy-Driven Abyssal Circulation and the Deep Western Boundary Current (DWBC) on Gulf Stream Boundary Separation and Its Pathway to the East

It has been a popular theory, proposed by Thompson and Schmitz (1989), that the DWBC affects Gulf Stream separation from the western boundary as it passes underneath. To investigate this hypothesis, Hurlburt and Hogan (2008) used a nonlinear 5-layer hydrodynamic isopycnal model covering the same domain shown in Fig. 21.1. They also used monthly climatological wind forcing and included a 14 Sv AMOC, the latter via inflow and outflow ports in the northern and southern boundaries. Figure 21.2 depicts the mean sea surface height (SSH) from six simulations.
Fig. 21.2 Mean SSH from six 5-layer Atlantic Ocean simulations (9–47°N) zoomed into the Gulf Stream region between Cape Hatteras and the Grand Banks. The simulations depicted in a, c and e include a DWBC, while those in b, d and f do not. Panels a and b depict results from 1/16° simulations and c–f from corresponding 1/32° simulations; a–d use a coefficient of quadratic bottom friction Cb = 0.002, while e and f use a 10× increase to Cb = 0.02. The northward upper ocean flow of the AMOC is included in all six simulations. The Laplacian coefficient of isopycnal eddy viscosity is A = 20 (10) m²/s for the 1/16° (1/32°) simulations. The SSH contour interval is 8 cm. The mean Gulf Stream IR northwall pathway ±1σ by Cornillon and Sirkes is overlaid on each panel. For more information about the simulations used in Sect. 21.2, see Hurlburt and Hogan (2008). (From Hurlburt and Hogan 2008)
The northward upper ocean component of the AMOC resides in the top 4 layers and is always included, while the DWBC residing in the abyssal layer is included in the simulations in the left column of Fig.€21.2 and turned off in the simulations in the right column. Since the model is purely hydrodynamic, the DWBC can be turned off without altering the watermass characteristics. In the three rows of Fig.€21.2 the model resolution is varied in tandem with the horizontal friction and in the bottom row the bottom friction is increased 10-fold to damp the eddy-driven abyssal circulation. East of 68ºW all of the simulations give similar, generally-realistic Gulf Stream pathways, except near 50ºW, where the simulations with a DWBC exhibit two mean pathways (inner and outer meanders) at the location of the Gulf Stream transition to the North Atlantic Current as it rounds the southern tip of the Grand Banks, a phenomenon discussed dynamically in Hurlburt and Hogan (2008). All three of the simulations with a DWBC and one of the simulations without it exhibit a realistic mean Gulf Stream pathway west of 68ºW, but the other two simulations without a DWBC exhibit pathways that overshoot the observed separation latitude in accord with the constraint of linear theory on the flow. These results indicate an abyssal current impact on the pathway west of 68ºW. To investigate the impacts of abyssal currents on the Gulf Stream pathway, we use a two-layer theory for abyssal current steering of upper ocean current pathways (Hurlburt and Thompson 1980; Hurlburt et€al. 1996, 2008b). In a two-layer model with no diapycnal mixing, the continuity equation for layer 1 is
∂h1/∂t + v1 · ∇h1 + h1∇ · v1 = 0,   (21.1)

where h1 is the upper layer thickness, t is time, and vi is the velocity in layer i. The geostrophic component of the advective term in (21.1) can be related to the geostrophic velocity (vig) in layer 2 by

v1g · ∇h1 = v2g · ∇h1,   (21.2)

because, from geostrophy,

k × f(v1g − v2g) = −g′∇h1,   (21.3)

and v1g − v2g is therefore parallel to contours of h1. In (21.3) k is a unit vector in the vertical, f = 2ω sin θ is the Coriolis parameter, ω is the Earth's rotation rate, θ is latitude, g′ = g(ρ2 − ρ1)/ρ2 is the reduced gravity due to buoyancy, g is the gravitational acceleration of the Earth, and ρi is the water density in layer i. Since geostrophy is typically a very good approximation outside the equatorial wave guide, and since near-surface currents are normally much stronger than abyssal currents (usually |v1| ≫ |v2|), h1 is a good measure of v1 under these conditions. From the preceding we see that abyssal currents can advect upper layer thickness gradients and therefore the pathways of upper ocean currents. Abyssal current advection of upper ocean current pathways is strengthened when strong abyssal currents intersect upper ocean currents at nearly right angles, but often the end result of this advection is near barotropy because the advection is reduced as v1 and v2 become more nearly parallel (or antiparallel). This theory has proven useful in understanding the dynamics of ocean models with higher vertical resolution when all of the following conditions are satisfied:
(a) the flow is nearly geostrophically balanced, (b) the barotropic and first baroclinic modes are dominant, and (c) the topography does not intrude significantly into the stratified ocean. Additionally, the interpretation in terms of near-surface currents applies when |vnear sfc|››|vabyssal|. Note the theory does not apply at low latitudes because of (a) and (b), but should be useful in large parts of the stratified ocean, even where current systems are relatively weak, as seen in the well-stratified southern half of the Japan/East Sea (Hurlburt et€al. 2008b). While abyssal currents driven by any means can steer upper ocean current pathways, baroclinic or mixed barotropicbaroclinic instability is an important source of abyssal currents because baroclinic instability is very effective in transferring energy from the upper to abyssal ocean. These eddy-driven abyssal currents are constrained to follow the geostrophic contours of the topography and in turn can steer the pathways of upper ocean currents, including their mean pathways. This upper ocean—topographic coupling via flow instabilities requires that the physics of baroclinic instability be very well resolved in order to obtain sufficient downward transfer of energy. As a result, this type of coupling is a key criterion in distinguishing between eddy-resolving and eddypermitting ocean simulations, in regions where it occurs (Hurlburt et€ al. 2008b). Results from this model and ocean models discussed in Sect.€21.3 indicate that the upper ocean—topographic coupling requires the first baroclinic Rossby radius of deformation be resolved by at least 6 grid intervals and even higher resolution is required for realistic eastward penetration of inertial jets. This coupling also highlights the need for eddy-resolving ocean models in ocean prediction systems and in climate prediction models, as discussed in Hurlburt et€al. (2008a, 2009). Based on the preceding discussion, we look in Fig.€21.3 for abyssal currents west of ~68ºW that may advect the simulated Gulf Stream pathways in Fig.€21.2. We start with the simulation shown in Figs.€21.2c and 21.3c because it has 1/32º resolution, the standard bottom friction, and a DWBC. In that simulation abyssal currents pass under the Gulf Stream near 68.5ºW, 72ºW, and the western boundary, all generally southward. The abyssal currents near 68.5ºW and 72ºW cross under at large angles and could clearly advect the Gulf Stream pathway, but the abyssal current adjacent to the western boundary is nearly antiparallel as it crosses under the Gulf Stream, a point noted by Pickart (1994) based on observations, and thus has a weak steering effect on the Gulf Stream pathway. The corresponding simulation without a DWBC (Figs.€ 21.2d and 21.3d) has nearly the same Gulf Stream pathway with an even stronger abyssal current crossing under it near 68.5ºW. The two other simulations without a DWBC have only a weak mean abyssal current crossing under it at this longitude (<3€cm/s), while all of the simulations with realistic Gulf Stream separation have a more robust abyssal current passing under the Gulf Stream near 68.5ºW (>4€cm/s). None of the simulations without a DWBC have an abyssal current crossing under near 72ºW, while all of the simulations with a DWBC have one fed by two branches from the north side. 
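The steering relation in Eqs. (21.2) and (21.3) can be illustrated with a minimal numerical sketch. The Gaussian upper-layer thickness feature, reduced gravity, and uniform 5 cm/s abyssal current below are idealized placeholders rather than output from any of the simulations discussed here; the point is simply that an abyssal geostrophic current crossing upper-layer thickness contours advects h1 (and hence the upper-ocean pathway), while the geostrophic shear itself is parallel to those contours.

```python
import numpy as np

# Sketch of the two-layer steering diagnostic of Eqs. (21.2)-(21.3).
# The h1 field and the abyssal current are idealized, not model output.
f = 9.0e-5           # Coriolis parameter near 38N (s^-1)
gprime = 0.02        # reduced gravity g(rho2 - rho1)/rho2 (m s^-2), assumed
dx = dy = 5.0e3      # grid spacing (m)

# Idealized upper-layer thickness: 600 m background plus a 200 m Gaussian bump
x = np.arange(0.0, 400e3, dx)
y = np.arange(0.0, 400e3, dy)
X, Y = np.meshgrid(x, y)                      # rows vary in y, columns in x
h1 = 600.0 + 200.0 * np.exp(-((X - 200e3) ** 2 + (Y - 200e3) ** 2) / (60e3) ** 2)

dh1_dy, dh1_dx = np.gradient(h1, dy, dx)

# Geostrophic shear from Eq. (21.3): v1g - v2g = (g'/f) k x grad(h1),
# which is parallel to the h1 contours.
du = -(gprime / f) * dh1_dy                   # u1g - u2g
dv = (gprime / f) * dh1_dx                    # v1g - v2g

# Uniform abyssal geostrophic current crossing the feature from the north
u2, v2 = 0.0, -0.05                           # 5 cm/s southward

# Steering term v2g . grad(h1); by Eq. (21.2) it equals v1g . grad(h1).
# A nonzero value means the abyssal flow advects the upper-layer thickness
# field, i.e. it steers the upper-ocean current pathway.
steer = u2 * dh1_dx + v2 * dh1_dy
print("max |v1g - v2g|      = %.2f m/s" % np.hypot(du, dv).max())
print("max |v2g . grad(h1)| = %.2e m/s" % np.abs(steer).max())
```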
The 1/32° simulations with a DWBC and standard (Figs. 21.2c and 21.3c) or high bottom friction (Figs. 21.2e and 21.3e) have nearly the same Gulf Stream pathway between the western boundary and 68°W, but in the simulation with high bottom friction the abyssal currents crossing under the Gulf Stream near 72°W are extremely weak. Thus, the abyssal current crossing under the Gulf Stream near 68.5°W is clearly the one that is essential for the model's simulation of a realistic Gulf Stream pathway between the western boundary and 68°W.
Fig. 21.3 Same simulations as Fig. 21.2 but depicting mean abyssal currents (arrows) overlaid on isotachs (in cm/s). The DWBC is most easily seen paralleling the northern model boundary north of 41°N between 65 and 51°W in panels (a, c, e). In the simulations with no DWBC (panels b, d, f) that current is not present. (From Hurlburt and Hogan 2008)
Further, the DWBC is not necessary for simulation of a realistic Gulf Stream pathway, but it augments the key abyssal current sufficiently for that to occur in the two simulations with the weaker eddy-driven abyssal circulations. The 1/32° simulation with standard bottom friction and a DWBC (Fig. 21.3c) is used in a zoom of the mean abyssal currents with the addition of topographic contours (Fig. 21.4a). The plotted contours are for the vertically uncompressed (real) topography to facilitate comparisons between model and observed abyssal currents in relation to topographic features. Figure 21.4b depicts mean abyssal currents and uncompressed topography from a corresponding 1/8° eddy-permitting simulation over a larger region, with the zoom region of Fig. 21.4a marked with a box. It should be noted that eddy-resolving and eddy-permitting OGCMs with higher vertical resolution and thermodynamics are typically characterized by their equatorial resolution, whereas the simulations in Sect. 21.2 are characterized by mid-latitude resolution. Thus, the corresponding equatorial resolution of the simulations in Fig. 21.4a, b would be 1/24° and 1/6°, respectively. Unlike the 1/32° simulation (Fig. 21.4a), the abyssal circulation in the 1/8° model is dominated by the DWBC, which crosses under the observed location of the Gulf Stream near 72°W, and the eddy-driven abyssal circulation is extremely weak (Fig. 21.4b). In particular, the 1/8° model does not simulate the key abyssal current near 68.5°W. The DWBC augments this current in two of the simulations (Fig. 21.3a, e) because the DWBC and the eddy-driven abyssal circulation interact and become intertwined in the eddy-resolving simulations. The surface circulation in the 1/8° model is basically a wiggly version of the linear solution (Hurlburt and Hogan 2000, their Fig. 4a); Hurlburt and Hogan (2000) also present numerous model-data comparisons for the 1/16° simulation in Figs. 21.2a and 21.3a and the 1/32° simulation in Figs. 21.2c and 21.3c. In addition to the abyssal current adjacent to the western boundary, abyssal currents are seen crossing under the Gulf Stream via three different pathways centered over different isobaths between the western boundary and 68°W. North of the Gulf Stream these pathways are centered over the 4,200, 3,700 and 3,100 m isobaths, the first crossing under near 68.5°W, the other two crossing under in a confluence near 72°W. All three abyssal currents cross isobaths to deeper depths while passing under the Gulf Stream. They do this to conserve potential vorticity in relation to the downward north-to-south slope of the base of the thermocline, in accord with the theory of Hogg and Stommel (1985). The two currents over deeper isobaths retroflect toward the east and then take a variety of simple to complex pathways into the ocean interior (complex even in the mean, e.g. Fig. 21.3c). Ultimately all of these pathways emerge from the interior as a single strong abyssal current along a gentle escarpment. That current exits Fig. 21.4a near 72°W and rejoins the DWBC along the continental slope near 33°N. In contrast, the branch centered over the 3,100 m isobath north of the stream continues along the continental slope south of the stream (centered above the 3,700 m isobath). Each cross-under pathway is influenced by specific features of the topography and each also flows along one side of an associated eddy-driven abyssal gyre centered directly beneath the Gulf Stream.
These gyres are located in regions where the slopes of the topography and the base of the thermocline are matched closely enough to create regions of quite uniform potential vorticity for abyssal currents, as shown in Hurlburt and Hogan (2008).
Fig. 21.4 a Zoom of Fig. 21.3c with (full-amplitude, uncompressed) depth contours (in m) overlaid to facilitate geographical co-location in the model-data comparisons. b Same as a but covering a larger region from a corresponding 1/8° simulation, overlaid with a box outlining the region covered by a. In the 1/8° simulation A = 100 m²/s and Cb = 0.002. (From Hurlburt and Hogan 2008)
The shallowest and westernmost gyre is anticyclonic, while the two associated with eastward retroflections into the interior are cyclonic, all three in accord with the sign of the relative vorticity generated by topographic constraints on the pathways of the associated abyssal currents as they cross under the Gulf Stream (shown in Hurlburt and Hogan 2008).
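The Hogg and Stommel (1985) constraint invoked above can be checked with a back-of-the-envelope potential vorticity calculation: the abyssal layer conserves f/h, where h is the bottom depth minus the depth of the thermocline base, so a current passing from north to south under the stream, where the thermocline base is deeper, must move to a deeper isobath. The thermocline depths, latitudes, and starting isobath in the sketch below are illustrative round numbers, not values diagnosed from the simulations.

```python
import numpy as np

# Back-of-envelope illustration of the Hogg and Stommel (1985) constraint.
# Abyssal-layer potential vorticity q = f / h, h = bottom depth minus the
# depth of the thermocline base.  All numbers below are illustrative.
omega = 7.292e-5
f_north = 2.0 * omega * np.sin(np.deg2rad(38.5))   # north of the stream
f_south = 2.0 * omega * np.sin(np.deg2rad(37.0))   # south of the stream

z_therm_north = 800.0    # m, assumed thermocline base north of the stream
z_therm_south = 1100.0   # m, assumed thermocline base south of the stream
bottom_north = 4200.0    # m, isobath followed north of the stream

h_north = bottom_north - z_therm_north
q = f_north / h_north                               # conserved following the flow

# Bottom depth needed south of the stream to keep q unchanged
h_south = f_south / q
bottom_south = z_therm_south + h_south
print("isobath required south of the stream: ~%.0f m" % bottom_south)
```

With these numbers the parcel must shift from the 4,200 m isobath to roughly the 4,400 m isobath, the sense of cross-isobath deepening described above; the small decrease in f going south offsets part of the 300 m deepening of the thermocline base.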
21.2.3 Observational Evidence of Abyssal Currents in the Gulf Stream Region

Figure 21.5 (bottom) (from Johns et al. 1995) presents observational evidence for the key abyssal current crossing under the Gulf Stream near 68.5°W, including current speeds similar to the model, currents crossing isobaths to deeper depths beneath the Gulf Stream, and a closed cyclonic circulation. Additionally, the currents above the shallowest isobaths within the observational array flow along isobaths that would feed into the retroflecting abyssal current that crosses under the Gulf Stream near 72°W. Figure 21.6 (from Pickart and Watts 1990) provides a composite of historical abyssal current measurements 100–300 m above the bottom. It provides striking evidence of the complete cyclonic abyssal gyre centered near 37°N, 71°W with current speeds similar to the model. Another salient observation is the ~12.5 cm/s west-southwestward current near 34.5°N, 71.1°W that corroborates the strong abyssal current along the gentle escarpment in Fig. 21.4a (10.5 cm/s at the same location in the model). Like the model (Fig. 21.4a), the observation-based abyssal current schematic of Schmitz and McCartney (1993, their Fig. 12a) depicts a retroflecting abyssal current pathway that later rejoins the DWBC, in addition to a pathway that continues along the continental slope. These two pathways are also consistent with Range and Fixing of Sound (RAFOS) float trajectories at 3,500 m depth discussed in Bower and Hunt (2000). RAFOS floats that crossed under the Gulf Stream west of ~71°W continued generally southward along a deeper isobath of the continental slope, while floats crossing under east of ~71°W retroflected into the interior, most of them taking complex eddying trajectories. Of the six retroflecting trajectories shown in Bower and Hunt (2000, their Fig. 7), the one that crossed under at the location of the key abyssal current (near 69°W) (their Fig. 7j) took an eddying trajectory en route to a small-amplitude double retroflection, first to the east (at 36.7°N, 70.1°W) and then to the west (at 36.0°N, 68.4°W), before rapidly following a nearly straight-line trajectory along the gentle escarpment, an overall trajectory in good agreement with the model mean in Fig. 21.4a and one that provides additional evidence for the strong eddy-driven abyssal current along the gentle escarpment (seen in the southern part of Fig. 21.4a). This abyssal current (also seen in Fig. 21.6) is completely absent in the 1/8° eddy-permitting simulation (Fig. 21.4b), as are the observed cyclonic abyssal gyre centered near 37°N, 71°W (Fig. 21.6) and the abyssal current observed crossing under the Gulf Stream between 68° and 69°W (Fig. 21.5).
Fig. 21.5 Mean current meter velocities at 400 m (top) and 3,500 m (bottom) over the entire deployment, June 1988–August 1990. All of the vectors represent 26-month means except at sites H5 and M13, which are approximately 1-year means. (From Johns et al. 1995)
Fig. 21.6 Mean current meter velocities 100–300 m above the bottom from historical measurements collected in the Middle Atlantic Bight. The record lengths of the measurements vary from 4 months to 2 years, and the box associated with each vector represents the uncertainty of the mean, typically 1–2 cm/s. (From Pickart and Watts 1990)
21.2.4 Gulf Stream Separation and Pathway Dynamics, Part I: Abyssal Current Impact

An eddy-driven abyssal current, the local topographic configuration, and a Gulf Stream feedback mechanism constrain the latitude of the Gulf Stream near 68.5°W. To help illustrate the steps explaining this statement, Fig. 21.7 depicts the mean depth of the base of the model thermocline overlaid with the same mean abyssal currents and topographic contours as Fig. 21.4a. The results are from the same 1/32° simulation with a DWBC shown in Figs. 21.2c, 21.3c, and 21.4a. The steps in the explanation are (1) an eddy-driven abyssal current, possibly augmented by the DWBC, approaches from the northeast and advects the Gulf Stream pathway southward, i.e. prevents the overshoot pathway seen in Figs. 21.2b, f. (2) To conserve potential vorticity, the abyssal current crosses to deeper depths while passing under the Gulf Stream (Hogg and Stommel 1985), a feedback mechanism that allows the Gulf Stream to help determine its own latitude. (3) Due to the topographic configuration, the passage to deeper depths requires curvature toward the east and generation of positive relative vorticity. (4) Once the abyssal current becomes parallel to the Gulf Stream, further southward advection of the Gulf Stream
Fig. 21.7 Same as Fig. 21.4a but with the isotachs replaced by the mean depth (in m, in color) at the base of the model thermocline from the same simulation, i.e. the mean depth of the interface between layer 4 and layer 5 (the abyssal layer) from the 1/32° simulation depicted in Figs. 21.2c and 21.3c. (From Hurlburt and Hogan 2008)
pathway is halted. (5) The local latitude of the Gulf Stream is determined by the northernmost latitude where the abyssal current can become parallel to the Gulf Stream. (6) Due to constraints of the local topographic configuration on this process, the resulting local Gulf Stream latitude is not very sensitive to the strength of the abyssal current, once it is sufficient to perform the advective role. However, the results of these dynamics would be sensitive to the location of abyssal currents in relation to the isobaths, the accuracy of the model in representing key topographic features, and the depth change in the base of the thermocline across the Gulf Stream. Essentially the same explanation can be applied to the effects of the abyssal current crossing under the Gulf Stream near 72ºW (when present and sufficiently strong) and to abyssal currents that develop either cyclonic or anticyclonic curvature and become either parallel or antiparallel to the Gulf Stream while crossing underneath. However, the response to the abyssal current near 72ºW is minimal as evidenced in Figs.€21.2 and 21.3 and an impact is visible only in the 1/32º simulation with a DWBC and standard bottom friction (Cbâ•›=â•›0.002) (Fig.€21.2c). In Fig.€21.2c there is a straightening of the Gulf Stream pathway over ~73−70ºW not seen in the other figure panels. This phenomenon is also evident in the overlaid mean Gulf Stream IR northwall frontal pathway and in the Gulf Stream pathway as depicted by
the 12ºC isotherm at 400€m depth, the latter shown in Watts et€al. (1995). An explanation for the slight impact of this abyssal current on this Gulf Stream simulation is discussed in the next subsection. Additionally, it should be noted that the scale of the eddy-driven mean abyssal gyres beneath the Gulf Stream is similar to the width of the stream (Fig.€21.7) and related to regions of nearly uniform potential vorticity beneath the stream (Hurlburt and Hogan 2008), where slopes of topography and the base of the thermocline are quite well matched. These gyres are not related to mean meanders in the Gulf Stream. In contrast, the Kuroshio exhibits two mean northward meanders just east of where the Kuroshio separates from the coast of Japan, meanders that are dynamically related to eddy-driven mean abyssal gyres, as discussed in Hurlburt et€al. (1996, 2008b).
21.2.5 Gulf Stream Boundary Separation as an Inertial Jet Following a Constant Absolute Vorticity (CAV) Trajectory

Constraint of the Gulf Stream latitude near 68.5°W is not a sufficient explanation of the Gulf Stream pathway between the western boundary and 69°W. Further, the abyssal current crossing under the Gulf Stream near 72°W demonstrated little effect on the pathway. Thus, there must be another essential contribution to Gulf Stream pathway dynamics over that longitude range. Using along-track data from four satellite altimeters, Fig. 21.8 depicts only a narrow band of high SSH variability along the Gulf Stream west of 69°W, indicating a relatively stable pathway segment in that region. Thus, we test the relevance of a particular type of theoretical inertial jet pathway, namely a CAV trajectory (Rossby 1940; Haltiner and Martin 1957; Reid 1972; Hurlburt and Thompson 1980, 1982). In a nonlinear 1.5-layer reduced-gravity model, a CAV trajectory requires a frictionless steady free jet with the streamline at the core of the current following contours of constant SSH and layer thickness. The latter requires geostrophic balance, so that conservation of potential vorticity becomes conservation of absolute vorticity along a streamline at the core of the current. Accordingly, the simulations in Fig. 21.2 were tested to see if (a) the mean path of the current core in the top layer of the model (black line in Fig. 21.9) overlaid an SSH contour (yellow-green line in Fig. 21.9) and (b) there was a narrow band of high SSH variability along the current core between the western boundary and 69°W (plotted in color in Fig. 21.9). Following Reid (1972) and Hurlburt and Thompson (1980, 1982), the CAV trajectories were calculated from
cos α = cos αo + y²/(2r²) − y/γo,   (21.4)
which is an integrated form of the differential equation that assumes the velocity at the core of the current, υc, is constant, and where r = (υc/β)½, β is the variation of
the Coriolis parameter with latitude, α is the angle of the current with respect to the positive x-axis on a β-plane, y is the distance of the trajectory from the x-axis, γ is the trajectory radius of curvature, and the subscript o indicates values at the origin of the trajectory calculation (here at an inflection point where γo → ∞). The amplitude b (here the northernmost point) of the trajectory in relation to the inflection points can be calculated from

b = 2r sin(αo/2).   (21.5)

Fig. 21.8 Along-track SSH variability from quasi-contemporaneous satellite altimeter data in four different orbits overlaid on topographic contours (depth in m): Jason-1 over the period 15 Jan. 2002–18 Oct. 2007, GFO over 15 July 1999–12 Dec. 2007, Envisat over 24 Sept. 2002–29 Oct. 2007, and Topex in an interleaved orbit over 16 Sept. 2002–8 Oct. 2005. The tracks are overlaid in the following order from top to bottom: (1) Envisat, (2) GFO, (3) Jason-1, and (4) Topex interleaved. (Provided by Gregg Jacobs, NRL). (From Hurlburt and Hogan 2008)
In order for the Gulf Stream to separate from the western boundary as a free jet following a CAV trajectory, the CAV trajectory must be initialized with a trajectory inflection (γo → ∞) located at the separation point. Since the angle of separation (↜αo) is north of due east, the CAV trajectory must subsequently develop curvature that is concave toward the south. If the simulation exhibits curvature to the north after separation, then it does not separate from the western boundary as a free jet, even through it may have one or more segments downstream that follow a CAV trajectory. The calculated CAV trajectories are overlaid as red curves on Fig.€21.9. Details of the CAV trajectory calculations can be found in Table€2 of Hurlburt and Hogan (2008). The speed at the core of the current (υc) near separation from the western boundary is 1.6–1.7€m/s in the 1/16º simulations and 1.9–2.0€m/s in the 1/32º
Fig. 21.9 CAV trajectory analysis for Gulf Stream pathways simulated by the six simulations illustrated in Fig. 21.2. The pathway of the maximum velocity at the core of the current (black line), the closest SSH contour (yellow-green line), the corresponding CAV trajectory (red line with a dot at the inflection point), the observed IR northwall frontal pathway ± std. dev. (violet lines), and the simulated SSH variability are overlaid on each panel. Due to the hierarchy of the overlaid lines (light violet, red, black, yellow-green from top to bottom), lines on the bottom tend to be obscured where close agreement occurs. That is particularly the case for the yellow-green SSH contour west of ~68°W, where the core of the current overlaying a single SSH contour is a prerequisite for the existence of a CAV trajectory. The SSH contour closest to the pathway of the velocity maximum is skewed toward the north side of the model Gulf Stream as depicted in SSH and is −24 cm in a, −16 cm in b, −28 cm in c, and −24 cm in d–f. See the corresponding panels in Fig. 21.2. Near the western boundary the Gulf Stream axis from Topex/Poseidon altimetry (Lee 1997) diverges from the IR frontal pathway in accord with the model simulations of panels a, c, d, and e (see Hurlburt and Hogan 2000, their Fig. 7). (From Hurlburt and Hogan 2008)
simulations, in line with observations of 1.6–2.1€m/s reported in Halkin and Rossby (1985), Joyce et€al. (1986), Johns et€al. (1995), Schmitz (1996), and Rossby et€al. (2005). A model mean υc over 75–70ºW was used in the CAV trajectory calculations. The angle of separation is 53±3º north of due east for the simulations with a realistic pathway and the inflection points used to initialize the CAV trajectory calculations are marked by red dots on the trajectories. Between the western boundary and ~70ºW, the four simulations with a realistic Gulf Stream pathway demonstrate close agreement between the model pathway, as represented by υc, and the corresponding CAV trajectory. However, the two simulations with pathways that overshoot the latitude of the observed Gulf Stream pathway exhibit curvature to the north immediately after separation and an inflection point (red dot) located northeast of separation from the western boundary. That means they do not separate from the western boundary as a free jet, but instead indicate a strong influence from the constraints of linear dynamics (Fig.€21.1). Thus, CAV trajectory dynamics alone are not sufficient to explain the Gulf Stream pathway between the western boundary and 69ºW. However, they do explain the small impact of the abyssal current crossing under the Gulf Stream near 72ºW (Fig.€21.4a), because the abyssal current and the CAV trajectory give nearly the same Gulf Stream latitude at that location (Fig.€21.9c).
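The CAV trajectory defined by Eqs. (21.4) and (21.5) is easy to integrate numerically. The sketch below marches a trajectory away from an inflection point at the origin (γo → ∞) and compares its northernmost excursion with the analytic amplitude b = 2r sin(αo/2); the core speed, separation angle, and reference latitude are round numbers within the ranges quoted above, not values from a particular simulation.

```python
import numpy as np

# Minimal CAV trajectory integration (Eqs. 21.4-21.5) for illustrative
# parameters: vc and alpha0 are round numbers in the observed range.
omega, a_earth = 7.292e-5, 6.371e6
lat0 = 36.0                                    # latitude of the inflection point
beta = 2.0 * omega * np.cos(np.deg2rad(lat0)) / a_earth
vc = 1.8                                       # core speed (m/s), assumed
alpha0 = np.deg2rad(53.0)                      # separation angle north of due east
r = np.sqrt(vc / beta)                         # inertial length scale (m)

# March along the path: dx/ds = cos(alpha), dy/ds = sin(alpha),
# d(alpha)/ds = -y / r**2 for an inflection at the origin (gamma_o -> infinity).
ds = 2.0e3                                     # 2 km step along the path
x, y, alpha = 0.0, 0.0, alpha0
ys = [y]
for _ in range(1000):                          # ~2000 km of pathway
    alpha -= (y / r ** 2) * ds
    x += np.cos(alpha) * ds
    y += np.sin(alpha) * ds
    ys.append(y)

b = 2.0 * r * np.sin(0.5 * alpha0)             # amplitude from Eq. (21.5)
print("r = %.0f km, analytic amplitude b = %.0f km" % (r / 1e3, b / 1e3))
print("max northward excursion of integrated path = %.0f km" % (max(ys) / 1e3))
```

With these values r ≈ 310 km and b ≈ 280 km, i.e. the free jet arcs a few hundred kilometres north of the inflection latitude before curving back.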
21.2.6 Gulf Stream Separation and Pathway Dynamics, Part II: Role of CAV Trajectories

In the simulations with a realistic Gulf Stream, the mean pathway closely follows a CAV trajectory between its separation from the western boundary and ~70°W. The CAV trajectory depends on (1) the angle of boundary current separation (with respect to latitude), as largely determined by the angle of the shelf break prior to separation, (2) the speed at the core of the current, and (3) an inflection point located where boundary current separation occurs.
21.2.7 Gulf Stream Separation and Pathway Dynamics, Part III: The Cooperative Interaction of Abyssal Currents and CAV Trajectories

Neither abyssal currents nor CAV trajectories alone are sufficient to explain Gulf Stream separation from the western boundary and its pathway to the east. Abyssal current constraint of the Gulf Stream latitude near 68.5°W, in conjunction with the topographic configuration and a Gulf Stream feedback mechanism, is not a sufficient explanation of the Gulf Stream pathway between the western boundary and 68°W. Gulf Stream simulations with realistic speeds at the core of the current are not sufficiently inertial (a) to overcome the linear solution demand for an overshoot
pathway and (b) to obtain realistic separation without assistance from the abyssal current near 68.5ºW. Thus a CAV trajectory and the constraint on the latitude of the Gulf Stream near 68.5ºW work together in simulation of a realistic Gulf Stream pathway between the western boundary and 68ºW. The eddy-driven abyssal circulation is sufficient to obtain the key abyssal current, which was not simulated without it. The DWBC is not necessary, but did augment the key abyssal current and did assist the eddy-driven abyssal circulation in effecting realistic Gulf Stream separation, when the latter was not strong enough by itself. The impact of the DWBC on Gulf Stream separation was resolution dependent, required at 1/16º, but not at 1/32º resolution. Finally, the dynamical explanation is robust. As long as the speed at the core of the current was consistent with observations and the key abyssal current was sufficiently strong, the simulated Gulf Stream separation and its pathway to the east were in close agreement with observations despite differences in model resolution, bottom friction, strength of the abyssal circulation, and the presence or absence of a DWBC. Further, the explanation is consistent with a wide range of key observational evidence in the upper and abyssal ocean, including a 15-year mean Gulf Stream IR northwall pathway, the speed at the core of the current near Gulf Stream separation, the pattern of sea surface height variability from satellite altimetry, and mean abyssal currents. Hurlburt and Hogan (2000) present a large number of additional model-data comparisons for the simulations depicted in Fig.€21.2a, c.
21.3 Dynamical Evaluation of Gulf Stream Simulations by Eddy-Resolving Global and Basin-Scale OGCMs

Significant success has been achieved in simulating the Gulf Stream pathway in eddy-resolving basin-scale OGCMs with thermodynamics and higher vertical resolution (20–50 layers or levels) than the 5 layers used in the hydrodynamic model discussed in Sect. 21.2. However, the OGCM simulations have been very sensitive to changes, such as subgrid-scale parameterizations and parameter values. Thus, it has been difficult to obtain consistent results, and many simulations have exhibited serious flaws (Paiva et al. 1999; Smith et al. 2000; Bryan et al. 2007; Chassignet and Marshall 2008; Hecht and Smith 2008; Hecht et al. 2008). In this section we perform a dynamical evaluation of eddy-resolving global and basin-scale OGCM simulations of Gulf Stream separation and its pathway to the east. The immediate goals are to better identify and understand the sources of success and failure and, in Sect. 21.4, to assess the impacts of data assimilation. So far, eddy-resolving global and basin-scale ocean prediction systems have demonstrated only 10- to 15-day forecast skill in the Gulf Stream region based on anomaly correlation >0.6, versus 30 days or more in some regions (Smedstad et al. 2003; Shriver et al. 2007; Hurlburt et al. 2008a; Chassignet et al. 2009; Hurlburt et al. 2009). Future goals are improved and more consistently realistic simulations of the Gulf Stream, increased ability to nowcast and forecast it on time scales up to a month, improved climate prediction in the Gulf
Stream region, and increased efforts to understand OGCM dynamics and dynamically evaluate their simulations in other regions.

A set of eddy-resolving global and basin-scale simulations from HYCOM, MICOM, NEMO, and POP is used in the evaluation (see Table 21.1). The resolution and model domain range from 1/10° Atlantic to 1/25° global. In addition to simulations with a realistic Gulf Stream pathway and dynamics consistent with observations, simulations with several types of flaws are evaluated, including (a) a realistic pathway with unrealistic dynamics, (b) overshoot pathways, (c) premature separation south of Cape Hatteras (the observed location), (d) pathways that separate at Cape Hatteras but have a pathway segment that is too far south east of the separation point, (e) pathways that bifurcate at or after separation, and (f) pathways impacted by unrealistic behavior upstream of the separation point, such as excessive variability or persistent large seaward loops east of the observed mean pathway. All four of the models used here have simulated a variety of Gulf Stream pathways, as illustrated here and in the references cited above.

To streamline the evaluation for the purpose of this discussion, we focus on the following: (1) To evaluate the mean path, mean SSH from the model is overlaid by the 15-year mean Gulf Stream IR northwall pathway ±1 standard deviation by Cornillon and Sirkes (unpublished). This frontal pathway has 0.1° longitudinal resolution and lies along the northern edge of the Gulf Stream. (2) SSH variability is used to look for a narrow band of high variability west of 69°W and, combined with abyssal eddy kinetic energy (EKE), it is used to identify regions of baroclinic instability. Thus these fields help identify the dynamics of Gulf Stream pathway segments and source regions for eddy-driven mean abyssal currents. (3) Mean speed at the core of the current is used to assess whether or not the simulated Gulf Stream inertial jet is consistent with observations near the western boundary. (4) The DWBC (a term used to identify mean abyssal currents that are clearly part of the AMOC) and eddy-driven mean abyssal currents are used to assess their impact in steering the Gulf Stream pathway and related upper ocean features. Depending on their strength and location in relation to the isobaths, abyssal currents have the potential to reduce or increase the errors in the simulated pathways. (5) Both the strength and depth structure of the AMOC can affect the Gulf Stream pathway. Increasing the strength can make the simulated Gulf Stream more inertial, but can also increase the tendency for an overshoot pathway based on linear dynamics. The depth structure of the AMOC influences the depths of the isobaths followed by the DWBC and interactions between the DWBC and the eddy-driven abyssal circulation. (6) The basin-wide linear solution response to the mean wind stress forcing yields the constraints of linear dynamics on the strength and pathways of wind-driven currents in the Gulf Stream region. CAV trajectories were not calculated because there is sufficient proxy information to assess this from the mean pathway, the mean core speed near separation, and the characteristic narrow band of SSH variability along the Gulf Stream west of ~69°W.
Table 21.1 Description of OGCM simulations and hindcasts used for dynamical analysis

Ocean Model(a) | Experiment number | Horizontal Resolution(b) | Vertical Resolution | Years used | Comments

Section 21.3 Simulations
Atlantic MICOM | 1.0 | 1/12° | 20 coordinate surfaces | 1982–1983 |
Atlantic NEMO | T46 | 1/12° | 50 levels | 2004–2006 |
Global NEMO | T103 | 1/12° | 50 levels | 2004–2006 |
Atlantic POP | 14x | 1/10° | 40 levels | 1998–2000 |
Atlantic HYCOM | 1.8 | 1/12° | 32 coordinate surfaces | 3–6 and 11–13 | Near twin of 1/12° global 18.0(c)
Global HYCOM | 9.4 | 1/12° | 32 coordinate surfaces | 12–15 |
Global HYCOM | 9.7 | 1/12° | 32 coordinate surfaces | 2004–2007 | Twin of 1/12° 14.1 but no tides
Global HYCOM | 14.1 | 1/12° | 32 coordinate surfaces | 2004–2007 | Twin of 1/12° 9.7 with tides(d)
Global HYCOM | 18.0 | 1/12° | 32 coordinate surfaces | 4–6 and 9–10 | Near twin of 1/25° 4.0
Global HYCOM | 4.0 | 1/25° | 32 coordinate surfaces | 3, 5–10 | Near twin of 1/12° 18.0

Section 21.4 Simulations and hindcasts
Global HYCOM | 5.8 | 1/12° | 32 coordinate surfaces | 2004–2006 | No data assimilation
Global HYCOM | 60.5 | 1/12° | 32 coordinate surfaces | 2004–2006 | Cooper-Haines(e)
Global HYCOM | 19.0 | 1/12° | 32 coordinate surfaces | 6/2007–5/2008 | No data assimilation
Global HYCOM | 74.2 | 1/12° | 32 coordinate surfaces | 6/2007–5/2008 | MODAS synthetics(e)

(a) MICOM Miami Isopycnic Coordinate Ocean Model, isopycnal coordinates on a C-grid; NEMO Nucleus for European Modelling of the Ocean, z-levels with terrain-following coordinates in shallow water on a C-grid; POP Parallel Ocean Program, z-levels on a B-grid; HYCOM HYbrid Coordinate Ocean Model, hybrid isopycnal/pressure levels/terrain-following in shallow water on a C-grid
(b) Resolution for each prognostic variable
(c) Twin of 1/12° global HYCOM 18.0 except for the model domain and relaxation to temperature (T) and salinity (S) climatology in buffer zones within 3° of the model boundaries at 28°S and 80°N; global HYCOM experiments are from the GLBa series and all HYCOM experiments use topography based on DBDB2 by D.S.K. (see http://www.7320.nrlssc.navy.mil/DBDB2_WWW)
(d) Includes external and internal tides from 8 tidal constituents (Arbic et al. 2010)
(e) Downward projection method for the SSH updates, i.e. Cooper and Haines (1996) or synthetic T&S profiles using the Modular Ocean Data Assimilation System (MODAS) (Fox et al. 2002). In both cases the Navy Coupled Ocean Data Assimilation (NCODA) system (Cummings 2005) was then used to assimilate all the data
In the dynamical evaluation we focus on a segment of the Gulf Stream that extends from 30°N, 80°W, upstream of the observed separation latitude near 35.5°N, 74.5°W, to about 68°W; characterizations of accuracy refer to pathway segments and other features within this region, even though a larger region may be depicted in some figures. In Sect. 21.3.1 we present the mass transport streamfunction from linear simulations forced by wind stress products used in forcing the OGCM simulations discussed later. In Sect. 21.3.2 we discuss four simulations with a realistic Gulf Stream pathway and quite realistic Gulf Stream dynamics. In the remaining subsections we discuss simulations with different types of flaws, outlined earlier, including a simulation with a realistic Gulf Stream pathway but unrealistic separation dynamics. In each case one to four examples are used to help illustrate the range of simulated results and dynamics. None of the simulations in Sect. 21.3 include ocean data assimilation. Additionally, simulations in Sects. 21.2 and 21.3.1 are characterized by mid-latitude resolution (in °), whereas OGCMs in Sects. 21.3 and 21.4 are characterized by equatorial resolution, making 1/16° resolution in Sects. 21.2 and 21.3.1 approximately the same as 1/12° resolution for OGCMs in Sects. 21.3 and 21.4, ~7 km at mid-latitudes.
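The mean SSH, SSH variability, and core-speed diagnostics listed earlier in this section, together with the anomaly correlation quoted as the forecast-skill metric, are simple statistics of gridded model output. The sketch below is purely illustrative (the array layouts, variable names, and the separation-point box are assumptions; only the diagnostics themselves come from the text) and shows one way such quantities might be computed with NumPy.

import numpy as np

def mean_and_variability(ssh):
    # ssh: array (time, lat, lon) of sea surface height, e.g. in cm.
    # Returns the time mean and the standard deviation in time,
    # i.e. the kind of "SSH variability" mapped in the figures.
    return ssh.mean(axis=0), ssh.std(axis=0)

def core_speed_near_separation(u_mean, v_mean, lat, lon,
                               box=(35.0, 36.0, -75.5, -74.0)):
    # Eulerian mean maximum current speed inside a small box placed
    # near the separation point; the box limits here are placeholders.
    speed = np.hypot(u_mean, v_mean)
    inside = ((lat[:, None] >= box[0]) & (lat[:, None] <= box[1]) &
              (lon[None, :] >= box[2]) & (lon[None, :] <= box[3]))
    return np.nanmax(np.where(inside, speed, np.nan))

def anomaly_correlation(forecast, verifying, climatology):
    # Anomaly correlation of a forecast field against a verifying
    # analysis, both referenced to the same climatology (the skill
    # metric behind the >0.6 threshold quoted above).
    f = (forecast - climatology).ravel()
    a = (verifying - climatology).ravel()
    return np.nansum(f * a) / np.sqrt(np.nansum(f**2) * np.nansum(a**2))

A simulation whose value of core_speed_near_separation falls outside the observed 1.6–2.1 m/s range discussed in Sect. 21.3.2 would, under this kind of check, be flagged as insufficiently or excessively inertial near separation.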
21.3.1 Linear Model Gulf Stream Simulations from Wind Stress Products Used for OGCMs in Sects. 21.3 and 21.4

Linear barotropic solutions were obtained for the wind stress forcing products used by OGCM simulations discussed in Sects. 21.3 and 21.4. The solutions were obtained with the same model used in Sect. 21.2.1, but here excluding a contribution from the AMOC. Also, the model was run in barotropic, flat-bottom mode rather than reduced gravity mode, which yields the same mean transport streamfunction. Figure 21.10 depicts the Atlantic mass transport streamfunction from 1/16° linear barotropic simulations forced by several different wind products, but only covering the latitude range of interest here. The wind stress products used to obtain the results in Fig. 21.10 are (a) an interim European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis mean over the years 2004–2006, (b) mean operational ECMWF over 2004–2006, (c) a 1978–2002 climatology derived using an ECMWF 40-year reanalysis (ERA-40) (Kallberg et al. 2004) and (d) a 2003–2008 climatology derived from the Navy Operational Global Atmospheric Prediction System (NOGAPS) (Rosmond et al. 2002). In both (c) and (d) the wind stress was calculated from 10 m winds using a bulk formula from Kara et al. (2005) and with the 10 m wind speeds corrected using a monthly QuikSCAT scatterometer climatology (Kara et al. 2009). For (e) an ERA-15 (Gibson et al. 1999) climatology was used.

Fig. 21.10 Mean transport streamfunction (ψ) from 1/16° linear barotropic flat bottom simulations forced by monthly mean wind stress from a An interim ECMWF reanalysis over 2004–2006. b Operational ECMWF over 2004–2006. c A 1978–2002 climatology from ECMWF ERA-40 with wind speed corrected by a QuikSCAT scatterometer climatology. d NOGAPS over 2003–2008 also with the QuikSCAT correction. e An ECMWF ERA-15 climatology, and f An ECMWF TOGA global surface analysis over 1998–2000. The contour interval is 1 Sv
For (f), the ECMWF TOGA global surface analysis over 1985–early 2001, based on operational ECMWF products, was used (Smith et al. 2000; Bryan et al. 2007). In the latter, 10 m winds were converted to surface stresses using the neutral drag coefficient of Large and Pond (1981). Note that 5 of the 6 wind stress products are linked to ECMWF and one to NOGAPS. Although a temporal mean of the interannual wind products was used to force the linear simulations, otherwise the wind products listed above were used in forcing the simulations listed in Table 21.1: (a) 1/12° Atlantic NEMO, (b) 1/12° global NEMO, (c) all of the HYCOM simulations except as noted, (d) 1/12° global HYCOM 19.0 and 74.2, (e) 1/12° Atlantic MICOM and 1/12° global HYCOM 5.8 and 60.5, and (f) 1/10° Atlantic POP. The resulting streamfunctions are all generally similar in the Gulf Stream region and quite different from that simulated using the smoothed Hellerman and Rosenstein (1983) wind stress climatology (Fig. 21.1). They separate from the western boundary with transports ranging from 20 to 27 Sv versus 30 Sv from smoothed Hellerman-Rosenstein. In all, a large majority of the streamfunction contours separate from the western boundary north of the observed Gulf Stream separation latitude (35.5°N) and at least 50% separate between 35° and 40°N and trend east-northeastward after separation. The two wind stress products with the QuikSCAT-corrected wind speeds give the strongest transports in Fig. 21.10 ((c) 26 Sv from ERA-40/QuikSCAT and (d) 27 Sv from NOGAPS/QuikSCAT). It is significant that almost all of the streamfunction contours driven by these two products leave the western boundary north of the observed Gulf Stream separation latitude. In the case of smoothed Hellerman-Rosenstein, only 17 Sv leave the western boundary north of 35.5°N, suggesting an even stronger tendency for the wind stress products used in Fig. 21.10c, d to drive an overshoot pathway in the OGCM simulations.
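The linear solutions in Fig. 21.10 come from the linear version of the model described in Sect. 21.2.1. As a rough stand-in for readers who want to reproduce the flavor of such a calculation, the interior part of a wind-driven transport streamfunction can be approximated by the classical Sverdrup balance, integrating the wind stress curl westward from the eastern boundary. The sketch below is a minimal illustration under that assumption (regular lat/lon grid, a simple constant-coefficient bulk formula rather than the Kara et al. 2005 formulation, metric terms neglected); it is not the model used to produce Fig. 21.10.

import numpy as np

RHO_AIR = 1.2       # kg m-3 (assumed)
RHO_SEA = 1025.0    # kg m-3 (assumed)
OMEGA = 7.292e-5    # Earth's rotation rate, s-1
R_EARTH = 6.371e6   # Earth radius, m

def bulk_wind_stress(u10, v10, cd=1.3e-3):
    # Constant-coefficient bulk formula (placeholder coefficient):
    # tau = rho_air * Cd * |U10| * U10
    speed = np.hypot(u10, v10)
    return RHO_AIR * cd * speed * u10, RHO_AIR * cd * speed * v10

def sverdrup_psi(taux, tauy, lat, lon):
    # Interior Sverdrup transport streamfunction (Sv) on a regular
    # lat/lon grid:  psi(x) = -(1/(rho*beta)) * integral_x^x_east curl(tau) dx'
    # The western boundary current implicitly closes the circulation.
    latr = np.deg2rad(lat)
    beta = 2.0 * OMEGA * np.cos(latr) / R_EARTH                       # (nlat,)
    dx = R_EARTH * np.cos(latr)[:, None] * np.gradient(np.deg2rad(lon))[None, :]
    dy = R_EARTH * np.gradient(latr)[:, None]
    curl = np.gradient(tauy, axis=1) / dx - np.gradient(taux, axis=0) / dy
    integrand = -curl / (RHO_SEA * beta[:, None]) * dx
    psi = np.cumsum(integrand[:, ::-1], axis=1)[:, ::-1]              # integrate from the east
    return psi / 1.0e6                                                # m3/s -> Sv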
21.3.2 Simulations with a Realistic Gulf Stream Pathway

Figure 21.11 presents mean SSH and Fig. 21.12 SSH variability from four simulations with a realistic Gulf Stream pathway in the segment of interest between separation from the western boundary and 68°W. These are 1/12° Atlantic MICOM (Figs. 21.11a, 21.12a), 1/12° global NEMO (Figs. 21.11b, 21.12b), 1/12° Atlantic HYCOM (Figs. 21.11c, 21.12c), and 1/25° global HYCOM (Figs. 21.11d, 21.12d). In the simulations, the mean IR northwall frontal pathway (in red) closely follows the northern edge of the simulated Gulf Stream over the segment of interest and the simulated pathway is generally realistic within the plot domain.
Fig. 21.11 Mean SSH from simulations with a realistic Gulf Stream pathway: a 1/12° Atlantic MICOM over 1982–1983. b 1/12° global NEMO over 2004–2006. c 1/12° Atlantic HYCOM-1.8, years 3–6. d 1/25° global HYCOM-4.0, years 5–8 (see Table 21.1). The contour interval is 5 cm, a contour interval for SSH used throughout Sects. 21.3 and 21.4. The mean Gulf Stream IR northwall frontal pathway ±1 standard deviation by Cornillon and Sirkes is overlaid in red on each panel and in red or black on many other panels. Sep v = mean speed at the Gulf Stream core near separation from the western boundary. Sep v is also given on other mean SSH and near surface current figure panels
Fig. 21.12 Mean SSH variability from the same four simulations as Fig. 21.11. The contour interval is 2 cm with white spanning 18–20 cm on all SSH variability figure panels
In 1/12° Atlantic MICOM (Fig. 21.12a) and HYCOM (Fig. 21.12c) there is an associated narrow band of high SSH variability west of 69°W, as observed (Fig. 21.8). In 1/12° global NEMO (Fig. 21.12b) and 1/25° global HYCOM (Fig. 21.12d) the band of SSH variability remains relatively narrow west of 69°W, but in 1/12° global NEMO there is a bulge in variability near 72°W and in 1/25° global HYCOM there is a broader band of high variability over the segment of interest and higher variability than observed south of the separation latitude. The blob of high variability near 72°W in 1/12° global NEMO (similar to that seen in Fig. 21.9c, but not in satellite altimetry) suggests that, as in the 1/32° simulation of Figs. 21.2c and 21.9c, the abyssal current cross-under near 72°W is slightly perturbing the simulated Gulf Stream pathway. As in Fig. 21.2c, Fig. 21.11b depicts a straightening of the pathway segment between 73° and 70°W in accord with the overlaid mean IR northwall pathway. In 1/25° global HYCOM (Fig. 21.12d) the broader band of variability is a consequence of small meanders generated south of the separation point that propagate into the segment of interest and a slight northward progression of the simulated pathway over the 4-year mean (years 5–8 after initialization from climatology).

To assess the inertial character of the separating jet in relation to observational evidence, we use the Eulerian mean maximum speed at the core of the jet near the separation point, a relevant location where pathway variability is quite low in most simulations. In comparison to the observed range from 1.6 to 2.1 m/s, the 1/12° Atlantic HYCOM simulation is within the observed range at 1.72 m/s, 1/12° Atlantic MICOM and 1/25° global HYCOM are at the low end with 1.55 m/s, and 1/12° global NEMO is 12% below the range at 1.41 m/s. Although all of the simulations exhibit a realistic Gulf Stream pathway over the segment of interest, overall the 1/12° Atlantic MICOM simulation is in closest agreement with the relevant observational evidence. Therefore, the other three simulations are discussed in relation to that one and the observational evidence.

In representing the mean abyssal currents, a depth average from ~3,000 m to the bottom is appropriate. Except for MICOM, approximately that depth range was used for all the simulations, but with bottom-trapped currents often extended over slightly shallower depths. In MICOM layer 15 is very thick. Since MICOM is isopycnal, the layer interfaces vary in depth, but typically the top of layer 15 is ~2,000 m and the bottom ~3,600 m deep. Although depths this shallow in some OGCM simulations include features that are not bottom trapped (e.g. see Fig. 8 and related discussion in Hecht and Smith 2008), such features are not evident when layer 15 is included in the MICOM depth average over the abyssal layers (Fig. 21.13a). The current segments that cross under the Gulf Stream to deeper depths near 72° and 68.5°W are weaker by ~1 cm/s (one color contour) when layer 15 is included, but that is the most negative effect. Otherwise, adding layer 15 gives a broader picture of the MICOM abyssal circulation. The mean abyssal circulation in MICOM (Fig. 21.13a) depicts the key abyssal current observed crossing under the Gulf Stream near 68.5°W (Fig. 21.5), the observed cyclonic gyre centered near 37°N, 71°W, and the strong current along the gentle escarpment observed at 34.5°N, 71.1°W (Fig. 21.6).
Fig. 21.13 Mean abyssal currents (arrows) overlaid on their isotachs (in color with a 1 cm/s contour interval) and bathymetric depth contours at intervals of 200 (500) m at depths >(<) 3,000 m, from the same four simulations as Fig. 21.11. The reference vector length for the currents is 25 cm/s (black arrow over land). All panels with mean abyssal currents are labeled the same way, except as noted
It is also consistent with the RAFOS float trajectory in Bower and Hunt (2000, their Fig. 7j) and the observation-based schematic in Schmitz and McCartney (1993, their Fig. 12a), both described in Sect. 21.2.3. These features in MICOM are very similar to those in Fig. 21.4a, including similar dual pathways approaching the Gulf Stream cross-under near 72°W. In both cases the eastern pathway crosses under and retroflects to the east, while the western one continues along the continental slope, the latter via different pathways in MICOM and Fig. 21.4a. In MICOM the western pathway follows a sharp westward turn in the 3,200 m isobath along the northern edge of the simulated Gulf Stream, while in Fig. 21.4a it crosses under the Gulf Stream and continues along the 3,600–4,000 m isobaths of the continental slope south of the stream. RAFOS float trajectories at ~3,500 m depth in Bower and Hunt (2000, their Fig. 7) support the existence of this deeper pathway along the continental slope, which is missing in the MICOM simulation.
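The abyssal-current maps discussed here are depth averages over the deep layers, roughly 3,000 m to the bottom, or from the top of layer 15 in MICOM. A minimal sketch of such a thickness-weighted average for a layered model is given below; the array layout, variable names, and the cutoff argument are assumptions for illustration, not the models' native diagnostics.

import numpy as np

def abyssal_average(u, v, layer_thickness, layer_top_depth, cutoff=3000.0):
    # u, v, layer_thickness, layer_top_depth: arrays (layer, lat, lon);
    # thicknesses and depths in metres.  Layers whose tops lie below the
    # cutoff depth contribute with thickness weighting; columns with no
    # abyssal layers are masked with NaN.
    weight = np.where(layer_top_depth >= cutoff, layer_thickness, 0.0)
    wsum = weight.sum(axis=0)
    wsum = np.where(wsum > 0.0, wsum, np.nan)
    ubar = (weight * u).sum(axis=0) / wsum
    vbar = (weight * v).sum(axis=0) / wsum
    return ubar, vbar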
All four of the simulations exhibit the cross-under pathway near 72°W, the dual pathways feeding into it from the north side, the associated cyclonic abyssal gyre, and west-southwestward flow along the gentle escarpment (Fig. 21.13). Except in MICOM, this cross-under flow is augmented by an anticyclonic abyssal gyre on the western side (also seen in Fig. 21.4a), most strongly in 1/12° global NEMO and progressively more weakly in 1/12° Atlantic HYCOM and 1/25° global HYCOM. Unlike the MICOM and NEMO simulations, 1/12° and 1/25° HYCOM simulate the observed continuation of flow along the continental slope from the southern end of the ~72°W cross-under, in addition to the retroflection to the east simulated by all four. All but 1/12° global NEMO simulate the key abyssal current near 68.5°W (Fig. 21.5), most strongly in MICOM and with progressively weaker dual pathways feeding in from the north side in 1/25° and 1/12° HYCOM. As additional constraints on the modeled Gulf Stream pathway, 1/12° global NEMO, 1/12° Atlantic HYCOM, and 1/25° global HYCOM simulate abyssal Gulf Stream cross-under pathways near 67.5°W and 65.5°W not seen in MICOM and Fig. 21.4a. Before completely crossing under the simulated Gulf Stream, both turn to become roughly antiparallel to the model Gulf Stream in a west-southwestward direction along the 4,800–4,900 m isobaths in NEMO and the 4,600–4,800 m isobaths in the two HYCOM simulations. This abyssal current joins the observed pathway near 68.5°W, where it turns southward. In the process it advects the southern edge of the model Gulf Stream southward, forming the eastern edge of one lobe of the western nonlinear recirculation gyre on the south side of the Gulf Stream in NEMO (Fig. 21.11b) and two lobes in the two HYCOM simulations (Fig. 21.11c, d). This effect is not seen in the MICOM simulation (Fig. 21.11a) or the mean SSH (Fig. 21.2c) related to Fig. 21.4a. These two simulations do not exhibit the two eastern cross-unders or the resulting west-southwestward flow along the 4,600–4,900 m isobaths between ~65.5°W and ~68.5°W. No observational evidence was found to confirm or dispute the existence of these abyssal currents.

At >30 Sv in the latitude range of Fig. 21.11, the AMOC in MICOM (Fig. 21.14a) is the strongest of all the simulations considered here, but at the same time the mean speed of the current core near separation is relatively weak, suggesting a relatively weak wind-driven contribution (Fig. 21.10e) and a strong contribution to the abyssal circulation from the AMOC. The depth range of the incoming DWBC (Fig. 21.13a) is conducive to the realistic abyssal current pathways simulated by MICOM. The other three simulations have a ~20 Sv AMOC (Fig. 21.14b–d) and a shallower southward abyssal limb.
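The AMOC strengths quoted above are read from meridional overturning streamfunctions like those in Fig. 21.14, i.e. the zonally integrated meridional transport accumulated in the vertical. A minimal sketch, assuming a regular z-level grid and ignoring the isopycnal-coordinate subtleties of MICOM and HYCOM, is:

import numpy as np

def amoc_streamfunction(v, dx, dz):
    # v: meridional velocity (depth, lat, lon), m/s
    # dx: zonal cell widths (lat, lon), m; dz: layer thicknesses (depth,), m
    # Accumulating the zonal integral of v downward from the surface gives
    # a streamfunction (in Sv) that is positive for the northward upper
    # limb and returns toward zero near the bottom.
    vdx = np.nansum(v * dx[None, :, :], axis=2)    # (depth, lat)
    psi = np.cumsum(vdx * dz[:, None], axis=0)     # accumulate with depth
    return psi / 1.0e6                             # m3/s -> Sv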
21.3.3 Simulation with a Realistic Gulf Stream Pathway and Unrealistic Dynamics
Fig. 21.14 Atlantic meridional overturning circulation (AMOC) streamfunction from the same four simulations as Fig. 21.11. An AMOC streamfunction contour interval of 2.5 Sv is used throughout. The AMOC is northward in the upper ocean and southward in the abyssal ocean
In Sects. 21.2 and 21.3.2 it was evident that ocean models could simulate a realistic mean Gulf Stream pathway with generally realistic dynamics without exhibiting complete agreement with related observational evidence. Here we examine a simulation with a realistic Gulf Stream pathway comparable to the present state of the art. In our region of interest (the separation point to ~68°W) the mean pathway is only slightly too far south (Fig. 21.15a). At 1.34 m/s the mean core speed of the separating jet is 16% below the observed range of 1.6–2.1 m/s. The 12 Sv AMOC over 35–40°N is weaker than found in Sect. 21.3.2 and its southward abyssal flow is too shallow (not shown). As a result, abyssal currents along the continental slope (Fig. 21.15c) are weaker than in any of the simulations shown in Fig. 21.13. The abyssal current crossing under the Gulf Stream near 72°W is present, including the two branches feeding in from the north side, but it is relatively weak. Cross-unders farther to the east are extremely weak.

Then what are the dynamics of the separating jet? The large area of high SSH variability west of 68°W (Fig. 21.15b) is a strong indication that the separation is not associated with CAV trajectory dynamics.
Fig. 21.15 Means from 1/12° global HYCOM-9.4, years 12–15 (Table 21.1), a simulation with a realistic mean Gulf Stream pathway but unrealistic dynamics: a SSH. b SSH variability, and c abyssal currents, isotachs, and depth contours. Sep v = 1.34 m/s, the mean speed at the Gulf Stream core near separation from the western boundary
(1) The broad area of high SSH variability extending north and south of the mean pathway, (2) the large amplitude southern recirculation gyre west of ~69°W (Fig. 21.15a), (3) the eddy-driven mean abyssal gyre (circling 35°N, 72°W in Fig. 21.15c) that is centered directly beneath the surface gyre, and (4) the associated high deep EKE (not shown) are evidence of strong baroclinic instability that encompasses the Gulf Stream and the strong southern recirculation gyre. The separating Gulf Stream pathway lies along the northern edge of the southern recirculation gyre. The eddy-driven mean abyssal gyre lies over relatively flat topography located between regions of sloping topography to the northwest and southeast. The location of the eddy-driven mean abyssal gyre adjacent to the southern edge of the separating jet and the location of the northwesternmost flat topography suggest a topographic role in the pathway of the separating jet, because baroclinic instability and the eddy-driven abyssal gyre would be inhibited if positioned farther north or south over sloping topography. In Sects. 21.2 and 21.3.2 some mean abyssal gyres lie over sloping topography directly beneath the Gulf Stream, including examples of an anticyclonic abyssal gyre located northwest of the anticyclonic gyre just discussed (see Figs. 21.4a and 21.13b–d). In those simulations flow instabilities are limited to a narrow band of high variability along the Gulf Stream (as observed in Fig. 21.8) and the mean abyssal gyres form adjacent to Gulf Stream cross-unders in locations where the slopes of the Gulf Stream thermocline base and the topography are closely enough matched to permit regions of quite uniform potential vorticity. In flat bottom Gulf Stream simulations a barotropic relationship is expected between eddy-driven mean abyssal and upper ocean gyres, with the mean Gulf Stream pathway along the northern (southern) edge of southern (northern) recirculation gyres, as discussed dynamically in Hurlburt and Hogan (2008), but in simulations with non-flat topography the relationship between such gyres is not necessarily barotropic, as illustrated in other subsections.

Further, given the lack of abyssal current assistance east of 72°W, one might anticipate an overshoot pathway. The linear wind-driven simulation (Fig. 21.10c) gives the second strongest tendency for an overshoot pathway of all the linear solutions in Figs. 21.1 and 21.10 (nearly equal to the strongest tendency in Fig. 21.10d), but the 12 Sv AMOC is equal to the weakest of the nonlinear simulations studied here. Returning to the abyssal circulation, we see two relatively weak abyssal currents crossing southward under the Gulf Stream to deeper depths on the north side of the eddy-driven abyssal gyre. The western one crosses under the Gulf Stream between 73° and 74°W and retroflects to the east along the northern edge of the eddy-driven abyssal gyre and the boundary between the Gulf Stream and southern recirculation gyre. The second one near 72°W crosses under without retroflecting and continues southward along the eastern and southern edge of the eddy-driven abyssal gyre. Thus these abyssal currents tend to counteract tendencies for a northward displacement of the Gulf Stream and the adjacent southern recirculation gyre that lies between ~75° and 68°W.

The preceding discussion demonstrates that the simulated Gulf Stream pathway between the western boundary and ~69°W lies in a region of strong baroclinic
instability. Thus, while the 4-year mean Gulf Stream pathway in Fig. 21.15a is quite realistic, the dynamics of the separating jet are unrealistic. The simulation is inconsistent with the observed SSH variability and relevant abyssal current observations. The mean core speed near separation is below the range of observational evidence. The AMOC is too weak and its southward abyssal limb is too shallow.
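Abyssal EKE, used here and elsewhere in the chapter as an indicator of baroclinic instability, is simply the variance of the velocity fluctuations about the time mean. A minimal sketch (array layout assumed) is:

import numpy as np

def eddy_kinetic_energy(u, v):
    # u, v: velocity components, shape (time, lat, lon), e.g. cm/s
    # EKE = 0.5 * <u'^2 + v'^2>, primes = deviations from the time mean
    up = u - u.mean(axis=0)
    vp = v - v.mean(axis=0)
    return 0.5 * (up**2 + vp**2).mean(axis=0)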
21.3.4 Simulations with a Pathway That Overshoots the Observed Latitude

Pathways that overshoot the observed latitude of the Gulf Stream are more characteristic of eddy-permitting OGCMs (Barnier et al. 2006; Bryan et al. 2007), but they can also occur in eddy-resolving OGCMs, as shown in Barnier et al. (2006), Hecht and Smith (2008), and Fig. 21.16. In the discussion of overshoot pathways, the 1/12° Atlantic HYCOM simulation, shown in Figs. 21.16a, b and 21.17a, b, is used as the pivotal experiment and the two global HYCOM simulations are discussed in relation to this one. The 1/12° (Figs. 21.16e, f and 21.17e, f) and 1/25° (Figs. 21.16c, d and 21.17c, d) global HYCOM configurations are identical (including the initialization from climatology and the use of model years 9–10 after initialization), except for the horizontal resolution, friction/diffusion parameters tied to resolution, and the effects of resolution on the bottom topography. All three simulations use the same wind stress forcing as the HYCOM simulation in Sect. 21.3.3 with a weak AMOC, but they use a later version of HYCOM and a modification to salinity relaxation designed to increase the AMOC.

The mean over years 3–6 of the pivotal simulation yielded a realistic Gulf Stream pathway (Fig. 21.11c), but over time the simulation developed the overshoot pathway shown in the mean over years 11–13 in Fig. 21.16a. Due to the modification of the sea surface salinity relaxation, the salinity in the Gulf Stream and in pathways all the way to the Nordic Seas increased over time. As a result, the salinity and density of the Denmark Straits overflow into the subpolar Atlantic also increased. In addition, the strength of the AMOC increased from a mean of ~22 Sv over years 3–6 (Fig. 21.14c) to ~27 Sv over years 11–13 in the latitude range of interest (Fig. 21.17b). In the process the mean core speed of the Gulf Stream near separation increased from 1.72 to 2.15 m/s, making it the most inertial at separation of all the simulations considered and at the top end of observational values. We know from Sect. 21.2 that the AMOC contributes to the demands of linear dynamics for an overshoot pathway, but why did that happen in the Atlantic HYCOM simulation and not in the MICOM simulation, which has a slightly stronger AMOC and a less inertial Gulf Stream near separation?
Fig. 21.16 Mean currents at ~25 m depth (a, c, e) and mean abyssal currents (b, d, f) from three simulations with a Gulf Stream pathway that overshoots the observed separation latitude, (a, b) 1/12° Atlantic HYCOM-1.8, years 11–13, (c, d) 1/25° global HYCOM-4.0, years 9–10, and (e, f) 1/12° global HYCOM-18.0, years 9–10 (Table 21.1). In all panels mean currents (arrows) are overlaid on their isotachs (in color) with contour intervals of 10 (1) cm/s for near surface (abyssal) currents and on depth contours. The reference current vector is 1 (0.25) m/s for near surface (abyssal) currents. Sep v = 2.15, 1.82, and 1.65 m/s in a, c, and e, respectively. The two global simulations are near twins designed to test the impact of increasing the horizontal resolution
Fig. 21.17 SSH variability (a, c, e) and mean AMOC streamfunction (b, d, f) from the same simulations as Fig. 21.16
Also, why did the two global HYCOM simulations (with less impact from modified salinity relaxation) develop an overshoot pathway with an AMOC of ~20 Sv and with only a 5% increase over earlier periods (e.g. Fig. 21.14d vs. Fig. 21.17d for 1/25° global HYCOM), when the overshoot did not occur (e.g., Fig. 21.11d vs. Fig. 21.16c for 1/25° global HYCOM)? The three HYCOM simulations with an overshoot pathway all used the ERA-40/QuikSCAT climatological wind stress forcing that gives a strong tendency for an overshoot pathway based on linear dynamics; see Sect. 21.3.1 and Fig. 21.10c. Further, the AMOC of the HYCOM simulations is relatively concentrated over depths of 2,000–3,000 m in the latitude range of Fig. 21.11, both before (Fig. 21.14c, d) and after (Fig. 21.17b, d, f) developing the overshoot pathway, while the AMOC from MICOM is more uniformly distributed with depth (Fig. 21.14a). Correspondingly, the DWBC in the HYCOM simulations is concentrated along shallower isobaths (Figs. 21.13c, d and 21.16b, d, f) than in the MICOM simulation (Fig. 21.13a). As discussed in Sect. 21.2, that places a greater burden on the eddy-driven abyssal circulation to generate the essential abyssal currents that cross under the Gulf Stream, either directly (Figs. 21.2d and 21.3d) or through interaction with the DWBC (Figs. 21.2a, e and 21.3a, e).
To facilitate discernment of the relationships between near-surface currents, abyssal currents, and the topography, mean currents at the depth of the current core and mean abyssal currents are overlaid with topographic contours and the mean northwall frontal pathway in the same region for all three simulations (Fig. 21.16). The northward penetration of the overshoot is greatest in the 1/12° Atlantic HYCOM simulation (Fig. 21.16a), as expected from the strength of the AMOC. In Fig. 21.16a the simulated Gulf Stream follows the shelf break and continental slope to a location east of 72°W, where a ridge in the topography exists in the depth range 2,200–2,800 m. The other two (Fig. 21.16c, e) reach that point over deeper isobaths of the continental slope, separated from the shelf break. At the location east of 72°W the main core of the current separates from the steeper part of the continental slope and the inshore portion continues along the shelf break and continental slope in all three simulations, although a large portion of the inshore flow does that farther upstream in Fig. 21.16e (1/12° global HYCOM). The source of this bifurcation is a strong southward abyssal current nearly perpendicular to the Gulf Stream along the eastern side of the ridge (Fig. 21.16b, d, f). Part of that current then becomes antiparallel to the Gulf Stream along the south side of the ridge. Since there is no ridge inshore of ~2,200 m, the inshore portion of the bifurcation continues along the shelf break. An additional portion on the inshore side of the stream joins the portion along the shelf break between 69° and 67°W in Fig. 21.16a, c, where the joining current flows nearly parallel to the isobaths and antiparallel to the underlying strong abyssal currents (Fig. 21.16b, d).

At the location where the main core of the Gulf Stream separates from the steeper continental slope in all three simulations, the underlying abyssal current also bifurcates (Fig. 21.16b, d, f). One branch continues westward along the continental slope until most of it crosses southward under the Gulf Stream along the western slope of a valley west of 72°W, while the second branch continues southward under the Gulf Stream east of 72°W, again along the western slope of a valley. In the process both abyssal currents cross isobaths from 2,600 to 3,600 m depth beneath the Gulf Stream, in accord with the theory of Hogg and Stommel (1985), before all (part) of them join a southwestward abyssal current along the 3,200–3,600 m isobaths that flows nearly antiparallel along the southeastern side of the simulated Gulf Stream in the Atlantic (global) simulations.

East of 71°W there is evidence that abyssal currents play a role in splitting the main jet, with the northern branch continuing eastward along 38–39°N and the southern branch forming a large mean meander in the 1/12° Atlantic (Fig. 21.16a, b) and 1/25° global HYCOM (Fig. 21.16c, d) simulations. In the 1/12° global HYCOM simulation there is mainly a simple mean meander centered near 68°W (Fig. 21.16e). In all three examples this is a region of high SSH variability with evidence of eddy generation on the south side of the stream (Fig. 21.17a, c, e) and high abyssal EKE (not shown). In addition, there is an eddy-driven abyssal gyre centered over a westward trough in the topography near 36.7°N, 68°W (Fig. 21.16b, d, f). In the 1/12° global HYCOM simulation (Fig. 21.16e, f) the Gulf Stream approaches the trough from the west at a slightly lower latitude than the other two simulations and the abyssal gyre is centered directly beneath the
mean meander in a baroclinic relationship such that the east side of the gyre tends to advect the pathway northward, the northern and southern sides are antiparallel and parallel, and the western side tends to advect the pathway southward. In the other two simulations (Fig. 21.16a–d) the northern side of the abyssal gyre lies within the path of the approaching Gulf Stream and the western side of the abyssal gyre splits the jet, advecting the southern part southward while the northern side continues eastward.
21.3.5 Simulation with Premature Separation from the Western Boundary

The 1/12° Atlantic NEMO simulation exhibits premature separation from the western boundary near 34°N (Fig. 21.18a). Like the 1/12° Atlantic HYCOM simulation with an overshoot pathway in Sect. 21.3.4, the Gulf Stream core speed near separation is toward the high end of the observational evidence (1.9 m/s). However, in the latitude range of the separated jet, the AMOC (Fig. 21.18c) is weaker than in the MICOM simulation and the 1/12° Atlantic HYCOM overshoot simulation and similar to most of the other simulations with a realistic or overshoot pathway. The linear wind-driven contribution to the simulated Gulf Stream pathway again indicates a tendency for an overshoot pathway (Fig. 21.10a), but a tendency that is weaker than for the simulations in Sect. 21.3.4. In addition, most of the southward transport of the AMOC lies between ~1,500 and 3,000 m south of 43°N, a depth structure largely set by the southern boundary (not shown). Also, within the latitude range of Fig. 21.18a a relatively large amount of deep water formation occurs compared to the HYCOM and MICOM simulations.

In Fig. 21.18d the relationship between the mean abyssal circulation (below 2,800 m) and the simulated Gulf Stream pathway is obvious: a mean abyssal current crosses under the Gulf Stream where it separates from the western boundary. In the process the abyssal flow crosses the 3,300–4,400 m isobaths. The cross-under is fed by an abyssal current along the ~3,200–3,600 m isobaths as well as abyssal currents along continental slope isobaths shallower than 2,800 m. A similar abyssal current along the 3,200–3,600 m isobaths is seen in the 1/12° Atlantic HYCOM simulation (Fig. 21.16b), but in that case the current lies east of the overshooting Gulf Stream pathway. The 1/12° Atlantic NEMO simulation is missing abyssal currents along the 3,100 and 4,100–4,400 m isobaths that crossed under the Gulf Stream in the MICOM simulation (Fig. 21.13a). It is missing the latter near 68.5°W as a consequence of its shallow southward flow in the AMOC. In this example the simulated Gulf Stream pathway after separation parallels the underlying abyssal current eastward to ~65°W. There, an anticyclonic abyssal gyre splits off the southern edge of the model Gulf Stream to form part of the eastern boundary of a southern recirculation gyre. Between 71° and 70°W the northern edge of the model Gulf Stream is split off by the eastern edge of an elongated cyclonic abyssal gyre.
Fig. 21.18 Means from 1/12° Atlantic NEMO over 2004–2006 (Table 21.1), a simulation with premature separation from the western boundary: (a) SSH, (b) SSH variability, (c) AMOC streamfunction, and (d) abyssal currents, isotachs, and depth contours. Sep v = 1.9 m/s
The split-off current subsequently turns east-northeastward to become a northern branch of the Gulf Stream flowing antiparallel to the underlying abyssal currents.
21.3.6 Simulations with a Pathway Segment Too Far South After Separation at Cape Hatteras

The simulations in this section are characterized by a Gulf Stream pathway segment that is too far south after separation from the western boundary at the observed separation latitude, as seen in the mean SSH (Fig. 21.19) and the SSH variability (Fig. 21.20). The simulated Gulf Stream pathway in Fig. 21.19c actually exhibits premature separation (discussed in Sect. 21.3.5), but is included here because it is a near twin of the simulation in Fig. 21.19d. Figure 21.19a is a mean over years 4–6 from the same 1/12° global HYCOM simulation that developed an overshoot pathway in years 9–10 (Sect. 21.3.4, Fig. 21.16e). Figure 21.19b is from a 1/10° Atlantic simulation by the Los Alamos POP model. The simulations in Fig. 21.19c, d are from 1/12° global HYCOM later in a chain of simulations that included the one with a realistic Gulf Stream pathway and unrealistic dynamics (Sect. 21.3.3, Fig. 21.15a) and a weak AMOC (not shown). The interannually-forced simulations in Fig. 21.19c, d are nearly identical except that one includes external and internal tides with 8 tidal constituents (Arbic et al. 2010; Fig. 21.19d), while the other excludes tides (Fig. 21.19c), as do all the other simulations considered here.

Like the simulation with premature separation in Sect. 21.3.5 (Fig. 21.18a), all four of the simulations in Fig. 21.19 have abyssal current flow crossing to deeper depths under the Gulf Stream where it separates from the western boundary (Fig. 21.21), but slightly farther north in the simulations with separation at Cape Hatteras (Fig. 21.19a, b, d). In each case cross-under currents feed into an eddy-driven abyssal gyre centered near or slightly south of 35°N, 72°W over relatively flat topography, as in Fig. 21.18d (simulation with premature separation) and Fig. 21.15c (simulation with a realistic Gulf Stream pathway but unrealistic dynamics). In Fig. 21.15c this gyre is substantially weaker than the others. The abyssal gyres in Fig. 21.21 underlie a surface gyre along the southern edge of the Gulf Stream (Fig. 21.19) and are associated with high SSH variability (Fig. 21.20) and high abyssal EKE (not shown), evidence of strong baroclinic instability. In relation to the underlying abyssal currents, the subsequent pathway eastward to ~65°W in three of the simulations (Figs. 21.19a, b, d and 21.21a, b, d) is quite similar to that in the simulation with premature separation (Fig. 21.18). In Fig. 21.19c the Gulf Stream bifurcates near 69°W, with the northern pathway becoming weak and diffuse in the mean and generally paralleling abyssal currents (Fig. 21.21c). The southern pathway shows a dip driven by converging abyssal currents near 71°W, with the upper ocean current subsequently flowing antiparallel to the underlying abyssal current until the southern part is steered southward by an abyssal current near 68°W to form the eastern edge of a southern recirculation gyre lobe.
Fig. 21.19 Mean SSH from simulations with a Gulf Stream pathway segment too far south after separation from the western boundary. a 1/12° global HYCOM-18.0, years 4–6. b 1/10° Atlantic POP-14x, 1998–2000, discussed in Hecht et al. (2008). c 1/12° global HYCOM-9.7, 2004–2007, and d 1/12° global HYCOM-14.1, 2004–2007, a twin of (c) with the addition of external and internal tides. Sep v = 1.48, 1.85, 1.3, and 1.52 m/s in a–d, respectively. (See Table 21.1)
Fig. 21.20 SSH variability from the same four simulations as Fig. 21.19
Fig. 21.21 Mean abyssal currents overlaid on isotachs and depth from the same four simulations as Fig. 21.19. The depth contours in the HYCOM simulations (a, c, d) are at 200 m intervals below 3,000 m as before, but the POP simulation (b) has full cell topography. Thus the bathymetric contours mark 250 m step boundaries at and below 3,000 m, not depths. Below 3,000 m the regions between step boundary contours are plateaus of constant depth, which are depth-labeled accordingly
A comparison of the simulations with and without tides indicates a modest improvement with the addition of tides. The tides slightly strengthen the AMOC and deepen the southward flow (Fig. 21.22d with tides vs. Fig. 21.22c). The result is a relatively weak but well-defined DWBC along the continental slope versus none east of 70°W in the simulation without tides within the depth range plotted. Surprisingly, the 1/12° HYCOM (Figs. 21.19–21.22, panel a) and 1/10° POP (Figs. 21.19–21.22, panel b) simulations exhibit greater similarity than the simulations with and without tides (Figs. 21.19–21.22, panels c and d), despite the differences in model design and atmospheric forcing (linear wind-driven simulation for HYCOM in Fig. 21.10c and for POP in Fig. 21.10f).
Fig. 21.22 Mean AMOC streamfunction from the same four simulations as Fig. 21.19
HYCOM is a hybrid coordinate ocean model on a C-grid with an isopycnal interior and partial step topography, while POP is a z-level model on a B-grid with full step topography, the only model in this chapter with a B-grid or full step topography. In the other simulations the topographic contours are depth contours with a 200 m contour interval, while at and below 2,750 m the contours in the POP simulation mark the boundary between steps at 250 m intervals and the regions between contours are plateaus of constant depth. Despite these differences the mean Gulf Stream pathway within the segment of interest (Fig. 21.19a, b) and the mean abyssal circulation (Fig. 21.21a, b) are very similar, taking into account the tendency for abyssal currents to concentrate along the step boundaries in the full cell topography (Fig. 21.21b). The strength of the AMOC in the latitude range of Fig. 21.10 is similar, but the southward flow is shallower in HYCOM and there is more deep water formation in this latitude range in POP, as in NEMO, the other z-level model.

Most of the simulations depict a pair of eddy-driven cyclonic gyres (both observed, Figs. 21.5 and 21.6), the western one centered near 37°N, 71°W over sloping topography and the eastern one centered near 36.7°N, 68°W over a westward trough in the topography, the no-tides simulation in Fig. 21.21c being a notable exception
by lacking both. In the simulations with a realistic Gulf Stream pathway and realistic dynamics (Figs. 21.2a, c–e and 21.11), the western gyre lies directly beneath the Gulf Stream in association with abyssal current cross-unders along the northern and western sides (Figs. 21.3a, c, e, 21.4a, and 21.13), except in the simulation with no DWBC (Fig. 21.3d). The same is true for the eastern gyre in the subset of these simulations that exhibit the observed cross-under (Fig. 21.5) near 68.5°W (mean SSH in Figs. 21.2a, c–e and 21.11a; mean abyssal currents in Figs. 21.3a, c–e, 21.4a, and 21.13a). Here the corresponding abyssal gyres in HYCOM and POP (Fig. 21.21a and b, respectively) are substantially stronger than in the other simulations (except for the eastern gyre in comparison to the simulations with an overshoot pathway) and the gyre currents are strongest along the northern and southern sides of the gyres. In addition, the western gyres are displaced ~1° to the south-southeast, a distance less than the southward displacement of the Gulf Stream pathways. Although the mean Gulf Streams are broader than the mean abyssal currents, they generally follow along the southern sides of these abyssal gyres, and the abyssal currents in both gyres cross isobaths to deeper depths where they flow toward the southern side of the stream, inhibiting its northward displacement to a more realistic latitude. In each simulation the western gyre has an associated cyclonic upper ocean gyre adjacent to the north side of the model Gulf Stream with the center of the surface gyre displaced ~1/2–1° northwest of the abyssal gyre center. In both simulations the abyssal current on the northern side of the eastern abyssal gyre crosses under the Gulf Stream to shallower depths, broadening the mean Gulf Stream pathway to the north, while the southern side of the abyssal gyre acts to help maintain a more southern pathway. In the 1/12° global HYCOM simulation the western abyssal gyre weakens over time, the upper cyclonic gyre on the north side of the stream dissipates, and the year 8 mean exhibits a realistic mean pathway. However, by years 9–10 this abyssal gyre has moved westward and nearly dissipated (Fig. 21.16f) and the simulation has developed an overshoot pathway (Fig. 21.16e).
21.3.7 Gulf Stream Pathways and Variability Upstream of Separation at Cape Hatteras

Two main types of Gulf Stream variability are observed in the South Atlantic Bight (SAB) upstream of the separation point at Cape Hatteras, smaller and larger amplitude meanders, as illustrated in Plate 1 of Glenn and Ebbesmeyer (1994b). Compared to the larger meanders, the smaller ones are characterized by ~3× shorter event time scales, 2× faster propagation speeds (35–60 km/day), and a shore-side meander amplitude that remains inshore of the 600 m isobath versus offshore of it for tens of kilometers (Bane and Dewar 1988). Both types of meander propagate northeastward from the vicinity of the Charleston Bump (Fig. 21.23) and often cyclonic eddies form on the inshore side of the meanders (Glenn and Ebbesmeyer 1994a, b). The small meanders are clearly illustrated by Xie et al. (2007, their Fig. 2) in a sequence of daily satellite SST snapshots over a 6-day period.
Fig. 21.23 Topographic map of the South Atlantic Bight (SAB). The Blake Bahama Outer Ridge, the Blake Escarpment, the Blake Nose, the Blake Plateau, and the Charleston bump are topographic features that influence the dynamics of the Gulf Stream and the DWBC. The topographic contour interval is 100 m
This variability has a 4–5 day time scale (Legeckis 1979; Glenn and Ebbesmeyer 1994b) and is a component of the observed SSH variability depicted in Fig. 21.8 as a narrow band beginning south of 32°N and extending along the Gulf Stream pathway past Cape Hatteras, a trajectory observed for individual eddy-meander features (Glenn and Ebbesmeyer 1994a, b). The SSH variability exceeds 20 cm near 32°N and is 15–20 cm along the Gulf Stream pathway downstream until larger variability occurs past the separation point. Such variability is seen in nearly every SSH variability panel in Sects. 21.3 and 21.4 except for simulations by 1/25° global HYCOM (e.g. Figs. 21.12d and 21.17b), where there is a slightly broader band of higher variability, and simulations where there is a broad band of much higher variability near the separation point (e.g. Figs. 21.15b and 21.20c). The observed variability has been attributed to barotropic and baroclinic instability, especially near the Charleston bump, based on observational evidence (Bane and Dewar 1988) and modeling studies. For example, Xie et al. (2007) investigated the effects of coastline curvature and the Charleston bump and found that both enhanced these Gulf Stream flow instabilities in the SAB.
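The propagation speeds quoted above (35–60 km/day for the small meanders) come from observational studies. For model output, a comparable first guess can be obtained from the lag of maximum cross-correlation between SSH (or front-position) time series at two points a known along-stream distance apart. The sketch below is illustrative only, with assumed inputs; it is not the method used in the cited studies.

import numpy as np

def propagation_speed(eta_upstream, eta_downstream, separation_km, dt_days=1.0):
    # eta_upstream, eta_downstream: 1-D time series (e.g. daily SSH) at
    # two points along the stream, separated by separation_km.
    # Returns a rough propagation speed in km/day from the lag of
    # maximum cross-correlation (assumes a reasonably long record).
    a = eta_upstream - eta_upstream.mean()
    b = eta_downstream - eta_downstream.mean()
    lags = np.arange(1, len(a) // 2)
    corr = [np.corrcoef(a[:-lag], b[lag:])[0, 1] for lag in lags]
    best_lag = lags[int(np.argmax(corr))]
    return separation_km / (best_lag * dt_days)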
Fig. 21.24 a–c Mean currents (arrows) and isotachs (in color) overlaid on topographic contours in the SAB from 1/25° global HYCOM-4.0, years 5–8 (Table 21.1): a Near surface currents at ~25 m depth. b Depth average to the bottom starting from a depth of ~300 m near the inshore edge of the Gulf Stream and a depth of ~800 m seaward of the Gulf Stream in order to depict near bottom flow over the 400–850 m depth range of the Charleston bump and the Blake Plateau. c Depth average over ~2,000 m to the bottom to depict abyssal currents along the Blake Escarpment and over the Blake Bahama Outer Ridge, and d like b with isotachs replaced by EKE. The reference current vector lengths are 1 m/s in a and 0.25 m/s in b–d. The color contour intervals are 12 cm/s in a, 3 cm/s in b, 1 cm/s in c, and 40 cm²/s² in d. The topographic contour interval is 200 m
The unrealistically high variability in 1/25° global HYCOM is driven by excessive flow instabilities over the Charleston bump and the adjacent Blake Plateau, which has nearly flat topography near 800 m depth (Fig. 21.23). Signatures of the relative strength of baroclinic instability are evident by comparing near bottom EKE and mean flow from 1/25° global HYCOM (EKE in Fig. 21.24d and mean flow in Fig. 21.24b) with corresponding results from a near twin 1/12° global HYCOM simulation (EKE in Fig. 21.25d and mean flow in Fig. 21.25b).
Fig. 21.25 Same as Fig. 21.24 except that results are from 1/12° global HYCOM-18.0, years 4–6 (Table 21.1)
The latter does not exhibit excessive SSH variability in the region (SSH variability from 1/25° global HYCOM in Fig. 21.12d versus 1/12° global HYCOM in Fig. 21.20a over the same model years used in Figs. 21.24 and 21.25). The near bottom EKE is much higher in the 1/25° global HYCOM simulation and extends farther southward over the Blake Plateau. In addition, there is a closed eddy-driven bottom gyre not seen in the 1/12° simulation. Both the 1/12° and 1/25° simulations exhibit a small mean offshore meander immediately downstream of the Charleston bump where the inner edge of the mean currents temporarily follows a deeper isobath (400 m) (1/25° in Fig. 21.24a and 1/12° in Fig. 21.25a).

The second type of variability is larger amplitude meanders similar to those seen in Fig. 21.26a, c, but in observations (e.g., Bane and Dewar 1988; Glenn and Ebbesmeyer 1994b, their Plate 1; Legeckis et al. 2002, their Fig. 5) such features are transient on time scales up to a few months, whereas Fig. 21.26 depicts means over year 3 of the 1/25° global HYCOM simulation. Animations of SSH, used to monitor the simulated variability, also show the larger meanders occurring as transients in other years of the 1/25° simulation and in the near twin 1/12° simulation, where the amplitude is smaller. In addition, the animations show highly variable meandering in year 3 of the 1/25° simulation (Fig. 21.26b).
Fig. 21.26 Simulation of an unrealistic mean meander upstream of Gulf Stream separation from 1/25° global HYCOM-4.0, year 3 (Table 21.1): a Mean SSH. b SSH variability, and c–e Mean currents (arrows) and isotachs (color) overlaid on topographic contours in the SAB. c Near surface currents at ~25 m depth and d, e Depth averages from d ~2,000 m to the bottom and e ~3,000 m to the bottom. The color contour intervals are a, b 5 cm, c 12 cm/s, and d, e 1 cm/s.
amplify between the northern end of the Charleston bump near 32°N and ~33°N in a region where the northern boundary of the Blake Bahama Outer Ridge separates from the Blake Escarpment near 2,000 m depth (Fig. 21.23). Unrealistic mean meanders, such as that seen in Fig. 21.26a, c, occasionally occur in simulations by a variety of ocean models, including the z-level Los Alamos POP model (Smith et al. 2000), the MICOM isopycnal model (Chassignet and Marshall 2008, their Fig. 9), NLOM (Hurlburt and Hogan 2008), and here HYCOM. They have been controlled by increasing coefficients of biharmonic dissipation (Smith et al. 2000; Chassignet and Marshall 2008) or by injecting the DWBC northern boundary inflow along deeper isobaths in simulations like those in Sect. 21.2.

This type of unrealistic meander (e.g., Fig. 21.26a, c) occurs when there are strong mean abyssal currents along the 2,700–3,200 m isobaths where the Blake Bahama Outer Ridge separates from the Blake Escarpment. These currents flow southeastward along the north side of the Blake Bahama Outer Ridge near the ridge crest. In the process they cross under the Gulf Stream and advect it off shore (Fig. 21.26c–e). The cross-under to deeper depths facilitates the advection process because it allows the abyssal currents to better follow the north side of the deepening ridge crest, as can be seen by comparing Fig. 21.26d, e, the latter limited to deeper depths. In particular, compare the current along the northeastern side of the ridge from the location where it separates from the Blake Escarpment to the 3,000 m isobath where part of the abyssal current turns southwestward from the ridge crest to the Blake Nose and flows along the 2,800–3,200 m isobaths, approximately antiparallel to the outer edge of the Gulf Stream. Here the 3,000 m isobath lies near the outer edge of the Gulf Stream mean flow that continues to the northeast. Note that, in comparison to the abyssal current responsible for offshore advection of the Gulf Stream, the corresponding mean current over years 5–8 from the same simulation (Fig. 21.24c) is much weaker, and in the 1/12° simulation (Fig. 21.25c) it is nearly absent. Also note that the Blake Nose is a junction where abyssal currents following a variety of pathways along different isobaths meet to form a strong DWBC flowing southward along the steep slope of the Blake Escarpment (Figs. 21.24c, 21.25c, and 21.26d, e).

The conduit for abyssal currents from the ocean interior, discussed in Sect. 21.2 and earlier subsections of Sect. 21.3, rejoins the abyssal flow along the continental slope near 33°N. Generally, this flow remains seaward of the Gulf Stream. However, in this case (Fig. 21.26c–e), it crosses under the offshore loop of the Gulf Stream simulation. As it flows southwestward on the north side of the loop, it crosses to shallower depths beneath the Gulf Stream and acts to advect its pathway shoreward, keeping it adjacent to the western boundary just upstream of separation at Cape Hatteras. This occurs not only because of the abyssal current pathway from the interior, but also because of the westward intrusion of relatively deep isobaths to the base of a very steep segment of the continental slope. Chassignet and Marshall (2008, their Fig. 9) also depict the mean Gulf Stream loop returning to the western boundary prior to separation. As the cross-under abyssal current continues to the south, the speed of the current greatly increases in a confluence along the steep northern slope of the Blake Bahama Outer Ridge, demonstrating the potential for even larger amplitude meanders with a lobe extending southeastward. That did not occur in the case of the loop simulated in Fig. 21.26a, c because the abyssal currents along the 2,800–3,200 m isobaths did not advect it far enough off shore for the deeper abyssal currents to continue the offshore advection process.

It has been suggested that the development of large amplitude meanders in the SAB is triggered by Gulf Stream interaction with cold core rings from the Sargasso Sea, rather than offshore deflections of the Gulf Stream at the Charleston bump (Glenn and Ebbesmeyer 1994b). The 10-year SSH animations from both the 1/12° and 1/25° global HYCOM simulations, discussed in this subsection, show such interactions occurring. Year 8 of 1/25° global HYCOM even demonstrates the generation of a strong modon when a strong anticyclonic eddy adjacent to the elbow of a large meander triggers the shedding of a cyclonic eddy from the elbow on the north side of the anticyclonic eddy. The modon subsequently propagated eastward to 62°W centered along ~28°N. A modon generated in a similar manner is illustrated near 27°N, 69°W in Hurlburt and Hogan (2000, their Fig. 4d). Although Gulf Stream–eddy interaction was involved in the development and evolution of some large meanders, many of them developed without such interaction.
This result indicates that flow instabilities in the vicinity of the Charleston bump are sufficient to generate such meanders (near bottom EKE in Figs. 21.24d and 21.25d and near bottom mean currents in Figs. 21.24b and 21.25b over the Charleston bump and adjacent Blake Plateau), with a possible contribution from abyssal current advection of the Gulf Stream pathway where the north slope of the Blake Bahama Outer Ridge separates from the Blake Escarpment near 33°N.
21.4 Impact of Data Assimilation on Model Dynamics in the Gulf Stream Region

We can investigate the impact of data assimilation on the Gulf Stream pathway and dynamics using a set of near-twin experiments with 1/12° global HYCOM. Each of these experiments starts from an initial state forced by an ECMWF climatology for surface momentum and heat, but the starting points of the two experiments differ. The first set of twin experiments starts from a spin up forced by an ERA-15 climatology, while the second set starts from a spin up forced by an ERA-40 climatology with the 10 m winds increased using QuikSCAT wind speed statistics (Kara et al. 2009). The interannually forced simulations use Navy Operational Global Atmospheric Prediction System (NOGAPS) fluxes, in the first experiment with an adjustment of the mean winds to the ECMWF climatology and in the second experiment with a scaling of the 10 m wind speeds using QuikSCAT wind speed statistics but without the adjustment to the ECMWF climatology. For this discussion, the mean is taken over the last 3 years (2004–2006) of the first experiment and over one year (06/2007 to 05/2008) of the second experiment.

Two data assimilative hindcasts (data assimilative model runs not performed in real time) are used in this study. In both hindcasts, the data are assimilated via the Navy Coupled Ocean Data Assimilation system (NCODA, Cummings 2005) using multivariate optimal interpolation. Both use the same atmospheric forcing as the corresponding non-assimilative simulation. The difference between the data assimilation in the two hindcasts is the treatment of the along-track altimetric sea surface height anomalies (SSHAs). The first hindcast, designated C-H assimilation, is a twin of the 4 year non-assimilative interannually forced simulation with the SSH updates extended into the ocean interior by adjusting the layer thickness, as proposed by Cooper and Haines (1996). The SSH updates are obtained (a) by adding the altimetric SSHAs to a model-based mean SSH and then (b) using NCODA to perform an SSH analysis with a model forecast as the first guess. For this purpose, the mean SSH from the ERA-15 forced climatological simulation, used to initialize the hindcast, is adjusted to the observed mean Gulf Stream pathway via a rubber-sheeting technique (Smedstad et al. 2003). The last three years of the corresponding simulation and hindcast are used for analysis. The second hindcast, designated MODAS assimilation, uses the Modular Ocean Data Assimilation System (MODAS, Fox et al. 2002; Barron et al. 2007) to extend the SSHA into the ocean interior via synthetic profiles of temperature and salinity, with the mean surface dynamic height coming from the MODAS climatology. It is a twin of the 2007–2008 simulation. Only the last year of the 1.5 year hindcast is used for analysis. For more information on the 1/12° global HYCOM prediction system see Hurlburt et al. (2008a) and Chassignet et al. (2009).
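A minimal sketch of the bookkeeping described above, combining along-track altimetric SSH anomalies with a model-based mean SSH to form the total SSH values passed to the analysis, may help fix ideas. The function and array names below are illustrative assumptions only; they are not part of NCODA, HYCOM or MODAS, and an operational system would use careful interpolation and quality control.

```python
import numpy as np

def ssh_observations(ssha_track, track_lat, track_lon, grid_lat, grid_lon, mean_ssh):
    """Add along-track altimetric SSH anomalies (SSHAs) to a model-based mean SSH.

    ssha_track           : 1-D array of along-track SSH anomalies (m)
    track_lat, track_lon : 1-D arrays of observation positions (degrees)
    grid_lat, grid_lon   : 1-D monotonic arrays of the model grid axes
    mean_ssh             : 2-D model-based mean SSH (m), here assumed to have been
                           adjusted toward the observed mean Gulf Stream pathway
    Returns the total SSH "observations" that an analysis scheme would compare
    with the model forecast first guess.
    """
    # Crude grid-index lookup of the mean SSH at the track positions;
    # a real analysis would interpolate and quality control the data.
    i = np.clip(np.searchsorted(grid_lat, track_lat), 0, len(grid_lat) - 1)
    j = np.clip(np.searchsorted(grid_lon, track_lon), 0, len(grid_lon) - 1)
    return ssha_track + mean_ssh[i, j]
```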
21.4.1 Interannually Forced Simulation with a Weak Gulf Stream

The mean velocity in layer 6, ~25 m deep, for each of the four simulations is shown in Fig. 21.27 along with the 15-year mean IR northwall pathway. The interannual simulation, initialized from the ERA-15 spin up, generates a weak Gulf Stream (Fig. 21.27a). The Eulerian mean core speed south of the separation point is 1.1 m/s and it decreases rapidly to the east, becoming <0.4 m/s near 72°W. The weak Gulf Stream is associated with weak mean abyssal currents, as shown in Fig. 21.28a. The key southward abyssal current at 72°W is weak with a speed <0.4 cm/s and displaced to the south. The observed key current at 68.5°W is absent, as are two observed deep cyclonic gyres (Figs. 21.5 and 21.6). The strongest abyssal flows
Fig. 21.27 Mean velocities in layer 6, ~25 m, with the 15-year mean Gulf Stream northwall pathway ±1σ by Cornillon and Sirkes overlaid in red and the bathymetry contoured at 200 (500) m intervals at depths >(<) 3,000 m from four 1/12° global HYCOM simulations or hindcasts: a Interannually forced weak Gulf Stream with separation velocity of 1.1 m/s, 1/12° global HYCOM-5.8. b Interannually forced stronger Gulf Stream with separation velocity of 1.4 m/s, 1/12° global HYCOM-19.0. c Cooper and Haines (1996) data assimilative twin, 1/12° global HYCOM-60.5, of the weak Gulf Stream simulation, and d MODAS synthetic temperature and salinity profile data assimilative twin, 1/12° global HYCOM-74.2, of the stronger Gulf Stream simulation (see Table 21.1).
Fig. 21.28 Mean depth averaged velocities in layers 27 to 29, below ~3,000 m depth, with the 15-year mean Gulf Stream northwall pathway ±1σ by Cornillon and Sirkes overlaid in red and the bathymetry contoured at 200 (500) m intervals at depths >(<) 3,000 m from the same four 1/12° global HYCOM simulations or hindcasts as Fig. 21.27: a Interannually forced weak Gulf Stream with separation velocity of 1.1 m/s. b Interannually forced stronger Gulf Stream with separation velocity of 1.4 m/s. c Cooper and Haines (1996) data assimilation twin of the weak Gulf Stream and d MODAS synthetic temperature and salinity profile data assimilation twin of the stronger Gulf Stream.
are found in an anticyclonic gyre near (36°N, 66°W), which steers the Gulf Stream slightly northward. Near Cape Hatteras, the mean Gulf Stream shows two pathways, one overshooting the separation point and clinging to the continental slope while the other, carrying most of the flow, turns almost due east. After separation, the mean pathway lies southward of the mean IR pathway. The AMOC is also weak and shallow, with a transport less than 11 Sv (Fig. 21.29a). Evidence for weak baroclinic instability can be found in (1) the large area of high SSH variability west of 70°W (Fig. 21.30a), (2) a weak southern recirculation gyre west of 70°W (Fig. 21.27a), (3) the eddy-driven mean abyssal gyre centered directly beneath the surface gyre over relatively flat topography (Fig. 21.28a) and (4) the associated deep EKE (not shown). The separating Gulf Stream pathway lies along the northern edge of the associated recirculation gyres. The location of the eddy-
Fig. 21.29 AMOC streamfunction (in Sv) north of 24°N from the same four 1/12° global HYCOM simulations or hindcasts as Fig. 21.27: a Interannually forced weak Gulf Stream with separation velocity of 1.1 m/s. b Interannually forced stronger Gulf Stream with separation velocity of 1.4 m/s. c Cooper and Haines (1996) data assimilation twin of the weak Gulf Stream and d MODAS synthetic temperature and salinity profile data assimilation twin of the stronger Gulf Stream.
driven mean abyssal gyre relative to the southern edge of the separating jet and the location of the northwesternmost flat topography suggest a topographic role in the separating pathway driven by baroclinic instability of the separating jet.
21.4.2 Interannually Forced Simulation with a Stronger Gulf Stream

The interannual simulation using the ERA-15 spin up and NOGAPS winds yields a Gulf Stream that is too weak. Kara et al. (2009) found that the QuikSCAT winds are highly
Fig. 21.30 Standard deviation of the sea surface height from the same four 1/12° global HYCOM simulations or hindcasts as Fig. 21.27: a Interannually forced weak Gulf Stream with separation velocity of 1.1 m/s. b Interannually forced stronger Gulf Stream with separation velocity of 1.4 m/s. c Cooper and Haines (1996) data assimilation twin of the weak Gulf Stream and d MODAS synthetic temperature and salinity profile data assimilation twin of the stronger Gulf Stream.
correlated with the NOGAPS NWP winds, but that significant errors in the strength of the winds exist. Kara et al. propose a regression-based correction for the NWP winds. This correction is applied to a new model spin up using the ERA-40 climatology to generate a new initial condition for an interannually forced simulation based upon regression-corrected NOGAPS winds. Only one year, from June 2007 to May 2008, is available for analysis. The mean Gulf Stream is stronger and the path much more realistic than in the simulation described in Sect. 21.4.1. The core speed in layer 6 (~25 m) near the mean separation point is 1.4 m/s and remains greater than 1 m/s at 70°W, with the speed decreasing rapidly to the east of 70°W (Fig. 21.27b). A strong recirculation gyre is centered at (34.5°N, 72°W). The abyssal flow shown in Fig. 21.28b is stronger than in the uncorrected NOGAPS forced simulation with temporal mean winds from ERA-15. The key abyssal current at 72°W and the associated cyclonic gyre are present, but the key abyssal current near 68.5°W is absent. The eddy-driven abyssal circulation, with speeds <8 cm/s, flows under the Gulf Stream near 36°N, 72°W, helping to steer the flow southward into the recirculation gyre, while the northward flow in the cyclonic abyssal gyre steers the Gulf Stream northward at 70°W. The AMOC is very shallow and relatively weak, with transport less than 16 Sv (Fig. 21.29b). The SSH variability in Fig. 21.30b shows a strong recirculation gyre, but little eddy activity east of 60°W. The pathway in this simulation is more realistic, but still inconsistent with the observed SSH variability and the relevant key abyssal currents, and the AMOC is relatively weak and shallow. This simulation is quite similar to one discussed in Sect. 21.3.3 with a realistic pathway but unrealistic dynamics.
21.4.3 Cooper-Haines Data Assimilation

The data assimilative hindcast starts from the end of the ERA-15 climatology simulation, using NCODA to assimilate satellite SST, the SSH updates, and in situ temperature and salinity profiles, and using C-H for downward projection of the SSH updates. The assimilation generates a mean Gulf Stream that follows the IR path from the coast out to 68°W, as seen in Fig. 21.27c. East of 68°W, the flow diverts southward of the IR path, turning sharply northward and splitting as the Stream crosses the New England Seamount Chain (NESC) at 64°W. The SSH variability shown in Fig. 21.30c reproduces all of the features found in the observed altimetric SSH variability shown in Fig. 21.8, an expected result since these data are assimilated. The Eulerian mean core speed of the Gulf Stream at the separation point near Cape Hatteras is relatively weak at only 1.1 m/s, about the same as in the non-assimilative simulation. The assimilative Gulf Stream is much stronger to the east, with Eulerian mean speeds of 0.8 m/s at 70°W and 0.6 m/s at 65°W.

A surprising result is the strong abyssal circulation in the hindcast shown in Fig. 21.28c. The key abyssal currents at 72°W and 68.5°W are present with strengths of 10 cm/s and 8 cm/s respectively. The southward flow at 72°W is associated with a cyclonic gyre. As a consequence of the C-H downward projection of the SSH updates, the AMOC is strengthened, with transport greater than 18 Sv, and the southward branch of the AMOC is stronger at deeper depths than in the non-assimilative simulation (Fig. 21.29c vs. 21.29a). The result is mean abyssal currents along deeper isobaths of the continental slope. Despite a weak Gulf Stream at Cape Hatteras, data assimilation generates a vigorous eddy field that drives a strong abyssal circulation. The eddy-driven contribution to the mean abyssal circulation is the result of vortex stretching and compression associated with the data-assimilative approximation to the observed meandering of the Gulf Stream and associated eddies. In Fig. 21.28c the stronger and deeper mean abyssal currents along the continental slope feed into the abyssal currents that cross under the Gulf Stream near 69° and 72°W and interact with the eddy-driven abyssal circulation.

Although we can demonstrate that the data assimilative hindcast has a mean abyssal circulation in accord with available observational evidence, and a theory for Gulf Stream pathway dynamics in this region that is also supported by the observational evidence, no observational evidence is available to determine whether or not the same is true for the time dependent evolution of the abyssal circulation. The results of Hurlburt (1986) suggest that this could be occurring. In a set of two-layer model experiments with the SSH field updated every 20 or 30 days from a non-assimilative control run, the abyssal circulation converged toward the time dependent abyssal circulation of the control run, even in the most challenging examples with strong baroclinic-barotropic instability and a flat bottom.
21.4.4 MODAS Data Assimilation

The second data assimilative example starts on 1 June 2007, taking its initial condition from the ERA-40 QuikSCAT-scaled simulation. The assimilation is performed through NCODA with the SSHA extended into the ocean interior using synthetic profiles of temperature and salinity from MODAS. The mean Gulf Stream follows the IR path extremely well to the east past the NESC to 62°W, as shown in Fig. 21.27d. The Eulerian mean core speed near Cape Hatteras is weak, only 1.0 m/s. However, the core speed reaches a maximum of 1.2 m/s at 72°W and exceeds 0.65 m/s at 65°W. The SSH variability reproduces the observed altimetric SSH variability (Fig. 21.30d). The eddy driven abyssal circulation is strong, with the key southward abyssal currents exceeding 10 cm/s. Each of the key currents is associated with a strong cyclonic gyre, as shown in Fig. 21.28d. The AMOC is the strongest, exceeding 20 Sv, and its southward branch extends the deepest of the four simulations or hindcasts (Fig. 21.29d). Assimilating the MODAS synthetic profiles appears to generate the most realistic Gulf Stream system, with strong eddies along the entire path driving a strong abyssal circulation.
21.4.5 A Comparison of Model Forecasts to Hindcast States

The daily MODAS hindcasts (Sect. 21.4.4) from 48 different dates were used to initialize 14-day forecasts. The forecast skill of the HYCOM data assimilative model has been discussed in Hurlburt et al. (2008a, 2009). The assimilative model significantly beats persistence out to 14 days. However, the skill of the model is insignificant after about 10 days, with the median anomaly correlation between the forecast and the analysis for SSH dropping below 0.6 beyond 10 days in the Gulf Stream region. We can compare the effects of the model dynamics on the forecasts by using the differences between the forecast mean Gulf Stream and the analyses. In Fig. 21.31, the mean velocities in layer 6 (~25 m) and layers 27–29 (~3,000 m to the bottom) from 48 forecasts are shown. The 5-day forecast has appreciable skill with a median SSHA correlation of 0.8, but the 14-day forecast has little skill. In the forecasts, we find significant changes in the upper layer flow, but only modest changes in the abyssal circulation. The 5-day forecast still tracks the mean IR path west of the NESC (64°W), but the core speeds have decreased by approximately 0.1 m/s along the entire Stream. The core speeds in the 14-day forecast have decreased substantially, with the speed at 72°W dropping below 0.8 m/s. The path of the Gulf Stream is deflected southward around 68.5°W, presumably steered by the strong southward abyssal current at 68.5°W. The AMOC (not shown) for the 14-day forecast is slightly weaker and shallower than for either the 5-day forecast or the analyses. The Gulf Stream in the forecasts is still inertial, with the variability driven by the instability of the flow. However, the dynamics of the model are insufficient to maintain a strong flow eastward to the NESC and the forecast Gulf Stream weakens
Fig. 21.31 The mean velocities for the forecasts starting from the state estimates of the MODAS data assimilation, with the layer 6 (~25 m) velocities for the a 5-day forecast and the b 14-day forecast, and the layer 27 to 29 (below ~3,000 m depth) velocities for the c 5-day forecast and the d 14-day forecast. The 15-year mean Gulf Stream northwall pathway ±1σ by Cornillon and Sirkes is overlaid in red and the bathymetry is contoured at 200 m intervals.
over the 14-day period. The abyssal circulation appears to have a longer time scale, showing little change in the mean over the 14-day forecast period. Thus, the steering by the abyssal currents helps to maintain a reasonable pathway, but the dynamics cannot maintain the strength of the Gulf Stream.
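The forecast-versus-analysis verification summarized above can be sketched as follows: the median anomaly correlation of SSH is computed for each forecast lead time and compared with persistence (the analysis at the forecast start time held fixed). The array names and shapes are assumptions for illustration only, not the actual verification code used for the 48 HYCOM forecasts.

```python
import numpy as np

def anomaly_correlation(fcst, verif, clim):
    """Centred anomaly correlation of two fields relative to a climatology."""
    fa, va = fcst - clim, verif - clim
    return np.sum(fa * va) / np.sqrt(np.sum(fa ** 2) * np.sum(va ** 2))

def median_acc_by_lead(forecasts, analyses, clim):
    """forecasts, analyses : arrays of shape (n_starts, n_leads, ny, nx)
    clim                  : climatology of shape (ny, nx)
    Returns the median anomaly correlation per lead time for the forecasts
    and for persistence (the lead-0 analysis held fixed)."""
    n_starts, n_leads = forecasts.shape[:2]
    acc_fcst = np.empty((n_starts, n_leads))
    acc_pers = np.empty((n_starts, n_leads))
    for s in range(n_starts):
        persisted = analyses[s, 0]  # persistence "forecast"
        for k in range(n_leads):
            acc_fcst[s, k] = anomaly_correlation(forecasts[s, k], analyses[s, k], clim)
            acc_pers[s, k] = anomaly_correlation(persisted, analyses[s, k], clim)
    return np.median(acc_fcst, axis=0), np.median(acc_pers, axis=0)
```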
21.5 Summary and Discussion

Dynamical understanding and evaluation of current systems simulated by eddy-resolving OGCMs is an essential step in the development of accurate models to aid in a greater understanding of the ocean circulation and to improve ocean weather and climate prediction. Here the Gulf Stream is the subject of investigation, including examples with and without data assimilation, because it is a major current system that is challenging to simulate and understand and because Gulf Stream simulations have demonstrated great sensitivity to small changes in simulation design, such as subgrid-scale parameterizations and parameter values. Several non-assimilative OGCMs of different design have yielded realistic simulations, as illustrated here, but consistently realistic results have not been obtained from any of them. Instead, these OGCMs have also demonstrated a variety of similar flaws.

Key aspects of Gulf Stream dynamics were identified using a simpler eddy-resolving model. The model is purely hydrodynamic with only five Lagrangian layers in the vertical, but includes sufficiently realistic boundary geometry, bathymetry, wind forcing, and AMOC to (1) simulate the dynamics of Gulf Stream separation and its pathway to the east, (2) permit detailed comparison with relevant observations, and (3) allow a detailed explanation of the dynamics (Sect. 21.2). In brief form, an eddy-driven abyssal current, typically augmented by the DWBC, the local topographic configuration, and a Gulf Stream feedback mechanism constrain the latitude of the Gulf Stream near 68.5°W. Between the western boundary and ~70°W the Gulf Stream pathway closely follows a CAV trajectory. Neither part of this explanation is sufficient alone. Constraint of the Gulf Stream latitude near 68.5°W is not a sufficient explanation of the pathway between the western boundary and ~69°W. However, without assistance from this constraint, Gulf Stream simulations with realistic speeds at the core of the current are not sufficiently inertial to overcome the linear solution demand for a pathway that overshoots the observed latitude (and a CAV trajectory). The essential observational metrics are (a) agreement with the observed mean pathway, (b) agreement with the observed narrow band of high SSH variability between the western boundary and ~69°W to support the interpretation as a CAV trajectory, (c) realistic mean speed at the core of the current near separation from the western boundary (1.6–2.1 m/s), and (d) simulation of the key observed abyssal current near 68.5°W. Other observational and dynamical metrics are also useful. With the possible exception of a linear solution, starting with a simplified model is helpful but generally not a necessary step, as illustrated in the dynamical evaluation of OGCM simulations with several types of flawed dynamics.

The dynamics of simulations by four different eddy-resolving OGCMs (HYCOM, MICOM, NEMO, and POP) without data assimilation are evaluated in Sect. 21.3. The horizontal resolution and model domain range from 1/10° Atlantic to 1/25° global. Simulations by both the simplified model and the OGCMs demonstrate that it is possible to simulate a realistic Gulf Stream pathway with generally realistic dynamics without showing complete agreement with the relevant observational evidence, but with evidence of a possible sacrifice in simulation robustness, discussed later. In particular, three OGCM simulations yield a realistic Gulf Stream pathway, but do not simulate the key abyssal current near 68.5°W. Instead, two mean abyssal currents that cross under the simulated Gulf Stream near 65.5 and 67.5°W perform essentially the same function at the appropriate latitude, but without the observational evidence to confirm or refute their existence. Another simulation has a realistic mean Gulf Stream pathway, but fails in relation to the other observational metrics and thus simulates a realistic pathway with unrealistic dynamics.

The remainder of Sect. 21.3 addressed dynamical evaluation of simulations with common types of flawed pathways upstream and downstream of Gulf Stream
separation at Cape Hatteras. Downstream, the simulations may have premature separation, pathway segments west of roughly 67°W that are too far south, or pathways that overshoot the observed latitude west of ~69°W. Upstream, problems occur with unrealistic looping away from the coast and with excessive small amplitude meandering that propagates downstream of Cape Hatteras and unrealistically increases the variability in the region within a few degrees after separation.

The simulations demonstrate the greatest sensitivity to differences in AMOC strength and the depth structure of its southward abyssal flow, the abyssal currents in relation to the isobaths, the wind forcing, the inertial character of the separating jet, and the horizontal model resolution. In relation to pathway simulation skill, they demonstrate less sensitivity to differences in model design and model topography, even though there is great sensitivity to topographic features. The biggest problem occurs with the shallow vertical structure of the simulated AMOC southward abyssal flow and its relation to the isobaths because of its impact on the pathways of abyssal currents upstream and downstream of Gulf Stream separation. Upstream of separation, an unrealistically persistent Gulf Stream meander away from the coast can occur when strong mean abyssal currents follow isobaths in the depth range 2,700–3,200 m. Currents in that depth range separate from the Blake Escarpment and flow southeastward along the north slope of the Blake Bahama Outer Ridge near the ridge crest. In the process they cross under the Gulf Stream, advecting its pathway off shore, a problem occasionally seen in almost all of the models. Downstream of Cape Hatteras, flow along particular, relatively deep isobaths is required to generate the key abyssal current near 68.5°W or the alternatives near 67.5 and 65.5°W. These abyssal currents are very weak or missing in the simulations with premature separation, a pathway segment with a southern bias, or a realistic pathway with unrealistic dynamics. The depth of the AMOC-related abyssal currents along the isobaths can directly affect the Gulf Stream pathway through advection but can also affect the stability properties of the simulated Gulf Stream and the interaction between AMOC-driven and eddy-driven abyssal currents. Thus the hypersensitivity of Gulf Stream simulations to small changes, such as subgrid-scale parameterizations and parameter values, can be traced to a hypersensitivity to the location of abyssal currents in relation to the isobaths and to the need for flow along particular isobaths in order to constrain the latitude of the Gulf Stream near 68°W. This sensitivity is aggravated by the tendency of ocean models to simulate an AMOC with a southward abyssal limb that is too shallow.

The simulations with a southern pathway bias exhibit abyssal currents, partly originating along isobaths shallower than 3,000 m, which cross under the simulated Gulf Stream to deeper depths at the separation point. They also depict an eddy-driven mean abyssal gyre centered near 72°W over a band of flat topography, a gyre that abuts the southern edge of the Gulf Stream. The simulations exhibit additional eddy-driven abyssal gyres farther to the east along the Gulf Stream segment with a southern bias. Very similar gyres occur in a 1/10° Atlantic POP simulation and one of the 1/12° global HYCOM simulations and quite different gyres in the 1/12° Atlantic NEMO and two other 1/12° global HYCOM simulations.
The two strong cyclonic abyssal gyres along the north side of the Gulf Stream in the similar POP
and HYCOM simulations also contribute to the southern pathway bias in those simulations. In all five simulations the Gulf Stream pathway follows the underlying eddy-driven abyssal currents for about 10° to the east, resulting in a mean Gulf Stream pathway that is strongly influenced by flow instabilities and thus a pathway that is dynamically very different from that observed.

The three OGCM simulations with an overshoot pathway (all by HYCOM) succumb to the demands of linear dynamics for a pathway with a northern bias, with both the wind forcing and the AMOC contributing to that demand. The wind forcing product that forced the OGCM simulations with an overshoot pathway also yields the strongest tendency for an overshoot pathway based on linear dynamics. The simulation with the strongest overshoot has the second strongest AMOC and the most inertial Gulf Stream at the separation point (at the top end of observed speed). The strength of the AMOC in the other two overshoot simulations is typical, but the three simulations using the same model and wind forcing with a weak AMOC simulate a pathway with a southern bias or a realistic pathway with unrealistic dynamics. Although the overshoot simulations have abyssal current pathways that might constrain the Gulf Stream pathway, the 1/12° Atlantic MICOM simulation with a realistic Gulf Stream pathway has a much more robust abyssal current constraint on the Gulf Stream pathway in close agreement with observational evidence. It also has the strongest AMOC, with a southward limb that extends deeper than in the three HYCOM simulations, and a mean DWBC that includes observed strong flow along isobaths that feed into the key abyssal current near 68.5°W.

Bifurcations are a common occurrence in the Gulf Stream simulations and abyssal currents often play a role. Several of these roles are discussed in the section on overshoot pathways. In two of these simulations abyssal currents cause a major bifurcation of the Gulf Stream. More commonly they split off flow near the edge of the Gulf Stream and help define the eastern edge of a southern recirculation gyre. On the near-shore edge of the Gulf Stream an abyssal current constraint can also define a bifurcation between flow that separates from the shelf/slope and flow that becomes a shelf slope current. In cases where they create a bifurcation on the north side of the stream, often the result for the northern branch is a partial transition toward a pathway more consistent with linear dynamics, but a transition that can be limited by underlying abyssal currents.

Results from the simpler hydrodynamic model suggest that doubling the resolution from ~7 km to 3.5 km at mid-latitudes would reduce simulation sensitivity to the AMOC by increasing the inertial character of the simulated Gulf Stream and increasing the strength of the eddy-driven abyssal circulation. A comparison of the nearly twin 1/12° and 1/25° global HYCOM simulations yields some support for this finding. Each of these simulations was run for 10 years after initialization from climatology, an initial state used by all the OGCM simulations considered here. Starting in year 2, both develop a Gulf Stream pathway with a southern bias after separation from Cape Hatteras. Later both develop a realistic pathway, but ultimately both develop an overshoot pathway. However, the 1/12° simulation has a realistic mean pathway for only one year (year 8), while the 1/25° simulation has a realistic pathway for four years (years 5–8).
Both have shallower southward flow in the
AMOC and a weaker abyssal current constraint on the Gulf Stream pathway than the MICOM simulation, and with a 5% increase in the strength of the AMOC and especially conducive wind forcing, both ultimately develop an overshoot pathway.

Data assimilation has a strong impact on the Gulf Stream dynamics, especially on variables that are sparsely observed or, in some cases, not observed at all in real time (Sect. 21.4). Identical non-assimilative simulations are used as controls to help assess the impact of the data assimilation. Both of the control simulations have a mean Gulf Stream pathway with a southern bias and a weak AMOC with a southward limb that is too shallow. As a result they do not simulate flow along isobaths that yield observed abyssal currents (or their alternatives) that are relevant to a realistic Gulf Stream pathway.

Two data-assimilative hindcast experiments (state estimates performed in arrears) were carried out. A realistic mean Gulf Stream pathway is imposed by the mean sea surface height (SSH) that is added to the SSH anomalies from satellite altimeter track data, the key data type used to constrain the evolution of currents and eddies. The SSH updates are projected onto the stratified water column using two different techniques, Cooper-Haines in the first hindcast and synthetic temperature and salinity profiles in the second. As expected, the data assimilation improves the SSH variability. The mean strength and pathway of the Gulf Stream are strongly constrained by the mean SSH added to the SSH anomalies from altimeter data, but the resulting mean Gulf Stream is too weak. In the ocean interior the data assimilation increases the strength of the AMOC and the depth range of its southward abyssal flow.

A particularly salient result is the impact of the data assimilation on the mean abyssal currents, which are very different from the upper ocean currents and are not observed in the assimilated data set. Both hindcasts depict the relevant abyssal currents seen in historical in situ observations, including the key abyssal current near 68.5°W and flow along the continental slope feeding into it, as well as an observed cyclonic gyre farther to the west. These abyssal currents are well maintained in the mean of 48 14-day forecasts, although some weakening of an already weak Gulf Stream occurs and median forecast skill based on anomaly correlation >0.6 is only 10 days. The abyssal currents are generated via vortex stretching and compression when the assimilated SSH updates from altimeter data plus the mean SSH are projected downward. The data assimilation approximates the observed variations in the ocean features, such as current pathways and eddies, and in response the model dynamics interpolate and extrapolate the updates. The results indicate that in the process a more realistic AMOC generates more realistic abyssal currents along the continental slope, and that the representation of real variations in the Gulf Stream and related eddies in the upper ocean produces a model response that simulates flow instabilities well enough to generate realistic eddy-driven mean abyssal currents and maintain them in 14-day forecasts.
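The vortex stretching and compression mechanism invoked here can be summarised with the standard shallow-water potential vorticity relation for a layer of thickness $h$, relative vorticity $\zeta$ and Coriolis parameter $f$; this is a textbook relation rather than a diagnostic taken from the hindcast system itself:

$$\frac{D}{Dt}\!\left(\frac{\zeta + f}{h}\right) = 0 \quad\Longrightarrow\quad \frac{D(\zeta + f)}{Dt} = \frac{\zeta + f}{h}\,\frac{Dh}{Dt},$$

so a downward-projected SSH update that thickens (stretches) an abyssal layer spins up cyclonic relative vorticity, while thinning (compression) spins up anticyclonic vorticity; rectified over many assimilation cycles, such increments are consistent with the eddy-driven mean abyssal currents described above.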
Acknowledgements This work was supported by the project US Global Ocean Data Assimilation Experiment (GODAE): Global Ocean Prediction with the HYbrid Coordinate Ocean Model (HYCOM), funded under the National Ocean Partnership Program (NOPP); by the 6.1 project Global Remote Littoral Forcing via Deep Water Pathways, funded by the Office of Naval Research (ONR) under program element 601153N; and by grants of computer time from the US Defense Department High Performance Computing Modernization Program. Alan Wallcraft is in charge of developing and maintaining
the standard version of HYCOM and Ole Martin Smedstad the data assimilative experiments. The European high-resolution global ocean model was developed in France by Mercator Océan with the financial support of the European MERSEA integrated project for the development, validation, and exploitation of the system and from the Région Midi Pyrénées, which financed a dedicated computer for this project. The mean Gulf Stream northwall pathway, based on satellite infrared imagery, is an unpublished analysis performed by Peter Cornillon (University of Rhode Island) and Ziv Sirkes (deceased) for the ONR project Data Assimilation and Model Evaluation Experiment–North Atlantic Basin (DAMEE–NAB).
References

Arbic BK, Wallcraft AJ, Metzger EJ (2010) Concurrent simulation of the eddying general circulation and tides in a global ocean model. Ocean Model 32:175–187
Bane JM Jr, Dewar WK (1988) Gulf Stream bimodality and variability downstream of the Charleston bump. J Geophys Res 93(C6):6695–6710
Barnier B, Madec G, Penduff T, Molines JM, Treguier AM, Le Sommer J, Beckmann A, Biastoch A, Böning C, Dengg J, Derval C, Durand E, Gulev S, Remy E, Talandier C, Theeten S, Maltrud M, McClean J, De Cuevas B (2006) Impact of partial steps and momentum advection schemes in a global ocean circulation model at eddy-permitting resolution. Ocean Dyn 56:543–567. doi:10.1007/s10236-006-0082-1
Barron CN, Smedstad LF, Dastugue JM, Smedstad OM (2007) Evaluation of ocean models using observed and simulated drifter trajectories: impact of sea surface height on synthetic profiles for data assimilation. J Geophys Res 112:C07019. doi:10.1029/2006JC002982
Bleck R (2002) An oceanic general circulation model framed in hybrid isopycnic-cartesian coordinates. Ocean Model 37:55–88
Bleck R, Smith L (1990) A wind-driven isopycnic coordinate model of the north and equatorial Atlantic Ocean. 1. Model development and supporting experiments. J Geophys Res 95:3273–3285
Bower AS, Hunt HD (2000) Lagrangian observations of the deep western boundary current in the North Atlantic Ocean. Part II. The Gulf Stream—deep western boundary current crossover. J Phys Oceanogr 30:784–804
Bryan FO, Hecht MW, Smith RD (2007) Resolution convergence and sensitivity studies with North Atlantic circulation models. Part I: the western boundary current system. Ocean Model 16:141–159
Chassignet EP, Marshall DP (2008) Gulf Stream separation in numerical ocean models. In: Hecht M, Hasumi H (eds) Ocean modeling in an eddying regime, geophysical monograph 177. American Geophysical Union, Washington
Chassignet EP, Hurlburt HE, Metzger EJ, Smedstad OM, Cummings JA, Halliwell GR, Bleck R, Baraille R, Wallcraft AJ, Lozano C, Tolman HL, Srinivasan A, Hankin S, Cornillon P, Weisberg R, Barth A, He R, Werner F, Wilkin J (2009) US GODAE: global ocean prediction with the HYbrid Coordinate Ocean Model (HYCOM). Oceanography 22:64–75
Cooper M, Haines KA (1996) Altimetric assimilation with water property conservation. J Geophys Res 24:1059–1077
Cummings JA (2005) Operational multivariate ocean data assimilation. Quart J R Meteor Soc 131:3583–3604
Fox DN, Teague WJ, Barron CN, Carnes MR, Lee CM (2002) The modular ocean data analysis system (MODAS). J Atmos Ocean Technol 19:240–252
Gibson JK, Kallberg P, Uppala S, Hernandez A, Nomura A, Serrano E (1999) ERA ECMWF reanalysis project report series 1. ERA-15 description, version 2. European Centre for Medium-Range Weather Forecasts, Reading
Glenn SM, Ebbesmeyer CC (1994a) The structure and propagation of a Gulf Stream frontal eddy along the North Carolina shelf break. J Geophys Res 99(C3):5029–5046
Glenn SM, Ebbesmeyer CC (1994b) Observations of Gulf Stream frontal eddies in the vicinity of Cape Hatteras. J Geophys Res 99(C3):5047–5055
Godfrey JS (1989) A Sverdrup model of the depth-integrated flow for the world ocean allowing for island circulations. Geophys Astrophys Fluid Dyn 45:89–112
Gordon AL, Giulivi CF, Lee CM, Furey HH, Bower A, Talley L (2002) Japan/East Sea thermocline eddies. J Phys Oceanogr 32:1960–1974
Halkin D, Rossby HT (1985) The structure and transport of the Gulf Stream at 73°W. J Phys Oceanogr 15:1439–1452
Haltiner GJ, Martin FL (1957) Dynamical and Physical Meteorology. McGraw-Hill, New York
Hecht MW, Smith RD (2008) Towards a physical understanding of the North Atlantic: a review of model studies in an eddying regime. In: Hecht M, Hasumi H (eds) Ocean modeling in an eddying regime, geophysical monograph 177. American Geophysical Union, Washington
Hecht MW, Petersen MR, Wingate BA, Hunke E, Maltrud ME (2008) Lateral mixing in the eddying regime and a new broad-ranging formulation. In: Hecht M, Hasumi H (eds) Ocean modeling in an eddying regime, geophysical monograph 177. American Geophysical Union, Washington
Hellerman S, Rosenstein M (1983) Normal monthly wind stress over the world ocean with error estimates. J Phys Oceanogr 13:1093–1104
Hogan PJ, Hurlburt HE (2006) Why do intrathermocline eddies form in the Japan/East Sea? A modeling perspective. Oceanography 19:134–143
Hogg NG, Stommel H (1985) On the relation between the deep circulation and the Gulf Stream. Deep-Sea Res 32:1181–1193
Hurlburt HE (1986) Dynamic transfer of simulated altimeter data into subsurface information by a numerical ocean model. J Geophys Res 91(C2):2372–2400
Hurlburt HE, Hogan PJ (2000) Impact of 1/8° to 1/64° resolution on Gulf Stream model-data comparisons in basin-scale subtropical Atlantic Ocean models. Dyn Atmos Ocean 32:283–329
Hurlburt HE, Hogan PJ (2008) The Gulf Stream pathway and the impacts of the eddy-driven abyssal circulation and the Deep Western Boundary Current. Dyn Atmos Ocean 45:71–101
Hurlburt HE, Thompson JD (1980) A numerical study of Loop Current intrusions and eddy shedding. J Phys Oceanogr 10:1611–1651
Hurlburt HE, Thompson JD (1982) The dynamics of the Loop Current and shed eddies in a numerical model of the Gulf of Mexico. In: Nihoul JCJ (ed) Hydrodynamics of semi-enclosed seas. Elsevier, Amsterdam
Hurlburt HE, Wallcraft AJ, Schmitz WJ Jr, Hogan PJ, Metzger EJ (1996) Dynamics of the Kuroshio/Oyashio current system using eddy-resolving models of the North Pacific Ocean. J Geophys Res 101(C1):941–976
Hurlburt HE, Chassignet EP, Cummings JA, Kara AB, Metzger EJ, Shriver JF, Smedstad OM, Wallcraft AJ, Barron CN (2008a) Eddy-resolving global ocean prediction. In: Hecht M, Hasumi H (eds) Ocean modeling in an eddying regime, geophysical monograph 177. American Geophysical Union, Washington
Hurlburt HE, Metzger EJ, Hogan PJ, Tilburg CE, Shriver JF (2008b) Steering of upper ocean currents and fronts by the topographically constrained abyssal circulation. Dyn Atmos Ocean 45:102–134. doi:10.1016/j.dynatmoce.2008.06.003
Hurlburt HE, Brassington GB, Drillet Y, Kamachi M, Benkiran M, Bourdallé-Badie R, Chassignet EP, Jacobs GA, Le Galloudec O, Lellouche JM, Metzger EJ, Oke PR, Pugh TF, Schiller A, Smedstad OM, Tranchant B, Tsujino H, Usui N, Wallcraft AJ (2009) High-resolution global and basin-scale ocean analyses and forecasts. Oceanography 22:110–127
Johns WE, Shay TJ, Bane JM, Watts DR (1995) Gulf Stream structure, transport, and recirculation near 68°W. J Geophys Res 100:817–838
Joyce TM, Wunsch C, Pierce SD (1986) Synoptic Gulf Stream velocity profiles through simultaneous inversion of hydrographic and acoustic Doppler data. J Geophys Res 91:7573–7585
Kallberg P, Simmons A, Uppala S, Fuentes M (2004) ERA-40 project report series: 17. The ERA-40 archive. ECMWF, Reading
Kara AB, Hurlburt HE, Wallcraft AJ (2005) Stability-dependent exchange coefficients for air-sea fluxes. J Atmos Ocean Technol 22:1080–1094
Kara AB, Wallcraft AJ, Martin PJ, Pauley RL (2009) Optimizing surface winds using QuikSCAT measurements in the Mediterranean Sea during 2000–2006. J Mar Sys 78:119–131
Large WG, Pond S (1981) Open ocean momentum flux measurements in moderate to strong winds. J Phys Oceanogr 11(3):324–336
Lee H (1997) A Gulf Stream synthetic geoid for the TOPEX altimeter. MS thesis, Rutgers University, New Brunswick
Lee T, Cornillon P (1996) Propagation of Gulf Stream meanders between 74° and 70°W. J Phys Oceanogr 26:205–224
Legeckis R, Brown CW, Chang PS (2002) Geostationary satellites reveal motions of ocean surface fronts. J Mar Sys 37:3–15
Legeckis RV (1979) Satellite observations of the influence of bottom topography on the seaward deflection of the Gulf Stream off Charleston, South Carolina. J Phys Oceanogr 9:483–497
Madec G (2008) NEMO ocean engine. Report 27, ISSN No 1288–1619. Institute Pierre-Simon Laplace (IPSL), France
Munk WH (1950) On the wind-driven ocean circulation. J Met 7:79–93
Paiva AM, Hargrove JT, Chassignet EP, Bleck R (1999) Turbulent behavior of a fine mesh (1/12°) numerical simulation of the North Atlantic. J Mar Sys 21:307–320
Pickart RS (1994) Interaction of the Gulf Stream and Deep Western Boundary Current where they cross. J Geophys Res 99:25155–25164
Pickart RS, Watts DR (1990) Deep Western Boundary Current variability at Cape Hatteras. J Mar Res 48:765–791
Reid RO (1972) A simple dynamic model of the Loop Current. In: Capurro LRA, Reid JL (eds) Contributions on the Physical Oceanography of the Gulf of Mexico. Gulf Publishing Co, Houston
Rosmond TE, Teixeira J, Peng M, Hogan TF, Pauley R (2002) Navy operational global atmospheric prediction system (NOGAPS): forcing for ocean models. Oceanography 15(1):99–108
Rossby CG (1940) Planetary flow patterns in the atmosphere. Quart J R Meteor Soc 66:68–87
Rossby T, Flagg CN, Donohue K (2005) Interannual variations in upper-ocean transport by the Gulf Stream and adjacent waters between New Jersey and Bermuda. J Mar Res 63:203–226
Schmitz WJ Jr (1996) On the world ocean circulation: Volume I. Some global features/North Atlantic circulation. Technical Report WHOI-96-03, Woods Hole Oceanographic Institution, Woods Hole
Schmitz WJ Jr, McCartney MS (1993) On the North Atlantic circulation. Rev Geophys 31:29–49
Shriver JF, Hurlburt HE, Smedstad OM, Wallcraft AJ, Rhodes RC (2007) 1/32° real-time global ocean prediction and value-added over 1/16° resolution. J Mar Sys 65:3–26
Smedstad OM, Hurlburt HE, Metzger EJ, Rhodes RC, Shriver JF, Wallcraft AJ, Kara AB (2003) An operational eddy resolving 1/16° global ocean nowcast/forecast system. J Mar Sys 40–41:341–361
Smith RD, Maltrud ME, Bryan FO, Hecht MW (2000) Numerical simulation of the North Atlantic Ocean at 1/10°. J Phys Oceanogr 30:1532–1561
Sverdrup HU (1947) Wind-driven currents in a baroclinic ocean—with application to the equatorial currents of the eastern Pacific. Proc Natl Acad Sci U S A 33:318–326
Thompson JD, Schmitz WJ Jr (1989) A regional primitive-equation model of the Gulf Stream: design and initial experiments. J Phys Oceanogr 19:791–814
Townsend TL, Hurlburt HE, Hogan PJ (2000) Modeled Sverdrup flow in the North Atlantic from 11 different wind stress climatologies. Dyn Atmos Ocean 32:373–417
Tsujino H, Usui N, Nakano H (2006) Dynamics of Kuroshio path variations in a high-resolution general circulation model. J Geophys Res. doi:10.1029/2005JC003118
Usui N, Tsujino H, Fujii Y (2006) Short-range prediction experiments of the Kuroshio path variabilities south of Japan. Ocean Dyn 56:607–623
Usui N, Tsujino H, Fujii Y, Kamachi M (2008a) Generation of a trigger meander for the 2004 Kuroshio large meander. J Geophys Res. doi:10.1029/2007JC004266
Usui N, Tsujino H, Nakano H, Fujii Y (2008b) Formation process of the Kuroshio large meander in 2004. J Geophys Res. doi:10.1029/2007JC004675
Watts DR, Tracey KL, Bane JM, Shay TJ (1995) Gulf Stream path and thermocline structure near 74°W and 68°W. J Geophys Res 100:18291–18312
Xie L, Liu X, Pietrafesa LJ (2007) Effect of bathymetric curvature on Gulf Stream instability in the vicinity of the Charleston Bump. J Phys Oceanogr 37(3):452–475
Chapter 22
Ocean Forecasting Systems: Product Evaluation and Skill
Matthew Martin
Abstract The evaluation of output from ocean forecasting systems is important in order to inform users how much confidence can be placed in the products, and helps identify areas for improvement in the systems. An overview of the statistical methods which can be used to perform the evaluation is provided. Examples of some commonly used methods from various GODAE systems are given, including evaluation of large-scale model performance, the use of output from data assimilation systems, the use of independent data, and comparison of forecasts with analyses.
22.1 Introduction

The aim of ocean forecasting systems is to provide information about the past, present and future state of the ocean to a range of users. There is a wide range of applications including defence, ship routing, oil spill prediction, weather forecasting, climate monitoring and scientific research. In order for the outputs produced by ocean forecasting systems to be useful for these applications, the ability of the systems to represent the real world must be assessed. This will inform the users of where and when the products can be used, and with how much confidence. It also aids development of the forecasting systems themselves, highlighting areas where improvements can be made.

The state variables from ocean forecasting systems are the sea surface height and the three-dimensional temperature, salinity and currents. For those systems which include sea-ice models, the sea-ice concentration, velocity and thickness are also produced. Other diagnostic quantities such as mixed-layer depth and transports are also of interest to users of model output. The applications which use this ocean information cover a wide range of time and space scales from large-scale climate monitoring and seasonal forecasting applications, which look at evolution over months with basin-wide and global coverage, through to the analysis and prediction
of mixed-layer depth over a diurnal cycle. This range of spatial and temporal scales must therefore be taken into account when assessing the products.

The amount of information available from ocean forecasting systems is immense: the model state vector usually contains at least of the order of $10^7$ variables at any particular time (and for some ocean forecasting systems significantly more than this). It is usually impossible for users of the output to access all of this data, and so some post-processing is often performed to synthesise this information. This may involve interpolation or averaging in space and time, and may also involve production of other diagnostic information which is of more relevance to a particular user. The impact of the post-processing on the accuracy of the data which are provided must be assessed, usually by assessing the post-processed fields directly.

In order to evaluate products from ocean forecasting systems and relate them to the real world, observations are required. These could take the form of climatologies, analyses of satellite data, or raw observed values. In all cases, the accuracy of the observations needs to be assessed and taken into account when performing the comparison between model and observations. It is also important to use observations which have been quality controlled; comparison with "bad" observations can lead to confusing results.

A number of aspects affect the quality of the output from ocean forecasting systems. The most obvious is the quality of the model used to produce the forecast, including its horizontal and vertical resolution, and the parametrisations which are used. The surface forcing fields used to drive the model (or in the case of a coupled model, the quality of the atmospheric model) also have a significant impact on the quality of the ocean forecast. For regional models, lateral boundaries can play a significant role. The data assimilation scheme used to initialise the model has a large impact on the accuracy of the forecast: the type and number of observations being used in the assimilation, the assimilation scheme itself, and the quality control of observations all have an impact on the accuracy of the analysis and the subsequent forecast.

A review of some statistical concepts which are required to assess model output is given in the next section. A summary of the main issues with the observations available for use in the evaluation of model products is then given. This is followed by some specific examples of product evaluation taken from various GODAE systems. An overall summary is then given.
22.2 Statistical Concepts

A number of statistical measures are required to thoroughly assess the output of ocean forecasting systems. Three different concepts are described here, aimed at determining the accuracy of the analysis and forecast, the ability of the model to represent the patterns of the observations, sometimes termed association (Murphy et al. 1995), and the skill of the forecast. Representing this information in a concise
way can be done through some well-known summary diagrams which are briefly described.

We assume that there is a verification data set consisting of a set of $N$ observations, $y_i, i = 1, \ldots, N$, with mean value $\bar{y}$. The model values at the same time and location as these observations are denoted $x_i, i = 1, \ldots, N$, and have mean value $\bar{x}$. The mean difference is a measure of the ability of the model to represent the mean observed state:
$$\mathrm{MD} = \frac{1}{N}\sum_{i=1}^{N}(x_i - y_i). \qquad (22.1)$$
22.2.1 Accuracy

The accuracy of the forecast is usually assessed using the root mean square difference between the model and observed values:

$$\mathrm{RMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2}. \qquad (22.2)$$
22.2.2 Pattern

The ability of the model to reproduce the pattern in the observations can be measured using the correlation coefficient:

$$R = \frac{\frac{1}{N}\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sigma_x \sigma_y}, \qquad (22.3)$$
where $\sigma_x$ and $\sigma_y$ are the standard deviations of the model and observations respectively. The correlation coefficient provides information about whether the patterns in the model are similar to the patterns of the observations, but not about the amplitude of variation in the two fields. It reaches a value of 1 when the two fields have the same centred pattern of variation, a value of −1 when the two fields vary in the opposite sense to each other, and a value of zero when no correlation exists between the two fields. The square of the correlation coefficient, $R^2$, is also a useful quantity as it provides information on the fraction of the variance explained.

When the dominant source of variability in a field is a large-scale signal, for instance the seasonal cycle, most ocean models would easily reproduce the signal, resulting in high values of $R$. However, ocean forecasting systems produce information at smaller temporal and spatial scales. To assess these, it is instructive to calculate the anomaly correlation coefficient:
$$\mathrm{ACC} = \frac{\sum_{i=1}^{N}(x_i - C_i)(y_i - C_i)}{\sqrt{\sum_{i=1}^{N}(x_i - C_i)^2 \, \sum_{i=1}^{N}(y_i - C_i)^2}}, \qquad (22.4)$$
which provides information about the ability of the model forecast to reproduce the observational information when the seasonally varying climate signal, denoted C, has been removed.
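As a concrete illustration of how these measures might be computed in practice, the short Python/NumPy sketch below evaluates MD, RMSD, R and ACC for collocated model values, observations and climatology values. The function and array names are illustrative only and are not taken from any particular GODAE system.

```python
import numpy as np

def verification_stats(x, y, clim=None):
    """Basic verification statistics for collocated model values x,
    observations y and (optionally) climatology values clim.
    All inputs are 1-D arrays of equal length N."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    stats = {
        "MD": np.mean(x - y),                    # mean difference, Eq. (22.1)
        "RMSD": np.sqrt(np.mean((x - y) ** 2)),  # accuracy, Eq. (22.2)
        # correlation coefficient, Eq. (22.3)
        "R": np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std()),
    }
    if clim is not None:
        c = np.asarray(clim, float)
        xa, ya = x - c, y - c                    # anomalies w.r.t. climatology
        stats["ACC"] = np.sum(xa * ya) / np.sqrt(np.sum(xa ** 2) * np.sum(ya ** 2))  # Eq. (22.4)
    return stats

# Example with synthetic data standing in for a model-observation match-up set
rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0.0, 4.0 * np.pi, 200))
obs = truth + 0.1 * rng.standard_normal(200)
model = 0.9 * truth + 0.15 * rng.standard_normal(200)
print(verification_stats(model, obs, clim=np.zeros(200)))
```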
22.2.3 Skill

Determining the skill of a model forecast is dependent on the application and it is not possible to define one skill score that is universally appropriate. A number of scores have been suggested in the literature, some examples of which are given below. The skill of a forecast can be defined as the accuracy of the forecast relative to the accuracy of a reference field such as a climatology or persistence (Murphy 1995). A simple way of measuring this is given by:
$$\mathrm{SS}_1 = 1 - \frac{\mathrm{MSD}}{\mathrm{MSD}_{\mathrm{ref}}}, \qquad (22.5)$$
which measures the accuracy of the forecast relative to some reference, where MSD indicates the mean square difference (the square of Eq. 22.2) and the subscript ref indicates that the model value in Eq. 22.2 has been replaced by a climatology or persistence estimate. A value of 1 implies that the forecast has perfect skill while a value of zero implies no skill.

In the above skill score, no account is taken of correlations or bias. Taylor (2001) suggests the following score, which is based on the correlation coefficient and the model and observed variances:
$$\mathrm{SS}_2 = \frac{4(1 + R)}{(\hat{\sigma}_x + 1/\hat{\sigma}_x)^2 (1 + R_0)}, \qquad (22.6)$$
where $R_0$ is the maximum correlation attainable ($R_0 = 1$) and $\hat{\sigma}_x = \sigma_x/\sigma_y$ is the normalised standard deviation. Another skill score, which uses the correlation and variances and also includes the biases in the model and observations, as suggested by Metzger et al. (2008), is given by:
$$\mathrm{SS}_3 = R^2 - \left[R - (\sigma_x/\sigma_y)\right]^2 - \left[(\bar{x} - \bar{y})/\sigma_y\right]^2. \qquad (22.7)$$
This skill score is equivalent to that defined in Eq. 22.5 when the reference used is the mean of the observations (Murphy 1988). This way of decomposing the score can be useful because the various contributions can be assessed: the correlation, the conditional bias and the unconditional bias.

For probabilistic forecasting systems, a wide range of skill scores are often used, such as the Brier skill score (Brier 1950), or the Relative Operating Characteristic (ROC) score, which is used to determine the relationship between the number of events which were correctly forecast and the number of false alarms. These skill scores are widely used in seasonal prediction systems and ensemble weather forecasting systems, but few short-range ocean forecasting systems currently produce ensemble forecasts. These skill scores are not covered in depth here; see Atger (1999) and references therein for further information.
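A hedged sketch of how the three deterministic skill scores above could be coded is given below; it assumes the same collocated model (x) and observation (y) arrays as before, together with a reference forecast (persistence or climatology) for SS1, and it uses the form of Eq. 22.7 as reconstructed above.

```python
import numpy as np

def ss1(x, y, x_ref):
    """Eq. (22.5): skill relative to a reference forecast such as
    persistence or climatology."""
    msd = np.mean((np.asarray(x) - np.asarray(y)) ** 2)
    msd_ref = np.mean((np.asarray(x_ref) - np.asarray(y)) ** 2)
    return 1.0 - msd / msd_ref

def ss2(x, y, r0=1.0):
    """Eq. (22.6): Taylor (2001) score from the correlation and the
    normalised standard deviation."""
    r = np.corrcoef(x, y)[0, 1]
    sig_hat = np.std(x) / np.std(y)
    return 4.0 * (1.0 + r) / ((sig_hat + 1.0 / sig_hat) ** 2 * (1.0 + r0))

def ss3(x, y):
    """Eq. (22.7): Murphy (1988) decomposition into correlation,
    conditional bias and unconditional bias terms."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    cond_bias = r - np.std(x) / np.std(y)
    uncond_bias = (np.mean(x) - np.mean(y)) / np.std(y)
    return r ** 2 - cond_bias ** 2 - uncond_bias ** 2
```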
22.2.4 Summary Diagrams

In order to characterise the differences between the model and observations it is important to take into account the correspondence in both the patterns and the variances of the two fields. We define the centred pattern RMSD as:

$$\mathrm{CRMSD} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[(x_i - \bar{x}) - (y_i - \bar{y})\right]^2}. \qquad (22.8)$$

Taylor (2001) noticed that a simple relationship exists between the correlation coefficient, the centred pattern RMS difference, and the variances of the fields in question. The relationship is given by:
$$\mathrm{CRMSD}^2 = \sigma_x^2 + \sigma_y^2 - 2\sigma_x\sigma_y R, \qquad (22.9)$$
which takes the same form as the law of cosines ($c^2 = a^2 + b^2 - 2ab\cos\gamma$). This relationship can be used to plot the information about $R$, CRMSD and the variances in the model and observations as a point on a single diagram. In order to make it possible to compare fields with different units, the statistics can be non-dimensionalised by normalising each variable in Eq. (22.8) by the standard deviation in the observed field, which leaves the correlation coefficient unchanged.

A schematic Taylor diagram is shown in Fig. 22.1. If the model exactly reproduced the observations, it would lie at the point indicated by the black circle. The distance between this black circle and the actual model point (the blue diamond in this example) represents the CRMSD, and the dotted arcs on the diagram represent lines of constant CRMSD. The correlation coefficient is represented on the outer arc of the diagram, with correlation increasing with the angle from the y-axis. The normalised standard deviation is represented as the distance to the origin, with a ratio of one denoted by the dashed arc (if the point is closer to the origin the model has lower variance than the observations). The power of the Taylor diagram lies in the ability to plot numerous model runs on a single diagram and to compare these various aspects of the models' performance.

Fig. 22.1 Schematic description of a Taylor diagram

One drawback of the Taylor diagram is that the mean error of the models is not accounted for. The so-called Target diagram (Jolliff et al. 2009) can be used to represent complementary information about the statistical performance of models. In this case, the relationship between the total mean square difference and the unbiased MSD and bias, $\mathrm{RMSD}^2 = \mathrm{MD}^2 + \mathrm{CRMSD}^2$, is plotted on a diagram where the x-axis represents CRMSD and the y-axis represents the bias. Since CRMSD is a positive quantity by definition, the negative x-axis can be utilised to include information about the standard deviation difference, by multiplying the CRMSD by the sign of the standard deviation difference.
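The statistics needed to place a model run on a Taylor or Target diagram can be gathered as in the sketch below; this is a minimal illustration of Eqs. 22.8 and 22.9 rather than the plotting code used for the figures in this chapter.

```python
import numpy as np

def taylor_stats(x, y):
    """Normalised statistics for plotting model values x against
    observations y as one point on a Taylor diagram."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    crmsd = np.sqrt(np.mean(((x - x.mean()) - (y - y.mean())) ** 2))  # Eq. (22.8)
    sx, sy = np.std(x), np.std(y)
    # Eq. (22.9): the law-of-cosines relationship between CRMSD, R and the variances
    assert np.isclose(crmsd ** 2, sx ** 2 + sy ** 2 - 2.0 * sx * sy * r)
    return {"sigma_norm": sx / sy, "R": r, "CRMSD_norm": crmsd / sy}

def target_stats(x, y):
    """Coordinates for a Target diagram (bias on the y-axis, signed
    CRMSD on the x-axis), following Jolliff et al. (2009)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    bias = np.mean(x - y)
    crmsd = np.sqrt(np.mean(((x - x.mean()) - (y - y.mean())) ** 2))
    sign = np.sign(np.std(x) - np.std(y))
    return {"x": sign * crmsd, "y": bias, "RMSD": np.hypot(bias, crmsd)}
```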
22.3 Observations

Various observation types are available for use in validating and verifying ocean forecasting systems and are detailed elsewhere. Some general points about the use of these data in evaluating model output are outlined here.

For satellite data, a number of levels of processing are performed to produce observations of the quantities which are output by ocean models. For instance, sea surface temperature (SST) data undergo various levels of data processing, from the level 1 brightness temperatures measured by the satellites, through the level 2 conversion to sea surface temperature at the native resolution and the level 3 re-gridding of the data, through to the level 4 objective analyses. Each level of processing affects the accuracy and representativeness of the data, so it is important to be clear what the observations are representing before using them for evaluation.

It is important to recognise that the observations used in model evaluation are not themselves perfect representations of the true state of the ocean. Measurement techniques will introduce some error into the observations. The observations are also usually made at a specific location, whereas the model represents an area-average value. This means that the model cannot represent all processes affecting the observations. These errors of representativity, as well as increasing the apparent error in the observations, can lead to correlated errors which will affect the interpretation of the resulting statistics. All of these errors should therefore be taken into account when assessing the results of any model-observation comparison.

As well as the errors described in the previous paragraph, observations often report erroneous values. This can happen for a number of reasons, such as misreporting of location, corruption of the observation during transmission, or instrument error. One or two bad observations can significantly impact the results of any validation/verification, so it is important that a thorough check on the quality of the data is performed prior to the evaluation. This quality control can be performed in a number of ways, but usually consists of a comparison between the observations and some reference field, either from a model forecast or from an observed climatology (see for example Ingleby and Huddleston 2007).
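A much-simplified background check of the kind described above might look like the following sketch; the error values and the three-standard-deviation tolerance are illustrative assumptions, not the actual criteria of Ingleby and Huddleston (2007).

```python
import numpy as np

def background_check(obs, ref, obs_err, ref_err, n_sigma=3.0):
    """Accept observations whose departure from a reference field (model
    background or climatology) is within n_sigma times the combined
    expected error; reject the rest."""
    departure = np.asarray(obs, float) - np.asarray(ref, float)
    tolerance = n_sigma * np.sqrt(np.asarray(obs_err, float) ** 2 +
                                  np.asarray(ref_err, float) ** 2)
    return np.abs(departure) <= tolerance   # True = accept, False = reject

# Example: SST observations checked against climatology; the third value is
# clearly erroneous and is rejected.
obs = np.array([15.2, 14.8, 35.0, 15.1])
clim = np.array([15.0, 15.0, 15.0, 15.0])
print(background_check(obs, clim, obs_err=0.3, ref_err=0.5))
```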
22.4 Evaluating Ocean Analyses and Forecasts

The usual process for developing a new ocean forecasting system, or significant upgrades to an existing system, involves a number of stages. Scientific developments will be tested individually to ensure that they are producing the expected change in the system. Once a number of developments are available, they will be put together into a new version of the system, and this must then be thoroughly evaluated during the validation phase. This validation is usually done by means of the evaluation of a set of hindcasts of the system, where the system is run over a multi-annual period in the past. This tests that the overall changes to the system produce the expected improvements. Once the validation has been carried out, the system can be implemented operationally. At this stage it is important to continuously assess and monitor the accuracy of the system using a verification system. The results of both the validation and verification are useful for providing information to users of the system about the expected accuracy. User-specific evaluations can also be carried out to assess the suitability of the system for a given application.

A number of examples of evaluating ocean forecasting systems are given below, taken from various sources (e.g. Ferry et al. 2007; Oke et al. 2008; Metzger et al. 2008, 2009; Storkey et al. 2010), providing illustrations of some commonly used methods. The advantages and shortcomings of each method are outlined.
22.4.1 Evaluating the Large-Scale Mean and Variability

It is important to check that the average properties of the ocean forecasting systems are providing a good representation of the ocean climate. This is usually done by comparing multi-annual averages to climatologies generated from observational data-sets. One example of this is a comparison between a mean dynamic topography (MDT, such as that of Rio et al. 2005, or Maximenko and Niiler 2005) and the model's average sea surface height field. This provides a useful guide as to the ability of the model to represent the large-scale ocean circulation (see for example Metzger et al. 2008).

Temperature and salinity can also be assessed using a suitable climatological data-set. In Fig. 22.2, the annual mean temperature anomalies from the World Ocean Atlas 2005 (Locarnini et al. 2006) are shown as cross-sections from two hindcasts of the ¼° resolution global FOAM system, one without assimilation and one with. This shows that the data assimilation is able to reduce the drifts of the model away from climatology. One has to be careful when performing these comparisons that any inter-annual signal is not contaminating the results. For example, in Fig. 22.2f there is a clear La Niña signal, where the model is representing a true deviation from climatology.

Fig. 22.2 Annual mean temperature anomalies from the WOA05 climatology for 2008 from FOAM both without (a, c, e) and with (b, d, f) data assimilation. a, b show a cross-section in the Indian Ocean along 90°E. c, d show a cross-section in the Atlantic Ocean along 30°W. e, f show a cross-section along the equator

The variability in the model and observations can also be assessed. For instance, sea surface height (SSH) can be used to measure the amount of mesoscale activity. This can be estimated from observations provided by satellite altimeters, and also from ocean models. Figure 22.3 shows an example of this from the GLORYS reanalysis produced by Mercator using the ¼° resolution NEMO model with data assimilation. The standard deviation of the data is shown next to the standard deviation of the model fields over a 6-year period. Here, the data used to calculate the observed variability are assimilated in the reanalysis product, so this test is only useful to check that the assimilation of data is working correctly. The model analyses reproduce the observed variability very well, including in the western boundary currents, where it is difficult to accurately represent the mesoscale variability at ¼° resolution. The only regions where the model variability is significantly different from the observed variability are the Zapiola Rise region and parts of the South Pacific.

Fig. 22.3 Standard deviation of SSH for the period 2002–2007 from a Aviso data and b the GLORYS reanalysis product

Both the average and variability comparisons described above are useful as a first-order check on the ability of the model to represent the large-scale ocean features, and can give confidence that the model is behaving as expected. However, they do not give information about the accuracy or skill of the model and so are of limited use to most users. More detailed investigations are required for this, and are described below.
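A first-order variability check of the kind shown in Fig. 22.3 can be sketched as follows; model_ssh and obs_ssh are assumed to be gridded (time, lat, lon) arrays read from the model output and from a gridded altimeter product, and the minimum-sample threshold is an arbitrary choice.

```python
import numpy as np

def ssh_variability(ssh, min_valid=50):
    """Temporal standard deviation of SSH at each grid point.

    ssh : array of shape (ntime, nlat, nlon), with NaNs where there are
    no valid data (e.g. over land or under sea ice)."""
    valid = np.isfinite(ssh).sum(axis=0)
    std = np.where(valid >= min_valid, np.nanstd(ssh, axis=0), np.nan)
    return std

# Hypothetical usage, assuming model_ssh and obs_ssh have already been read
# and put on the same grid:
# variability_difference = ssh_variability(model_ssh) - ssh_variability(obs_ssh)
```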
22.4.2 Data Assimilation Statistics

In the data assimilation process, the observation operator $h$ is used to interpolate the model forecast field $x_f$ to the location in time and space of the observations, $y$. This enables calculation of the innovations, $d = y - h(x_f)$. Once the data assimilation has been performed it is also possible to calculate the equivalent using the analysis field $x_a$ to produce the residuals, $r = y - h(x_a)$. The reduction in the errors between the analysis and the forecast can be used as an a posteriori check that the data assimilation process is working as expected, and is fitting the observations to within their error (see for example Cummings 2005). The increments generated through the data assimilation process also provide an important source of information. The time-average of these increments can indicate areas of significant model bias. However, it is not always obvious how to diagnose the source of these biases.

For validation and verification of the model forecast, it is the innovation statistics that are of most interest, as they provide a pseudo-independent check on its accuracy. The observations being used for this comparison have not previously been assimilated, so from that point of view they are independent. However, previous observations of the same type will have been assimilated on previous data assimilation cycles, so they cannot be viewed as completely independent.

An example of the innovation statistics from a 2-year reanalysis using the ¼° global FOAM system (Storkey et al. 2010) is shown in Fig. 22.4. This includes the mean and RMS of the innovations for SSH and for temperature. The mean errors show that the system is able to represent the global average observed SSH and temperature well, although a small positive temperature bias exists below the top 50 m, with a cold bias above this depth for most of the period. The RMS of the innovations provides a measure of the overall accuracy of the system both as a function of time and, for temperature, of depth. The maximum of the global temperature errors is located within the top 200 m, with much smaller errors below this depth. These time-series plots also illustrate the stability of the system, with the SSH being relatively stable, whereas the temperature RMS errors appear to have a seasonal cycle with smaller errors in Northern Hemisphere winter.
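The innovation and residual statistics described above can be accumulated with a few lines of code; the sketch below assumes that the forecast and analysis fields have already been interpolated to the observation locations by the observation operator, and the depth bins are arbitrary.

```python
import numpy as np

def innovation_stats(y, hxf, hxa=None):
    """Mean and RMS of the innovations d = y - h(x_f) and, if an analysis
    is supplied, of the residuals r = y - h(x_a)."""
    def mean_rms(v):
        return float(np.mean(v)), float(np.sqrt(np.mean(v ** 2)))
    out = {"innovation": mean_rms(np.asarray(y) - np.asarray(hxf))}
    if hxa is not None:
        out["residual"] = mean_rms(np.asarray(y) - np.asarray(hxa))
    return out

def rms_by_depth(y, hx, depth, bin_edges):
    """RMS of y - h(x) in depth bins, e.g. to build profiles of the kind
    shown for temperature in Fig. 22.4."""
    d = np.asarray(y) - np.asarray(hx)
    idx = np.digitize(depth, bin_edges)
    return [np.sqrt(np.mean(d[idx == k] ** 2)) if np.any(idx == k) else np.nan
            for k in range(1, len(bin_edges))]
```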
Fig. 22.4 a Mean (dotted line) and RMS (solid line) of the innovations for SSH. b Mean of the temperature innovations as a function of depth and time. c RMS of the temperature innovations as a function of depth and time, for the FOAM reanalysis for the period January 2007–December 2008
An example of the use of Taylor diagrams for plotting innovation statistics is shown in Fig. 22.5, with results from a hindcast run of the ¼° resolution global FOAM system (Storkey et al. 2010). This shows the statistics for a number of different regions for both SST and SSH. The SST statistics are only shown for a comparison with the AATSR data, although other satellite SST data were assimilated. The variability in both these variables is well reproduced by the model in all regions, but the correlations and RMS differences are clearly regionally dependent, with the Mediterranean region having the largest RMS errors and lowest correlation coefficient.
22.4.3 Evaluation of Analyses and Forecasts Using Independent Data

In operational assimilation systems, the aim is to provide the best possible estimate of the ocean state, and so all available data are assimilated. However, some data-sets are not available in real time and so can be used in delayed mode to validate the results. An example of this is the RAPID array, which measures sub-surface ocean properties in the North Atlantic in order to produce estimates of the Atlantic Meridional Overturning Circulation (AMOC).

Qualitative inter-comparisons can be made between ocean model output of SSH and satellite ocean colour data (see for example Storkey et al. 2010). These can help to show the performance of the systems in reproducing the position of mesoscale eddies and fronts, but it is difficult to produce robust quantitative statistics using this sort of technique.

A method which is often used to validate ocean models in a hindcast setting is to withhold certain data from the data assimilation, and use these independent data to validate the results. This is a useful technique as it provides an independent check that the data assimilation system is working as expected. It is not possible to use this to assess the overall accuracy of the system, as the withheld data would be assimilated in the operational system, but it can give a bound on the expected accuracy. An example of this technique is shown in Oke et al. (2008), which shows results from the Bluelink Reanalysis system. Here some unassimilated Argo profiles are used to assess the RMSD in the assimilation run and in the run without data assimilation. In all regions, at almost all depths, the assimilation improves the model's representation of sub-surface temperature when compared to the non-assimilating model.

Some data-sets provide information about variables which are not assimilated in most ocean forecasting systems at present. For instance, most of the current operational forecasting systems do not assimilate velocity data. Direct measurements of velocity are sparse, but there are some data from the tropical moorings and other time-series stations.
Fig. 22.5 Taylor diagrams from a 2-year hindcast of the FOAM system for a SST compared with AATSR data and b SSH compared with along-track altimeter data. The different colours and symbols represent the statistics for different geographical regions
There are also measurements of velocity from surface drifting buoys, and these provide near-global coverage. These can be used as an independent check on the surface ocean currents, an important variable for a number of users.

Surface drifters consist of a surface buoy which is attached to a subsurface drogue. This drogue is usually centred at 15 m depth. The buoy measures temperature (and sometimes other ocean/atmosphere properties), and the position of the drifter is usually inferred from satellite transmission information. The SST data and position of the drifter are disseminated via the Global Telecommunication System (GTS). Three months of data from 1st January–31st March 2006 were quality controlled by checking the SST against climatology using a Bayesian technique, and by checking that the average daily velocity of the floats did not exceed 2 m/s. The daily mean velocity values from the drifter data were calculated by estimating the distance in the latitudinal and longitudinal directions between the first and last float positions during each day, and dividing the distance by the difference in their reporting times. The modelled velocity corresponding to the observed velocity was calculated by interpolating the model's daily mean velocity fields to all of the observed drifter locations using bilinear interpolation, and averaging the values for each day.

There are a number of issues with estimated velocities from surface drifters, for example aliasing of inertial oscillations, inaccuracy of position data, unknown drogue depths, un-drogued data and different reporting frequencies. The technique described in the previous paragraph also introduces errors, as the curvature in the path of the drifter is not taken into account. Other techniques for comparing the model and observed velocities exist. For example, one could input the starting position for each drifter on a particular day, run the model forward to estimate its position at the end of the day, and compare that with the final observed position of the drifter. Statistics on these position errors could then be calculated and assessed.

Various experiments were performed with the 1/9° resolution FOAM North Atlantic system (as it was in 2006, see Martin et al. 2007 for details) in order to assess the impact of different aspects of the system on the surface currents. Figure 22.6 shows the Taylor diagrams for a sample of these experiments for the u and v components of the velocities in the North Atlantic. The first experiment (in light blue) was a re-run of the operational FOAM system, which shows that the variability in the model was close to the observed variability but that the correlation was very low, with a fairly high RMSD. When not assimilating altimeter data (dark blue), the model's variability is much less, but the correlation coefficient is even worse. This implies that the altimeter assimilation is adding variability to the model which is not naturally included in the model. One way of getting round this problem is to increase the viscosity in the model so that any spurious variability is damped. The results from a run of FOAM with an increased viscosity are shown in green, and for comparison the results from HYCOM and Mercator (as they were in 2006) are also shown in yellow and orange respectively. The increased-viscosity run shows improvements in the correlation and reduced RMSD compared to the other FOAM runs, giving similar results to HYCOM and Mercator.
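The daily-mean drifter velocity estimate described above can be written down in a few lines; the sketch below uses a simple spherical-Earth conversion from degrees to metres and the 2 m/s gross check, ignores the dateline wrap-around, and is not the code used for the experiments reported here.

```python
import numpy as np

EARTH_RADIUS = 6.371e6  # metres

def daily_mean_velocity(lat, lon, time):
    """Daily-mean (u, v) in m/s for one drifter and one day, from the
    first and last reported positions (degrees) and times (seconds)."""
    lat, lon, time = map(np.asarray, (lat, lon, time))
    i0, i1 = np.argmin(time), np.argmax(time)
    dt = float(time[i1] - time[i0])
    if dt <= 0.0:
        return np.nan, np.nan
    mean_lat = np.deg2rad(0.5 * (lat[i0] + lat[i1]))
    dx = np.deg2rad(lon[i1] - lon[i0]) * EARTH_RADIUS * np.cos(mean_lat)  # eastward
    dy = np.deg2rad(lat[i1] - lat[i0]) * EARTH_RADIUS                     # northward
    return dx / dt, dy / dt

def passes_speed_check(u, v, max_speed=2.0):
    """Reject daily velocities faster than 2 m/s, as in the quality
    control described in the text."""
    return np.hypot(u, v) <= max_speed
```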
Fig. 22.6 Taylor diagrams for the a u and b v components of surface currents for various model runs during the period 1st January–31st March 2006, compared to velocities from surface drifters. Dark blue: FOAM with no altimeter assimilation; light blue: FOAM with altimeter assimilation; green: FOAM with altimeter assimilation and increased viscosity; yellow: HYCOM; orange: Mercator
22.4.4 Forecast Versus Analysis

In order to assess the forecasts from ocean models, one can assume that the analysis produced by the data assimilation provides a "best estimate". The subsequent forecast can be compared against the analysis valid at the same time, and the differences between these fields can be used, over a large number of realisations, to assess the skill of the model forecast. Various statistics can be calculated based on these differences; the most commonly used are the RMSD, mean and anomaly correlations, as described previously. It should be noted that these do not give the overall magnitude of the errors, as the analysis errors are not included, but they do provide information about the evolution of errors in time. The analysis errors should be computed separately (as described previously) and used in conjunction with these errors to provide information about the overall error in the forecasts.

An example of the growth in the SSH forecast errors from the HYCOM/NCODA system (Metzger et al. 2009) is shown in Fig. 22.7 for various regions. Here, the median ACC and RMSD statistics are plotted as a function of forecast length out to 14 days. Globally, the model forecasts clearly have higher ACC and lower RMSD than the persistence forecasts throughout the 14 days. The picture is slightly different when looking at particular regions, however. For instance, in the Kuroshio region the forecast model is not providing much more skill than persistence, due to the fact that the flow is dominated by mesoscale flow instabilities (rather than being dependent on the atmospheric forcing), although both forecast and persistence are more accurate than climatology throughout the period. In the Yellow Sea region, where the ocean responds rapidly to the atmospheric forcing, a persistence forecast quickly becomes no better than climatology, whereas the forecast retains some skill out to at least 5 days.

Another example of comparing forecasts with analyses is shown in Fig. 22.8, which shows August 2009 monthly average 5-day temperature forecast-minus-analysis differences from the ¼° global FOAM system at 25 and 50 m depths. A number of features are apparent in these figures, but we focus here on the main broad-scale signal: at 25 m depth there is a clear negative bias in the northern mid-to-high latitudes, with a corresponding warm anomaly at 50 m depth. This dipole pattern indicates that heat is being mixed too vigorously in the model. This suggests that either the wind forcing is too strong, or the mixing scheme in the model is not representing the real-world mixing correctly. It is possible to independently validate the wind forcing, for example using scatterometer data. In this case it is thought that the main problem lies with the model's mixing scheme, so the focus of model development here will be to improve this aspect of the model.
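A sketch of the forecast-versus-analysis scoring used to build curves like those in Fig. 22.7 is given below; it assumes arrays of forecasts, verifying analyses, persisted analyses and climatology collocated in a (case, lead time, grid point) layout, which is an illustrative data organisation rather than that of any particular system.

```python
import numpy as np

def lead_time_scores(forecasts, analyses, persistence, clim):
    """RMSD and anomaly correlation of forecasts against the verifying
    analyses as a function of lead time, with a persistence baseline.

    forecasts   : (ncase, nlead, npoints) forecast fields
    analyses    : (ncase, nlead, npoints) verifying analyses
    persistence : (ncase, npoints) analysis at the forecast start time
    clim        : (ncase, nlead, npoints) climatology at verification time"""
    def acc(a, b, c):
        aa, bb = a - c, b - c
        return np.sum(aa * bb) / np.sqrt(np.sum(aa ** 2) * np.sum(bb ** 2))
    ncase, nlead, _ = forecasts.shape
    out = {"rmsd_fc": [], "rmsd_per": [], "acc_fc": [], "acc_per": []}
    for k in range(nlead):
        f, a, c = forecasts[:, k], analyses[:, k], clim[:, k]
        out["rmsd_fc"].append(np.sqrt(np.mean((f - a) ** 2)))
        out["rmsd_per"].append(np.sqrt(np.mean((persistence - a) ** 2)))
        out["acc_fc"].append(acc(f.ravel(), a.ravel(), c.ravel()))
        out["acc_per"].append(acc(persistence.ravel(), a.ravel(), c.ravel()))
    return out
```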
Fig. 22.7 Median SSH anomaly correlation (left column) and median SSH RMSD (right column) against the verifying analysis as a function of forecast length for the global ocean (entire domain, top row), the Kuroshio (120–179°E, 21–55°N, second row), the northwest Arabian Sea (51–65°E, 15–26°N, third row) and the Yellow Sea (118–127°E, 30–42°N, bottom row). The red curves are HYCOM/NCODA forecasts, the cyan curves are for persistence of the nowcast, and the black RMSE curves are for the hindcast annual mean

22.4.5 Case Studies for Particular Applications

As described previously, ocean forecasting systems serve a large number of users. Among the most significant of these are the Navies, who are interested in a number of different outputs, including information about sound speed in the ocean in order to model the acoustics (Metzger et al. 2008, 2009). In order to produce accurate sound speed estimates, the temperature and salinity fields must be accurately determined, with the mixed-layer depth (MLD) and sonic-layer depth (SLD, Millero and Li 1994) of particular interest (amongst other parameters).
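The two layer depths mentioned above can be diagnosed from a single profile as in the following sketch; the 0.03 kg/m³ density threshold for the MLD is one commonly used criterion and is not necessarily the definition used in HYCOM/NCODA, and the sound-speed profile is assumed to have been computed beforehand from temperature, salinity and pressure.

```python
import numpy as np

def mixed_layer_depth(depth, density, delta_rho=0.03, ref_depth=10.0):
    """MLD as the shallowest depth where potential density exceeds its
    value near ref_depth by delta_rho (kg/m^3). depth must be increasing."""
    rho_ref = np.interp(ref_depth, depth, density)
    deeper = np.where(density > rho_ref + delta_rho)[0]
    return depth[deeper[0]] if deeper.size else depth[-1]

def sonic_layer_depth(depth, sound_speed):
    """SLD as the depth of the near-surface sound-speed maximum, below
    which acoustic energy is refracted downwards."""
    return depth[np.argmax(sound_speed)]

# Illustrative profile with a density increase below 30 m
z = np.arange(0.0, 200.0, 5.0)
rho = np.where(z <= 30.0, 1025.0, 1025.0 + 0.01 * (z - 30.0))
print(mixed_layer_depth(z, rho))  # -> 35.0 with these illustrative numbers
```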
Fig. 22.8 Monthly average 5-day forecast temperature differences for FOAM for August 2009, compared with analyses at a 25 m and b 50 m depth
Metzger et al. (2008, 2009) investigate the accuracy of the MLD and SLD forecasts in the HYCOM/NCODA system used by the US Navy. An example of this validation is shown in Fig. 22.9, which shows the mean and RMS errors in SLD as a function of forecast time for three regions. This shows that the model forecast and persistence both produce more accurate estimates of SLD than is available from climatological estimates throughout the 14-day forecast. The skill of the model is generally similar to that of persistence, although this result is regionally dependent. The RMS errors generally show a large amount of variability, which is most likely due to vertical interpolation errors, and could also be due to observation sampling issues.
Fig. 22.9 Error analysis of sonic layer depth (metres) as a function of forecast length, based on 48 14-day forecasts by HYCOM/NCODA for regions MER4d (top), the western Pacific (middle) and the Arabian Sea (bottom). The left column shows mean error and the right column shows RMSD. The black curves are for HYCOM/NCODA forecasts, the blue curves are for persistence of the nowcast ocean state and the red curves are for the GDEM3 climatology. Note the y-axis differs between most plots
22.5 Summary and Conclusions

An overview of methods which can be used to evaluate the accuracy and skill of ocean forecasts has been presented. Various statistical methods which can be used to perform evaluations have been defined, together with some useful diagrams for summarising related statistical information. A discussion on the importance of knowledge about the accuracy and quality of the observations used in the evaluations has also been given.

Some examples of the application of the various statistical measures to GODAE ocean forecasting systems have been given. These were used to highlight the need to evaluate the ability of the model to reproduce the large-scale ocean circulation, the accuracy of the analyses, and the accuracy of the subsequent forecasts. The use
of independent data in assessing analyses and forecasts has also been presented, as has an example of validation directed at a particular user need.

Various techniques which could be used to evaluate ocean forecasting systems have not been described in detail, for different reasons. For example, it is possible to produce a formal error estimate of the analysis using the Hessian of the cost function in variational data assimilation schemes. However, this is an expensive quantity to calculate, and the output of the calculation is dependent on the input error covariance information, which is usually not well known. For these reasons, it is not usually provided as an analysis error estimate. Similarly, for systems which run an ensemble of forecasts, the spread in the forecasts can be used to provide an estimate of the confidence which should be placed in the forecasts. The uncertainty in the initial conditions and in the processes and parameterisations which are modelled can be sampled, and the spread of the forecasts can then give statistical information on how much confidence should be placed in them in certain regions. However, the way in which the uncertainties in the system are sampled has a significant impact on the resulting forecast error estimates, and few operational ocean forecasting systems run an ensemble prediction system at present.

Inter-comparison with other ocean forecasting systems can also provide useful information about the skill of a particular ocean forecasting system, and insight into weaknesses that can easily be corrected. For more information on this subject, the reader is directed to the separate paper on inter-comparison methods.

The evaluation of ocean forecast products is an important aspect of all the GODAE systems, and is continually being improved. It is hoped that common verification statistics will be produced routinely by all the systems over the coming years, which will drive improvements to the systems themselves, and will also provide further insight into the most appropriate methods for their evaluation.

Acknowledgements The author would like to thank Joe Metzger, Nicolas Ferry and Peter Oke for their permission to reproduce results here. The author also gratefully acknowledges the FOAM team for input and useful discussions. The FOAM system was developed for the Royal Navy, and under the MERSEA and MyOcean Projects; partial support of the European Commission under Contracts SIP3-CT-2003-502885 and FP7-SPACE-2007-1 is gratefully acknowledged.
References

Atger F (1999) The skill of ensemble prediction systems. Mon Weather Rev 127:1941–1953
Brier GW (1950) Verification of forecasts expressed in terms of probability. Mon Weather Rev 78:1–3
Cummings JA (2005) Operational multivariate ocean data assimilation. Q J R Meteorol Soc 131:3583–3604
Ferry N, Rémy E, Brasseur P, Maes C (2007) The Mercator global ocean operational analysis system: assessment and validation of an 11-year reanalysis. J Mar Syst 65:540–560
Ingleby NB, Huddleston MR (2007) Quality control of ocean temperature and salinity profiles – historical and real-time data. J Mar Syst 65:158–175
Jolliff JK, Kindle JC, Shulman I, Penta B, Friedrichs MAM, Helber R, Arnone R (2009) Summary diagrams for coupled hydrodynamic-ecosystem model skill assessment. J Mar Syst 76:64–82
Locarnini RA, Mishonov AV, Antonov JI, Boyer TP, Garcia HE (2006) World Ocean Atlas 2005. In: Levitus S (ed) NOAA Atlas NESDIS 61. US Government Printing Office, Washington, p 182
Martin MJ, Hines A, Bell MJ (2007) Data assimilation in the FOAM operational short-range ocean forecasting system: a description of the scheme and its impact. Q J R Meteorol Soc 133:981–995
Maximenko NA, Niiler PP (2005) Hybrid decade-mean global sea level with mesoscale resolution. In: Saxena N (ed) Recent advances in marine science and technology, 2004. PACON International, Honolulu, pp 55–59
Metzger EJ, Hurlburt HE, Wallcraft AJ, Shriver JF, Smedstad LF, Smedstad OM, Thoppil P, Franklin DS (2008) Validation test report for the Global Ocean Prediction System V3.0—1/12° HYCOM/NCODA: Phase I. Memorandum report no. NRL/MR/7320-08-9148, Naval Research Laboratory, Oceanography Division, Stennis Space Center, MS 39529-5004
Metzger EJ, Hurlburt HE, Wallcraft AJ, Shriver JF, Townsend TL, Smedstad OM, Thoppil P, Franklin DS (2009) Validation test report for the Global Ocean Forecast System V3.0—1/12° HYCOM/NCODA: Phase II. Memorandum report no. NRL/MR/7320-09-9236, Naval Research Laboratory, Oceanography Division, Stennis Space Center, MS 39529-5004
Millero FJ, Li X (1994) Comments on "On equations for the speed of sound in seawater". J Acoust Soc Am 95:2757–2759
Murphy AH (1988) Skill scores based on the mean square error and their relationships to the correlation coefficient. Mon Weather Rev 116:2417–2424
Murphy AH (1995) The coefficients of correlation and determination as measures of performance in forecast verification. Weather Forecast 10:681–688
Oke PR, Brassington GB, Griffin DA, Schiller A (2008) The Bluelink Ocean Data Assimilation System (BODAS). Ocean Model 21:46–70
Rio MH, Schaeffer P, Hernandez F, Lemoine JM (2005) The estimation of the ocean Mean Dynamic Topography through the combination of altimetric data, in-situ measurements and GRACE geoid: from global to regional studies. Proceedings of the GOCINA international workshop, Luxembourg
Storkey D, Barciela RM, Blockley EW, Furner R, Guiavarc'h C, Hines A, Lea D, Martin MJ, Siddorn JR (2010) Forecasting the ocean state using NEMO: the new FOAM system. J Oper Oceanogr 3:3–15
Taylor KE (2001) Summarizing multiple aspects of model performance in a single diagram. J Geophys Res 106:7183–7192
Chapter 23
Performance of Ocean Forecasting Systems—Intercomparison Projects

Fabrice Hernandez
Abstract Ocean modelling groups and, more recently, ocean reanalysis and ocean forecasting systems perform scientific assessments in order to evaluate errors and accuracy, and to identify the main drawbacks and possible improvements. Intercomparison has been a way to achieve such assessment across several numerical experiments; it also provides a more robust approach to estimating the ocean state and forecasts. A historical overview of the ocean model validation work that led to intercomparison activities is given here. Intercomparison projects performed over the last two decades by the ocean modelling community are presented and discussed in terms of objectives and methodologies. Specific aspects of model, reanalysis and ocean forecast intercomparison are then detailed. Finally, a particular focus is placed on intercomparison studies performed in the framework of GODAE.
23.1 Introduction

For the past 15 years, the development of Ocean Forecasting Systems (OFS) has focused on providing a continuous and routinely updated description of the ocean physical parameters for the past (hindcast¹ and nowcast² products), as well as in prediction mode (forecast products).

1 Hindcast refers, in the ocean data assimilation community, to ocean estimates obtained from an assimilative run for which all observations are available, usually in delayed mode, for numerical simulations performed over a past period.
2 Nowcast refers, in the ocean data assimilation community, to ocean estimates obtained from an assimilative run in real time or near-real time, for which all possible observations are not yet available. These are the nominal "past estimates" provided by an operational system for the days preceding the forecast.
The principal physical parameters of interest are descriptions of the water masses (temperature and salinity), of the currents (in three dimensions), of the sea level, of the sea state, of near-surface properties (such as mixed layer depth and fronts) and of sea ice. Heat and momentum exchanges with the atmosphere are also of interest to meteorologists. More recently, by using coupled biogeochemical models, the ocean description has been extended to ecosystem parameters from low to high trophic levels.

Because of the sparseness of available ocean observations and the errors attached to numerical models, OFS development has sought to integrate the observational description with modelling approaches using assimilation methods. OFS are thus composed of numerical models of ocean dynamics, possibly coupled with sea-ice dynamics models and biogeochemical models, including forcing fields, together with ocean observation collecting systems and assimilation procedures. The performance³ of such a system depends on the robustness⁴, accuracy⁵ and reliability⁶ of these different components. This performance is therefore appreciated, from a user point of view, through the accuracy and usefulness of the ocean products delivered routinely by the OFS (hindcasts, nowcasts, forecasts) for their respective applications.

The OFS developed during the past years have first considered the ocean physical description. In many countries, local initiatives started to develop regional or coastal forecasting systems. In parallel, in the framework of GODAE (the Global Ocean Data Assimilation Experiment, see https://www.godae.org/), some groups and countries worked to propose basin-scale or global descriptions of the ocean dynamics. This second kind of forecasting system is discussed here. More specifically, the methodologies proposed to evaluate the performance of eddy-permitting to eddy-resolving systems are discussed, for which the diurnal cycle and ocean high frequencies are not considered. Most of these systems rely on primitive equation ocean models in which tidal dynamics are usually neglected (Dombrowsky et al. 2009). During recent years, these systems have benefitted from an ocean observability never reached before: satellite altimetry, together with the Argo, buoy and drifter programs, has strongly enhanced the mesoscale description since 2002 (Clark et al. 2009). This observability promoted the development of state-of-the-art assimilation tools and the implementation of mature multivariate methods (Cummings et al. 2009).

The performance of the GODAE systems can be degraded by several causes, listed for their different components in Table 23.1. The four components listed there correspond to different fields of ocean studies that have usually been studied separately.

3 Performance has the same meaning as in the title of this chapter, and is considered here in terms of the usefulness and efficiency, for users, of the ocean products provided by the OFS. In the framework of operational oceanography validation, a more specific definition is given later in these lecture notes.
4 Robustness (the quality of being able to withstand stresses, pressures, or changes in procedure or circumstance) is considered here in terms of the capacity of the OFS to provide consistent behaviour and results under similar circumstances.
5 Accuracy is considered here as the degree of closeness of the ocean estimates provided by the OFS to their actual true values. In the framework of operational oceanography validation, a more specific definition is given later in these lecture notes.
6 Reliability is considered here as the ability of the OFS to perform its required functions and provide ocean estimates under stated conditions while it is routinely operated.
Table 23.1 List of OFS component errors that reduce the performance and increase the errors of the ocean products

Ocean model: numerical errors; physical parametrizations and approximations (e.g. sub-grid parametrization); ocean processes not explicitly represented (e.g. tides, diurnal cycle, surface gravity waves); errors in the initial conditions
External inputs: errors in the forcing fields (atmospheric fluxes, river run-off errors); bathymetry errors; climatology errors; boundary condition errors
Observations: data accuracy level; data sparseness, aliasing effects
Assimilation method: level of robustness of the multivariate estimation/correction; mismatch with the data representativeness; analysis shock; level of consistency of variational techniques (linear tangent model) in highly non-linear flows
Thus, ocean modelling and assimilation developments are usually associated with validation studies to evaluate the strengths and drawbacks of new improvements. Most of the performance assessment methodologies applied in operational mode to OFS are derived from validation and evaluation techniques used separately by the research community on these components. For years, ocean modellers evaluated their numerical results solely by (1) internal checks, looking at the consistency of the ocean dynamics or at sensitivity studies to some parameters, and (2) external checks, through the comparison of model results to reference studies or existing observations. Intercomparison studies were then scheduled, following the example of the atmospheric modelling community.
23.2 First Intercomparison Experiments

The international Atmospheric Model Intercomparison Project (AMIP), in the framework of the World Climate Research Programme (WCRP), has provided guidance for the ocean modelling community. The aim of AMIP was to offer a comprehensive evaluation of the performance of atmospheric GCMs⁷ on climate and higher-frequency time-scales, and the documentation of their systematic errors. In a common modelling framework, that is, simulating the monthly variability of the atmospheric parameters for the decade of the 1990s, all climate modelling groups (more than 20 institutions around the world) provided their simulations in a standard way. Owing to the participation of all groups in building up the assessment methodology, and the sustained reporting on the evaluation of each experiment, AMIP has become the reference for atmospheric and climate performance assessment. An overview of AMIP is given in Gates (1992).

7 GCM: Global Circulation Model.
Free coupled ocean/atmosphere numerical simulations are compared (usually as monthly averaged parameters) against data (averaged in the same way), climatologies, or existing reference simulations, in particular the ECMWF⁸, NCEP⁹ or COADS¹⁰ reanalyses, which are considered to be more realistic because they benefit from assimilation. An ensemble approach was adopted: first, each numerical simulation was individually evaluated (RMS¹¹ and correlation against the compared references); then the ensemble mean and its standard deviation were also evaluated. Ensemble approaches expect that individual simulations will present errors that are not correlated. In practice this is not obviously true if the simulations are based on similar ocean models, similar forcings, etc. However, by multiplying the number of simulations in the intercomparison, the AMIP objective was clearly to obtain uncorrelated simulations. This approach is illustrated in Fig. 23.1, taken from Stammer et al. (2009)¹² in the framework of CLIVAR's GSOP¹³: models that are biased in a similar way lead to ensemble estimates that are also biased.

With the era of eddy-permitting model capacity, different ocean modelling groups started to organize intercomparison experiments. The US-German Community Modelling Effort (CME), in support of the World Ocean Circulation Experiment (WOCE), started to examine model parametrizations and sensitivity studies in modelling the North Atlantic basin (for a review, see Böning and Bryan 1996). The circulation and the eddy field were described in a limited way. Several causes were identified, among them the boundary conditions, the representation of water exchanges and topographically controlled flows, the overturning circulation and vertical mixing. This experiment was followed by the DYNAMO project, dedicated to an intercomparison among three classes of ocean models of the North Atlantic Ocean in a similar numerical experiment framework (Meincke et al. 2001). Forced identically, and configured over the same domain, primitive equation models with z-level, sigma-level and isopycnal vertical discretisations were run in similar ways. The objective was to identify patterns of the North Atlantic Ocean circulation that were robust, and others that were sensitive to model parametrisation. Thus, one objective aimed to increase our knowledge of the Atlantic Ocean dynamics; the second was to improve ocean models and share expertise among different modelling groups. The simulations were eddy-permitting (1/3° horizontal resolution). As far as possible, the model parametrisations (i.e. lateral and vertical mixing, bottom friction, mixed layer turbulence, bathymetry, boundary conditions) were tuned to be similar, and the initial conditions were provided by the Levitus climatology (for details, see Willebrand et al. 2001).

8 European Centre for Medium-Range Weather Forecasts.
9 United States National Centers for Environmental Prediction.
10 Comprehensive Ocean-Atmosphere Data Set.
11 RMS: root mean square.
12 This OceanObs'09 community white paper is available at http://www.oceanobs09.net/cwp/index.php.
13 Global Synthesis and Observations Panel (http://www.clivar.org/organization/gsop/synthesis/synthesis.php).
Fig. 23.1 From Fig. 8 of Stammer et al. (2009): evaluating a model quantity from a multi-model ensemble of results. The arrows illustrate the general expectation that assimilation of observations moves the results closer to the truth. The left panel shows the ideal situation, in which the ensemble spread and the distance to the ensemble mean provide useful measures, while the right panel illustrates a biased case that is more realistic for the ensemble of present-day syntheses
After a spin-up of 15 years, the last 5 years of the monthly-mean climatological forcing were analysed following a protocol that is still used today for "consistency assessment" (explained later in this chapter):

• Analysis of the meridional overturning circulation, which reflects the thermohaline circulation (mean annual values). Differences were analysed in terms of the deep flow and outflow/overflow representations, as well as diapycnal mixing effects.
• Analysis of the overturning transport at 25°N, which also reflects the thermohaline circulation (mean annual values). Seasonal variations were also assessed in a specific study (Böning et al. 2001). At this latitude, the western boundary current as well as the return circulations of the subtropical gyre are captured. Note that a particular effort has been made by the international community to maintain a sustained observation network of the flow across the Atlantic at that latitude: the RAPID array is a sustained programme that has provided data since 2004¹⁴ (Cunningham et al. 2007).

14 The RAPID array uses standard observational techniques (moored instruments that measure conductivity, temperature and pressure, as well as bottom pressure recorders) to measure density and pressure gradients across the North Atlantic, from which one can readily calculate the basin overturning circulation and heat transport.
Fig. 23.2 Meridional heat transport in the North Atlantic Ocean from the DYNAMO intercomparison (full line = LEVEL, dashed = ISOPYCNIC, dotted = SIGMA, dash-dotted = SIGMA-2). Values and error bars are from Macdonald and Wunsch (1996). (Taken from Fig. 9 of Willebrand et al. 2001)
• Analysis of the mean meridional heat transport, which reflects heat flux exchanges in a climatological sense. Figure 23.2, taken from Willebrand et al. (2001), shows that the models underestimate the transport south of 20°N compared to hydrographic data analyses, and that the level and sigma models look less efficient in representing the transport in the subtropical gyre, due to their weaker representation of the Meridional Overturning Circulation (MOC).
• Analysis of the mean surface circulation, associated with the mean geostrophic flow. Currents at the surface and at different depths are studied, as well as vertically integrated transports. The representations of the Gulf Stream (transport across the Florida Strait, separation at Cape Hatteras, the North West Corner flow), the North Atlantic Current and the Azores Current are particularly discussed for the subtropical gyre (New et al. 2001b). A dedicated study was performed for tropical currents in the western basin (South Atlantic Current, North Brazil Current, retroflection and North Atlantic Counter Current, eddies propagating into the Caribbean current system) for the mean and the seasonal variations (Barnier et al. 2001).
• Analysis of the eddy field and its variability, associated with baroclinic and barotropic instabilities. The sea surface variability, as well as the eddy kinetic energy, can be compared to equivalent values from satellite altimetry (Stammer et al. 2001).
• Analysis of the circulation at depth: pathways of the Mediterranean Waters, which impact the thermohaline circulation in the North Atlantic Ocean (New et al. 2001a).
23.3 Evaluation and Intercomparison of Ocean Reanalysis

With the availability of satellite altimetry in near real time since the launch of ERS-1 (1991) and TOPEX/Poseidon (1992), assimilation techniques have been developed in order to provide more realistic descriptions of the ocean dynamics with ocean models. A first approach is to carry out reanalysis experiments, where the models and assimilation are tuned to provide the best description of the ocean circulation in the past. Usually, the set of observations selected for assimilation is processed to remove possible biases and to take into account differences between the different types of observations. The set of forcing fields is also prepared in order to minimize errors and long-term trend effects; forcing estimates might merge observed and modelled parameters. During the experiment, successive intermediate runs might be performed in order to reduce errors identified in the meantime. And because state-of-the-art ocean models are used, ocean reanalyses offer the most accurate description of the ocean for a given set of "components" (i.e. choice of model and configuration, choice of observations and assimilation methodology). In fact, historically in the ocean community, the intercomparisons of ocean reanalyses have been the first whose objective was a comparison to the ocean truth.

In the framework of GODAE and CLIVAR, the GSOP project aimed to intercompare different reanalyses computed over one to several decades (Fig. 23.3), one of the goals being to offer syntheses of ocean state estimation for climate research (Lee et al. 2009a, b; Stammer et al. 2009). The idea is that multi-model ensemble approaches can be useful to obtain better estimates of the ocean. In practice, the GSOP objectives are (1) to assess the consistency of the syntheses through intercomparison; (2) to evaluate the accuracy of the products, possibly by comparison to observations; (3) to estimate uncertainties; (4) to identify areas where improvements are needed; (5) to evaluate the lack of data that directly impacts the syntheses, and propose future observational requirements; and (6) to work on new approaches, like coupled data assimilation.

Another use of ocean reanalyses is to provide initial conditions for seasonal and climate forecasts. This is a much more "close-to-real-time-operation" application. The idea is to offer, for the present time or for a few weeks before, the best possible ocean description together with its error estimates, in order to start coupled ocean/atmosphere forecasts for seasonal prediction (Balmaseda et al. 2009).
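The multi-model ensemble quantities discussed here (the ensemble mean and spread) are straightforward to compute once the syntheses are on a common grid; the minimal sketch below assumes a stacked array of member estimates and does not address the correlated-error problem illustrated in Fig. 23.1.

```python
import numpy as np

def ensemble_mean_and_spread(members):
    """Ensemble mean and spread (standard deviation about the ensemble
    mean) for a set of reanalysis estimates of the same quantity.

    members : array of shape (nmember, ...); the trailing dimensions can
    be time, latitude, longitude, etc."""
    members = np.asarray(members, float)
    mean = members.mean(axis=0)
    spread = members.std(axis=0, ddof=1)
    return mean, spread
```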
Fig. 23.3 From Stammer et al. (2009), Fig. 1, summarising the reanalyses taken into account by GSOP, sorted by forcing fields (green), type of ocean model (orange), assimilation methodology (pink) and resolution (shades of blue)
For these two uses of reanalyses in ocean synthesis, the errors listed in Table 23.1 remain relevant. One of the conclusions of the GSOP is that the full use of multi-model ensemble assessment requires detailed error information not only about the data and the models, but also about the estimated states. Figure 23.3 illustrates that ocean estimates tend to cluster around methodologies and may therefore not be independent of each other (see the discussion in Stammer et al. 2009).

An important aspect of reanalysis accuracy, on which intercomparison has to focus, is its dependence on the data available for assimilation in the past. Many ocean reanalyses start in the 1950s, when the atmospheric reanalyses (NCEP and ECMWF ERA40) become available. Until 1978, when the first satellite radiometer provided Sea Surface Temperature (SST) with global coverage, reanalyses could rely only on in-situ observations that clearly under-sample the ocean. As mentioned above, ocean observability was strengthened with satellite altimetry in the 1990s, and since 2002 the Argo array has radically changed the observability of the ocean interior (e.g., Roemmich and Argo-Science-Team 2009). Note that the accuracy of the atmospheric forcing has also improved with satellite observations (radiometers for SST, heat content and exchanges in the atmosphere, and scatterometers for wind estimates). This lack of data in the past makes any rigorous analysis of interannual and decadal ocean variability difficult.
Another accuracy aspect is linked to multi-data assimilation approaches. Nowadays, most assimilation methods use multivariate schemes that correct their background15 fields using information from temperature and salinity profiles, altimeter sea level measurements, and SST from satellites and in-situ observations. Some also take into account satellite sea-ice data, satellite gradiometry, and current measurements deduced from current meters or drifters. In these schemes, every observation affects the model parameters: a temperature observation should correct not only the temperature but also the salinity field and the sea level, and vice versa. This means that accuracy and intercomparison assessments have to consider carefully the relationships between the corrected ocean parameters and the observation errors within the framework of each forecasting system. Moreover, the "representativeness" of the data has to be taken into account in the assimilation scheme: coarse-resolution models (e.g., 2° horizontal resolution) clearly cannot reproduce ocean fronts and water mass distributions as observed by gliders on scales of a few kilometres.

The GSOP activity highlights most of these difficulties. A large number of studies have been performed using the reanalyses, among them sea level variability, water mass pathways, variability of upper-ocean and mixed layer heat content, surface flux and run-off estimation, biogeochemistry and geodesy (see Lee et al. 2009a for more details). Note that most of these topics are similar to those studied with free simulations (as mentioned above). In particular the MOC, which regulates the meridional heat transport that affects climate variability, was the subject of several analyses. Figure 23.4 provides a synthesis for the North Atlantic meridional heat transport. Compared with Fig. 23.2, one can notice that some reanalyses provide estimates closer to the hydrography (Ganachaud and Wunsch 2000) in the subtropical gyre. This means that since the DYNAMO project, models combined with data assimilation have succeeded in improving the representation of the ocean general circulation. However, the spread of the six estimates in Fig. 23.4 is larger than the error bars of Ganachaud and Wunsch (2000). Moreover, the four reanalyses based on ECCO lie similarly below the reference, revealing correlated errors in the ECCO systems that will strongly affect an ensemble mean.

Figure 23.5 illustrates the difficulty of providing a robust evaluation of upper-ocean heat content over more than 50 years. As mentioned earlier, the spread before the 1970s appears to be associated with a lack of in-situ data. The ensemble standard deviation is reduced in the 1990s; however, since 2000 the spread has increased again, which clearly raises the question of outliers with respect to the mean. Here, independent estimates should be used to evaluate reanalysis error levels. Nevertheless, one can note a general tendency in all the time series: there is a clear warming of the upper ocean since the 1990s. A minimal sketch of the ensemble statistics used in such comparisons is given below.

The GSOP effort will continue in the future. Multi-model assessment and the ensemble mean approach have been identified as the only way to provide reliable ocean estimates.

15 In the framework of assimilation, the background is the state of the ocean model prior to any correction by the assimilation method.
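As a concrete illustration of the ensemble diagnostics quoted with Fig. 23.5 (ensemble mean, ensemble standard deviation, signal-to-noise ratio, spread), the sketch below computes them for a set of reanalysis time series. The exact definitions used in the figure are not given in the text, so the ones coded here are plausible but assumed; the inputs are a simple synthetic array, not the actual GSOP products.

```python
import numpy as np

def ensemble_statistics(series):
    """Basic multi-model ensemble diagnostics.

    series : array of shape (n_members, n_times) holding, e.g., seasonal
             anomalies of 0-300 m averaged temperature from each reanalysis.
    Returns the ensemble-mean time series plus scalar summary statistics.
    """
    ens_mean = np.nanmean(series, axis=0)            # ensemble mean at each time
    spread = np.nanstd(series, axis=0)               # inter-member spread at each time
    sdv_ensm = np.nanstd(ens_mean)                   # std dev of the ensemble-mean series
    signal_to_noise = sdv_ensm / np.nanmean(spread)  # crude signal-to-noise measure
    return ens_mean, {
        "sdv_ensm": sdv_ensm,
        "mean_spread": float(np.nanmean(spread)),
        "s_to_n": float(signal_to_noise),
    }

# Example with three synthetic members over 20 time steps:
rng = np.random.default_rng(0)
signal = 0.3 * np.sin(np.linspace(0, 4 * np.pi, 20))
members = signal + 0.1 * rng.standard_normal((3, 20))
mean_series, stats = ensemble_statistics(members)
print(stats)
```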
Fig. 23.4 North Atlantic meridional heat transport (PW) as a function of latitude, from Armin Koehl's GSOP presentation at the CLIVAR/GODAE meeting on ocean synthesis evaluation, held at ECMWF, UK, in August 2006 (http://www.clivar.org/data/synthesis/intercomparison.php). The six estimates shown are ECCO-SIO, ECCO-JPL, GFDL, ECCO-50y, ECCO-GODAE and INGV; points and error bars correspond to the estimates of Ganachaud and Wunsch (2000)
This means that (a) intercomparison will still be used to evaluate discrepancies, and (b) effort is needed to characterise the uncertainties of each system. Data assimilation techniques should provide more robust control on the analyses16 and innovations17 (see the sketch following the footnotes below). In parallel, the ocean modelling community is still working on improvements (see Griffies et al. 2009 for a review). Moreover, work is still needed to reduce biases and produce consistent historical datasets, and also to measure clearly the impact of data type and availability on uncertainties (Heimbach et al. 2009). The scientific assessment of these reanalyses will proceed in a similar way: the main goal is still to characterise and understand the medium- and large-scale ocean patterns prior to any further analysis, which means that the same ocean quantities analysed during the CME or DYNAMO experiments will be evaluated first.
16 In the assimilation framework, the analysis is the production of an accurate image of the true state of the ocean at a given time, represented in a model as a collection of numbers. An analysis can be useful in itself as a comprehensive and self-consistent diagnostic of the ocean. It can also be used as input to another operation, notably as the initial state for a numerical ocean forecast, or as a data retrieval to be used as a pseudo-observation.
17 The innovation is the discrepancy between the observations and the ocean model state, i.e., the vector of departures at the observation points.
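To illustrate footnote 17, the sketch below computes innovations (observation minus model background at the observation points) and the simple summary statistics, bias and RMS, that are commonly monitored in assimilation systems. The variable names and data layout are assumptions made for the example; they do not describe any particular GODAE system.

```python
import numpy as np

def innovations(obs_values, model_at_obs):
    """Innovation vector d = y_obs - H(x_b).

    obs_values   : observed values at the observation points
    model_at_obs : background (model) values interpolated to the same points
    """
    d = np.asarray(obs_values, float) - np.asarray(model_at_obs, float)
    stats = {
        "n": d.size,
        "bias": float(np.mean(d)),             # mean departure
        "rms": float(np.sqrt(np.mean(d**2))),  # RMS departure
    }
    return d, stats

# Example with a handful of synthetic temperature observations (deg C):
obs = [18.2, 17.9, 18.4, 17.5]
background = [18.0, 18.1, 18.0, 17.8]
d, stats = innovations(obs, background)
print(d, stats)
```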
[Figure 23.5 panel: "12m-rm seasonal anom: NATL", temperature averaged over the top 300 m, plotted against time from 1950 to the 2000s; quoted statistics: sdv ensm = 0.164, s/n ensm = 1.620; sdv all = 0.206, s/n all = 2.028; spread = 0.101]
Fig. 23.5 Seasonal anomalies of temperature integrated over 0–300 m in the North Atlantic Ocean. Figure from Balmaseda and Weaver, GSOP presentation at the CLIVAR/GODAE meeting on ocean synthesis evaluation, held at ECMWF, UK, in August 2006 (http://www.clivar.org/data/synthesis/intercomparison.php). A colour code identifies each reanalysis. The grey shaded area corresponds to the ensemble-mean standard deviation
23.4 Intercomparison and Evaluation of Operational Ocean Forecasting Systems

23.4.1 Development of Operational Ocean Forecasting System Evaluation

The second use of data assimilation with ocean modelling has been dedicated to short-term ocean prediction. The development of operational oceanographic centres is also related to the availability of satellite data. In the late 1990s, several groups had already proposed multivariate assimilation schemes enhancing the capabilities of ocean models, based either on quasi-geostrophic or on primitive equation formulations (see Dombrowsky et al. 2009 for a brief historical introduction). In the framework of GODAE, the main development of these groups focused on OFS providing daily hindcast, nowcast and short-term forecast18 estimates of the ocean dynamics at mesoscale.

18 Short-term ocean prediction: between 5 days and 2 weeks.
That is, a description, at length and time scales larger than 10 km and one day, of the density field and water mass changes, of the currents (from surface Ekman currents to western boundary currents), and of their transient expressions as fronts, meanders, waves and eddy-like propagating features, from the surface to depth. The objectives and potential applications of such OFS have been discussed at length within the terms of reference of GODAE (see Bell et al. 2009 for more details and references). Among them one can mention the description of the ocean circulation for synoptic to interannual studies, short-term prediction for maritime safety (e.g., oil spill prediction, search and rescue activities), for water quality (by coupling with biogeochemical models, e.g., algal bloom detection), for defence applications (usually associated with acoustic modelling), or for fish stock assessment when coupled with suitable ecosystem and higher-trophic-level models.

The evaluation methodology for OFS initially followed the path proposed by the modelling community, but it had to take into account constraints that do not normally arise when performing model validation in academic projects.

First, it proceeds by evaluating the assimilation scheme and its efficiency in providing accurate ocean analyses19; that is, it focuses more on accuracy than on overall quality. In other words, where a certain level of quality is sought in pure modelling research (e.g., is there deep convection and is Labrador Sea Water formed? a Gulf Stream overshoot? an acceptable meridional heat transport and Meridional Overturning Circulation?), assimilation experiments are tested on "realistic representation", with reference datasets used to quantify error levels directly. A comprehensive error budget is also required for data assimilation results to be properly assessed: assimilation schemes are more or less guided by background20 and observation errors, and the most sophisticated schemes provide robust analysis21 and forecast error estimates (Brasseur 2006; Cummings et al. 2009). It is then necessary to verify the model error assumptions against dedicated error validation procedures.

Second, it must take into account, and measure the impact of, real-time constraints: the lack of data (observations that are not yet available during the assimilation time window) and/or their lower quality, compared with the reanalysis framework, where data are usually complete, fully quality-controlled and corrected. Note also that in real-time operations the forcing fields provided by weather forecast or atmospheric models may be less precise.

Third, it focuses more specifically on the scientific assessment of forecast products, that is, the evaluation of the performance and the predictability of the OFS. Performance is considered here not in its general sense but, more precisely, as the benefit of using an ocean prediction model together with an assimilation methodology that corrects the ocean estimates produced by the OFS. Performance is thus a measure of the usefulness of these different components for users' interests and applications: to predict the ocean currents for next week, why not just use a climatology? Why not apply a persistence approach, assuming that the ocean state next week is, to a good approximation, the same as the estimate computed today?

19 See footnote 16.
20 See footnote 15.
21 See footnote 16.
In both cases, what is the added value of sophisticated tools such as assimilation schemes and ocean models compared with a climatology or a persistence approach? In practice, the idea is to evaluate forecast errors against climatology and persistence errors, together with the accuracy of the analysis (i.e., the efficiency of the assimilation scheme); a minimal sketch of such a skill comparison is given in Sect. 23.4.2 below.

Constraints also appear on technical and engineering aspects. Assessments have to be performed in real time, within practical operational constraints such as computing resources, storage capacity and the availability of reference values. This means that the dataflow has to be monitored, since a lack of input data for any technical reason will directly degrade the quality of the ocean estimates. In addition, outputs from operational systems may be used for user-oriented applications (e.g., water quality, marine safety or other societal uses), so the performance assessment methodology mentioned above has to rely on user requirements. Different applications may require different levels of accuracy: for instance, the accuracy of surface current forecasts needed to support search and rescue activities may not be achievable by the operational systems, while the same ocean model may be perfectly satisfactory for a more general ocean study, and a climatology may be sufficient for some applications (e.g., a tourist brochure).

For all these reasons, operational oceanography teams using different model configurations and data assimilation methods have developed tools for assessing the quality of their outputs, in order to provide "error bars" to users. Thanks to GODAE, these initiatives could be shared at the international level. An overview of OFS validation is given in the lecture of Martin (2011) from the same summer school.

In this context, a common interest in intercomparison and collaboration on validation methods soon appeared among the different groups developing OFS. In the framework of the MERSEA Strand1 European Union (EU) project (2003–2004), a first attempt was made to intercompare eddy-permitting, basin-scale ocean data-assimilating systems. Hindcasts from the different systems were intercompared using climatology and historical high-quality ocean datasets, such as WOCE sections (Crosnier et al. 2006). This validation methodology was enhanced during the EU MERSEA Integrated Project (2004–2008, see http://www.mersea.eu.org) in several respects: (1) perform the validation routinely, and thereby stimulate data processing and archiving centres to provide observations in real time; (2) apply diagnostics that offer a robust scientific evaluation of each system, selecting the most suitable diagnostics among those applied in research mode; (3) evaluate both operational system performance and product quality, taking into account user requirements (usually from short-term to seasonal timescale applications); (4) push for consistency of assessment among the different forecasting centres, applying similar diagnostics to the different systems and thus strengthening the overall assessment activity through central team expertise; and (5) use this consistency to allow intercomparison of the operational systems, and thus design and implement a technical architecture that allows robust exchanges, interconnections and interoperability between these systems.
This is a milestone for implementing, in a consistent way, interoperable activities such as ensemble forecasting. In the framework of GODAE, and building on these advances in OFS scientific assessment, a dedicated intercomparison exercise was decided, prepared and carried out at the beginning of 2008 (Hernandez et al. 2009). Some of the results are highlighted below.
23.4.2 Validation and Intercomparison Methodology

The assessment methodology ultimately used for the GODAE intercomparison project is a direct heritage of the validation activity performed earlier in the framework of operational oceanography projects. It is based on two aspects (Crosnier and Le Provost 2007).

First, "the philosophy": a set of basic principles for assessing the quality of OFS products and systems through a collaborative partnership:

• Consistency: verifying that the system outputs are consistent with the current knowledge of the ocean circulation and with climatologies.
• Quality (or accuracy of the hindcast/nowcast): quantifying the differences between the system's "best results" (the analysis) and the sea truth, as estimated from observations, preferably independent (non-assimilated) ones.
• Performance (or accuracy of the forecast): quantifying the short-term forecast capacity of each system, i.e., answering the question "does the forecasting system perform better than persistence and better than climatology?"
• Benefit: end-user assessment of the quality level that has to be reached before the products are useful for an application.

Second, "the methodology": a set of sharable tools for computing diagnostics, and a set of sharable standards against which product quality is assessed. Both tools and standards should be subject to upgrades and improvements in an operational framework. The methodology is built on "metrics": mathematical tools that compute scalar measures from system outputs, compared with "references" (climatology, observations, etc.). The metrics provide equivalent quantities extracted from the different systems for the same geographic locations. Applied to different forecasting systems, they provide homogeneous and consistent sets of quantities that can be compared without depending on the specific configuration of each OFS (horizontal resolution, vertical discretisation, etc.). "Share-ability" is mandatory, and allows each forecasting centre to perform intercomparison and validation independently, using results from the other centres. Metrics are computed in a standardised way: the NetCDF file format following the COARDS/CF conventions is used, allowing time aggregation, easy and flexible manipulation, and self-consistent metadata representation. Distribution relies on internet communication protocols, basically FTP; however, more user-friendly technologies based on OPeNDAP servers, which can be visualised through a Live Access Server (LAS), through Dynamic Quick View portals or with similar clients, have now been widely adopted (Blower et al. 2008).
Fig. 23.6 Summary of Class 2/3 metrics. All existing and available moorings, tide gauges, XBT lines, WOCE/CLIVAR lines and others have been selected in order to define virtual sections and mooring points implemented in the ocean models
In practice, these technologies allow each forecasting centre to compute a considerable number of diagnostics from data stored on the local servers of the other centres. The total set of validation data does not need to be centralised, which would require large storage capacities; instead, for a given diagnostic, one can gather the relevant information spread across the different centres.

Metrics are defined in four types, or "classes" (see Hernandez et al. 2008 for more details):

• Class 1 metrics, i.e., 3D standardised grids of temperature, salinity, currents, mixed layer depth, sea-ice quantities and fluxes, can be compared directly with climatologies, and at the surface with satellite observations (e.g., SLA, SST or ice concentration). By using similar Class 1 grids, several OFS can intercompare their ocean estimates against a given reference dataset (an example is provided in Fig. 23.8 in the next section).
• Class 2 metrics (virtual moorings and sections) are designed to match the locations of existing in-situ datasets, as shown in Fig. 23.6. Each time observations are provided (e.g., an XBT section from a merchant ship), the Class 2 diagnostic can be performed routinely and the model variable compared with the "ground truth". Figure 23.7 illustrates the use of a Class 2 diagnostic for intercomparison between five systems in the Gulf of Cadiz. Compared with older WOCE hydrographic transects, it also allows a consistency assessment. Finally, it helps to assess the improvements between two generations of Mercator systems.
• Class 3 metrics concern derived quantities, such as ocean transports, heat content and the thermohaline circulation.
Fig. 23.7 Intercomparison of several ocean forecasting systems (Mercator1, TOPAZ, FOAM, HYCOM) during the European project MERSEA Strand1, through a Class 2 salinity section averaged over September 2003 in the Gulf of Cadiz. Two WOCE lines are used as the reference dataset. A further comparison was carried out when a new version of the Mercator system was developed (Mercator2)
• Finally, to get closer to the data, for both hindcasts and forecasts, Class 4 metrics were designed to build a dataset of "model values equivalent to observations" for all OFS outputs: hindcast, nowcast and forecast. The forecasting skill of an OFS can thus be evaluated objectively. Class 4 diagnostics have been implemented in several centres for temperature and salinity (observations from the Coriolis in-situ data centre), sea-ice concentration (maps from OSI-SAF22), sea level (satellite altimetry from AVISO23) and currents (from the Global Drifter Program). For all these diagnostics, particular attention is paid to using independent observations, i.e., preferably not assimilated: ideally, tide gauge data for sea level instead of the satellite altimetry assimilated in most OFS, or drifter or ADCP24 velocities for currents. Table 23.2 summarises the list of ocean and sea-ice parameters that can be evaluated with Class 4 metrics, and the corresponding datasets. A minimal sketch of the Class 4 matchup step is given after Table 23.2.

22 Ocean & Sea Ice Satellite Application Facility, see http://www.osi-saf.org/.
23 See http://www.aviso.oceanobs.com/.
24 Acoustic Doppler Current Profiler.
Table 23.2 Ocean and sea-ice physical quantities, and corresponding available observations for validation in real time (RT) or delayed mode (DM)

Data type | Measurement
In-situ temperature | CTD (DM), XBT (RT), buoy (RT), mooring (RT/DM), TSG (DM), deep float (RT), glider (RT/DM)
In-situ salinity | CTD (DM), XCTD (DM), buoy (RT/DM), mooring (RT/DM), TSG (DM), deep float (RT), glider (RT/DM)
Sea surface temperature | Satellite radiometer/radar (RT), TSG (DM), buoy (RT), mooring (RT/DM)
Sea surface salinity | TSG (DM), buoy (RT), mooring (RT/DM), [SMOS, Aquarius] (RT expected)
Horizontal currents | Drifters (RT), current meter (DM), ADCP (DM), satellite altimeter (RT), SAR (DM), high frequency radar (DM), derived from SST (DM), derived from deep float displacement (DM)
Sea level | Tide gauges (RT), satellite altimeter (RT), GPS (to be tested)
Ocean colour | Satellite imagery (RT/DM)
Sea ice concentration, drift | Satellite (RT)

CTD conductivity temperature depth, XBT expendable bathythermograph, TSG thermosalinograph, XCTD expendable conductivity temperature depth
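The following sketch illustrates the Class 4 idea of building "model values equivalent to observations": for each observation, the model field valid at (or near) the observation time is interpolated to the observation position. The nearest-neighbour interpolation, the regular grid and the variable names are simplifying assumptions made for this example; operational implementations follow the matchup conventions agreed between the centres.

```python
import numpy as np

def class4_matchup(model_field, grid_lat, grid_lon, obs_lat, obs_lon, obs_val):
    """Build model-equivalent values at observation points (nearest neighbour).

    model_field : 2-D model field on a regular grid, shape (nlat, nlon)
    grid_lat    : 1-D latitudes of the grid rows
    grid_lon    : 1-D longitudes of the grid columns
    obs_lat, obs_lon, obs_val : 1-D arrays describing the observations
    Returns arrays of observed and model-equivalent values.
    """
    obs_lat = np.asarray(obs_lat)
    obs_lon = np.asarray(obs_lon)
    i = np.abs(grid_lat[:, None] - obs_lat[None, :]).argmin(axis=0)
    j = np.abs(grid_lon[:, None] - obs_lon[None, :]).argmin(axis=0)
    model_equiv = model_field[i, j]
    return np.asarray(obs_val), model_equiv

# Example: a 3x4 SST field and two observations
sst = np.array([[18.0, 18.2, 18.4, 18.6],
                [17.0, 17.2, 17.4, 17.6],
                [16.0, 16.2, 16.4, 16.6]])
lats = np.array([-30.0, -32.0, -34.0])
lons = np.array([150.0, 152.0, 154.0, 156.0])
obs, mod = class4_matchup(sst, lats, lons,
                          obs_lat=[-31.1, -33.9], obs_lon=[151.2, 155.8],
                          obs_val=[17.5, 16.4])
print(obs - mod)   # departures used later for skill statistics
```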
From Class 1, 2 and 3 metrics, the consistency and quality of each system can be deduced or intercompared. For instance, daily sections from an operational run can be compared routinely with a Class 2 historical section, as illustrated in Fig. 23.7: in this case, the overall realism of the water mass distribution is verified against two historical WOCE lines, e.g., one expects the salinity signature of the Mediterranean waters to appear at the proper depth. A system's performance, in the sense defined above, can be addressed using Class 4 metrics, as sketched below. The "benefit" could also be addressed using a set of Class 1, 2, 3 and 4 metrics; however, new "user-oriented" metrics might need to be defined to address it fully.
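As a complement to the matchup step above, the sketch below compares forecast errors with persistence and climatology errors at the same observation points, which is the performance question posed in Sect. 23.4.1 ("does the forecasting system perform better than persistence and better than climatology?"). The RMSE-based skill score used here is a common but assumed choice; the text does not prescribe a specific score.

```python
import numpy as np

def rmse(x, y):
    """Root-mean-square error between two matched sets of values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sqrt(np.mean((x - y) ** 2)))

def forecast_performance(obs, forecast, persistence, climatology):
    """Compare a forecast with persistence and climatology baselines.

    All inputs are model-equivalent (Class 4 style) values at the same
    observation points. Skill > 0 means the forecast beats the baseline.
    """
    e_fc = rmse(obs, forecast)
    e_per = rmse(obs, persistence)
    e_clim = rmse(obs, climatology)
    return {
        "rmse_forecast": e_fc,
        "rmse_persistence": e_per,
        "rmse_climatology": e_clim,
        "skill_vs_persistence": 1.0 - e_fc / e_per,
        "skill_vs_climatology": 1.0 - e_fc / e_clim,
    }

# Example with synthetic SST values (deg C) at four observation points:
obs = [18.1, 17.6, 18.9, 17.2]
print(forecast_performance(obs,
                           forecast=[18.0, 17.8, 18.6, 17.3],
                           persistence=[17.7, 18.1, 18.2, 17.8],
                           climatology=[18.5, 18.5, 18.5, 18.5]))
```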
23.4.3 The GODAE Intercomparison Project

The GODAE Intercomparison Project has recently allowed intercomparison and accuracy and consistency assessments to be performed. The objectives of the project were to (a) demonstrate the GODAE operational systems in operation; (b) share expertise and design validation tools and metrics endorsed by all GODAE operational centres; and (c) evaluate the overall scientific quality of the different GODAE operational systems (results are summarised in Hernandez et al. 2009).

The project involved the majority of operational centres worldwide delivering daily ocean products: the BLUElink (Australia), HYCOM (USA), MOVE/MRI.COM (Japan), Mercator (France), FOAM (United Kingdom), C-NOOFS (Canada) and TOPAZ (Norway) systems (Dombrowsky et al. 2009; Hurlburt et al. 2009). It brings together a diversity of ocean models (four types): global or regional; based on
different vertical discretisations; eddy-permitting to eddy-resolving; coupled or not with sea-ice models; and using different types of air–sea flux modelling. It also brings together a diversity of assimilation techniques: not all using the same kinds of observations; proceeding with weekly or daily analyses or updates; based on sequential or variational approaches; based on single or ensemble analyses and predictions; and applying or not "close to data" schemes such as First Guess At Appropriate Time (FGAT) and Incremental Analysis Update (IAU) techniques (Bloom et al. 1996).

It was initially decided to analyse the operational outputs of the different OFS in the same way, with February, March and April 2008 as the selected period. In practice, not all outputs could be provided in real time, and the scientific evaluation was performed with a delay of a few months. A series of observations and reference datasets was used to assess the accuracy and consistency of the ocean products, using Class 1 and Class 2 metrics:

• Weekly maps of Sea Surface Height (SSH) or Sea Level Anomalies (SLA) from AVISO satellite altimetry25.
• Weekly maps of surface currents derived from satellite altimetry (Larnicol et al. 2006).
• The Levitus WOA 2005 climatology (Antonov et al. 2006; Locarnini et al. 2006).
• The Mixed Layer Depth climatology (D'Ortenzio et al. 2005; de Boyer Montégut et al. 2004, 2007).
• Daily satellite sea-ice concentration from OSI-SAF26.
• OSTIA GHRSST SST products (Donlon et al. 2009).
• In-situ temperature and salinity, provided by CORIOLIS.

In practice, all groups were able to contribute to the intercomparison. Specific studies were carried out in the North, South and tropical Atlantic, the western North Pacific, the tropical Pacific, and the Indonesian Seas. All groups had access to all outputs and reference datasets. SST consistency and accuracy were verified against the OSTIA maps. Water mass consistency was evaluated using the WOA 2005 climatology, and Mixed Layer Depth consistency against its climatology. Sea level and the mean circulation were assessed against satellite altimetric maps, and the mean and eddy kinetic energy were compared at the surface with the SURCOUF maps (Larnicol et al. 2006).

Three months is rather short for inferring the circulation patterns analysed in DYNAMO or in the reanalysis projects. However, the consistency assessment allowed verification of whether the "mean" circulation was as expected. For instance, in Fig. 23.8 the current analysis in the North Atlantic shows the consistency of the subtropical and subpolar gyre circulation. One can note that the Azores Current appears in some systems, that the Gulf Stream extension does not spread in the same way, or that the Labrador and East Greenland currents are more or less intense. The use of the SURCOUF data made it possible to assess the quality of the different outputs: the eddy kinetic energy can be computed and accuracy numbers given.
25 See footnote 23.
26 See footnote 22.
Fig. 23.8 From Hernandez et al. (2008). Averaged eddy kinetic energy (m2/s2) from February to April 2008 for: TOPAZ (top-left), HYCOM (middle-left), C-NOOFS (middle-bottom-left), FOAM (top-right), Mercator high-resolution (middle-right), Mercator global (middle-bottom-right) and the observed SURCOUF product (bottom-left). Bottom-right: time series of eddy kinetic energy box-averaged over a limited area around the Gulf Stream (80–60°W, 30–42°N)
However, the high-resolution systems seem to produce more energetic features than SURCOUF, and one may suspect that the SURCOUF currents are smoother than, for example, those of HYCOM. This indicates that reference datasets also have to be treated with caution. A similar limitation appeared when using OSTIA SST: the OSTIA maps can be dubious where satellite data are lacking. Thanks to the error estimates provided with the OSTIA SST values, the intercomparison could focus on reliable areas. A full overview of this first intercomparison experiment is given in Hernandez et al. (2008, 2009). A minimal sketch of the eddy kinetic energy diagnostic used in Fig. 23.8 is given below.

This first international intercomparison of OFS was limited to a short period and a small set of ocean parameters. The impact of the forcing fields was not studied, nor were the time-varying aspects of ocean features (eddy propagation, waves, …) or sea ice. The analysis was also limited to hindcasts: forecast and performance metrics could only be assessed in a limited way. This initiative should continue in the framework of the GODAE OceanView project; more reference datasets will soon be made available in real time, and the methodology and metrics have now been adopted by most groups. The experiment has shown that intercomparison and evaluation of OFS can be performed in any part of the ocean. The three-month period allowed the consistency and accuracy of the OFS to be addressed for that season, and the performance of the systems, with regard to their particular characteristics (resolution, model approximations, assimilation method, …), started to become evident. The next step is to carry the intercomparison forward in terms of multi-model ensemble assessment.
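The sketch below computes eddy kinetic energy from surface current fields as EKE = 0.5 (u'^2 + v'^2), where the primes denote deviations from the time mean over the comparison period, and then box-averages it over a region such as the Gulf Stream box of Fig. 23.8. The anomaly definition and the simple unweighted box average are assumptions made for this example; published diagnostics may use different averaging choices.

```python
import numpy as np

def eddy_kinetic_energy(u, v):
    """EKE (m^2 s^-2) from surface current time series.

    u, v : arrays of shape (n_times, nlat, nlon) in m/s.
    Returns 0.5*(u'^2 + v'^2), with anomalies about the time mean
    at each grid point.
    """
    u_anom = u - np.nanmean(u, axis=0)
    v_anom = v - np.nanmean(v, axis=0)
    return 0.5 * (u_anom**2 + v_anom**2)

def box_average(field, lats, lons, lat_bounds, lon_bounds):
    """Unweighted average of a (time, lat, lon) field over a lat/lon box."""
    lat_mask = (lats >= lat_bounds[0]) & (lats <= lat_bounds[1])
    lon_mask = (lons >= lon_bounds[0]) & (lons <= lon_bounds[1])
    sub = field[:, lat_mask][:, :, lon_mask]
    return np.nanmean(sub, axis=(1, 2))

# Example with random surface currents on a small grid:
rng = np.random.default_rng(1)
u = 0.2 + 0.1 * rng.standard_normal((10, 6, 8))
v = 0.1 * rng.standard_normal((10, 6, 8))
lats = np.linspace(28.0, 44.0, 6)
lons = np.linspace(-82.0, -58.0, 8)
eke = eddy_kinetic_energy(u, v)
print(box_average(eke, lats, lons, lat_bounds=(30, 42), lon_bounds=(-80, -60)))
```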
23.4.4 User-Oriented Validation

As mentioned earlier, most of the validation methodology proposed for ocean models and OFS is based on the "oceanographer's point of view", that is, evaluation of the large-scale circulation and of smaller-scale features in a general sense. Even though accuracy numbers and error bars can be produced by this approach, they might not fully satisfy some users. For instance, a merchant ship captain may not be satisfied with a daily averaged map of sea-ice concentration; instead, he might prefer a map of the ice edge and ice extent, with the probability of ice drift for the next day. Many other examples could be mentioned, particularly concerning coupled physical/biogeochemical parameters that affect ecosystem behaviour, or coastal applications (e.g., De Mey et al. 2009).

Oil spill prediction is one of the applications that has been studied in particular. Major disasters pushed authorities to develop oil spill models, which were at first driven only by wind and wave effects; with the availability of ocean current forecasts, new oil spill models have been developed. In the framework of MERSEA, simulated experiments at sea, together with oil spill modelling, have been carried out. Intercomparison has been a key point: oil spill predictions were performed using the currents from different OFS, which allowed the robustness of the predictions to be checked, and ensemble forecast analyses were performed (see Hackett et al. 2009 for a review). Similar studies have been carried out for search and rescue drift-prediction models.
23.5 Conclusion

With the easier worldwide exchange of data and numerical experiment results, the growing ocean modelling and OFS community is increasingly engaged in mutual validation and collaborative work. Moreover, model and forecast evaluation now benefits from a shared methodology endorsed by the GODAE community, a first step towards the assessment approach implemented in the Numerical Weather Prediction community. More operational validation tools are planned in the framework of the European MyOcean projects, and intercomparison activities will be carried on in GODAE OceanView. Note also that validation relies on the limited amount of existing ocean data for accuracy assessment; hence, more groups apply similar techniques and tend to work together, and intercomparison of ocean models and forecasts is thus becoming a standard approach.

However, there remain validation aspects specific to academic studies, to reanalysis evaluation, and to ocean forecast performance assessment, corresponding to each particular framework; for instance, ocean forecasting systems have to deal more particularly with data availability and quality. The validation approach presented here, proposed by the open-ocean community, is slowly being extended to the coastal and biogeochemical modelling communities. Note also that the ocean observing community, which needs to infer the impact of future observing systems, is calling on the OFS for impact studies in which the validation methodology is used to assess the performance of the simulated networks.
References Antonov JI, Locarnini RA, Boyer TP, Mishonov AV, Garcia HE (2006) World ocean atlas 2005. In: Levitus S (ed) Salinity, vol€2. U.S. Government Printing Office, Washington, p€182 Balmaseda MA, Alves O, Arribas A, Awaji T, Behringer DW, Ferry N, Fujii Y, Lee T, Rienecker M, Rosati A, Stammer D (2009) Ocean initialization for seasonal forecasts. Oceanogr Mag 22:154–159 Barnier B, Reynaud T, Beckmann A, Böning CW, Molines J-M, Barnard S, Jia Y (2001) On the seasonal variability and eddies in the North Brazil current: insights from model intercomparison experiments. Progr Oceanogr 48:195–230 Bell MJ, Lefebvre M, Le Traon P-Y, Smith N, Wilmer-Becker K (2009) GODAE, the global ocean data experiment. Oceanogr Mag 22:14–21 Bloom SC, Takacs LL, da Silva AM, Ledvina D (1996) Data assimilation using incremental analysis updates. Mon Weather Rev 124:1256–1271 Blower JD, Blanc F, Cornillon P, Hankin SC, Loubrieu T (2008) Underpinning technologies for oceanography data sharing, visualization and analysis: review and future outlook. Final GODAE Symposium 2008: the revolution in global ocean forecasting GODAE: 10 years of achievement. Nice, France, GODAE, pp€301–310 Böning CW, Bryan FO (1996) Large-scale transport processes in high-resolution circulation models. In: Krauss W (ed) The warmwatersphere of the North Atlantic Ocean. Gebrüder Borntraeger, Berlin, pp€91–128
Böning CW, Dieterich C, Barnier B, Yanli J (2001) Seasonal cycle of meridional heat transport in the subtropical North Atlantic: a model intercomparison in relation to observations near 25°N. Progr Oceanogr 48:231–253 Brasseur P (2006) Ocean data assimilation using sequential methods based on Kalman filter. In: Chassignet EP, Verron J (eds) GODAE Summer school in ocean weather forecasting: an integrated view of oceanography. Springer, Dordrecht, pp€371–316 Clark C, Wilson S, Benveniste J, Bonekamp H, Drinkwater MR, Fellous J-L, Gohil BS, Lindstrom E, Mingsen L, Nakagawa K, Parisot F, Roemmich D, Johnson M, Meldrum D, Ball G, Merrifield M, McPhaden MJ, Freeland HJ, Goni GJ, Weller P, Send U, Hood M (2009) An overview of observing system relevant to GODAE. Oceanogr Mag 22:22–33 Crosnier L, Le Provost C (2007) Inter-comparing five forecast operational systems in the North Atlantic and Mediterranean basins: the MERSEA-strand1 methodology. J Mar Syst 65:354–375 Crosnier L, Le Provost C, MERSEA Strand1 team (2006) Internal metrics definition for operational forecast systems inter-comparison: examples in the North Atlantic and Mediterranean Sea. In: Chassignet EP, Verron J (eds) GODAE summer school in ocean weather forecasting: an integrated view of oceanography. Springer, Dordrecht, pp€455–465 Cunningham SA, Kanzow T, Rayner D, Baringer MO, Johns WE, Marotzke J, Longworth HR, Grant EM, Hirschi J, Beal LM, Meinen CS, Bryden HL (2007) Temporal variability of the Atlantic meridional overturning circulation at 26.5°N. Science 317:935–938 Cummings JA, Bertino L, Brasseur P, Fukumori I, Kamachi M, Martin MJ, Mogensen KS, Oke PR,Testut C-E, Verron J, Weaver A (2009) Description of assimilation methods used in GODAE systems. Oceanogr Mag 22:96–109 De Mey P, Craig P, Davidson F, Edwards CA, Ishikawa Y, Kindle JC, Proctor R, Thompson KR, Zhu J, GODAE Coastal and Shelf Seas Working Group (2009) Application in coastal modelling and forecasting. Oceanogr Mag 22:198–205 Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanogr Mag 22:80–95 Donlon CJ, Casey KS, Robinson IS, Gentemann CL, Reynolds RW, Barton I, Arino O, Stark JD, Rayner NA, Le Borgne P, Poulter D, Vazquez-Cuervo J, Beggs H, Jones LD, Minnett P (2009) The GODAE high resolution sea surface temperature pilot project (GHRSST). Oceanogr Mag 22:34–45 Ganachaud A, Wunsch C (2000) Improved estimates of global ocean circulation, heat transport and mixing from hydrographic data. Nature 408:453–457 Gates WL (1992) AMIP: the atmospheric model intercomparison project. Bull Am Meteorol Soc 73:1962–1970 Griffies SM, Adcroft A, Banks H, Böning CW, Chassignet EP, Danabasoglu G, Danilov S, Deleersnijder E, Drange H, England M, Fox-Kemper B, Gerdes R, Gnanadesikan A, Greatbatch RJ, Hallberg RW, Hanert E, Harrison MJ, Legg SA, Little CM, Madec G, Marsland S, Nikurashin M, Pirani A, Simmons HL, Schröter J, Samuels BL, Treguier A-M, Toggweiler JR, Tsujino H, Vallis GK, and White L (2009) Problems and prospects in large-scale ocean circulation models. In: Fischer AS (ed) OceanOb’s 2009 Hackett B, Comerma E, Daniel P, Ichikawa H (2009) Marine oil pollution predication. 
Oceanogr Mag 22:168–175 Heimbach P, Forget G, Ponte RM, Wunsch C, Balmaseda MA, Awaji T, Baehr J, Behringer D, Carton JA, Ferry N, Fischer AS, Fukumori I, Giese BS, Haines K, Harrison E, Hernandez F, Kamachi M, Keppenne C, Köhl A, Lee T, Menemenlis D, Oke PR, Remy E, Rienecker M, Rosati A, Smith DE, Speer KG, Stammer D, Weaver A (2009) Observational requirements for global-scale ocean climate analysis: lessons from ocean state estimation. In: Fischer AS (ed) OceanOb’s 2009 Hernandez F, Bertino L, Brassington GB, Cummings JA, Crosnier L, Davidson F, Hacker P, Kamachi M, Lisæter KA, Mahdon R, Martin MJ, Ratsimandresy A (2008) Validation and intercomparison of analysis and forecast products. Final GODAE Symposium 2008: the revolu-
tion in global ocean forecasting GODAE: 10 years of achievement. Nice, France, GODAE, pp€147–191 Hernandez F, Bertino L, Brassington GB, Chassignet EP, Cummings JA, Davidson F, Drévillon M, Garric G, Kamachi M, Lellouche J-M, Mahdon R, Martin MJ, Ratsimandresy A, Regnier C (2009) Validation and intercomparison studies within GODAE. Oceanogr Mag 22:128–143 Hurlburt HE, Brassington GB, Drillet Y, Kamachi M, Benkiran M, R. Bourdallé-Badie, Chassignet EP, Jacobs GA, Le Galloudec O, Lellouche J-M, Metzger EJ, Oke PR, Pugh TF, Schiller A, Smedstad OM, Tranchant B, Tsujino H, Usuii N, Wallcraft AJ (2009) High resolution global and basin-scale ocean analysis and forecasts. Oceanogr Mag 22:110–127 Larnicol G, Guinehut S, Rio M-H, Drévillon M, Faugère Y, Nicolas G (2006) The global observed ocean products of the french mercator project. International Symposium on Radar Altimetry: 15 years of altimetry, ESA/CNES Lee T, Awaji T, Balmaseda MA, Greiner E, Stammer D (2009a) Ocean state estimation for climate research. Oceanogr Mag 22:160–167 Lee T, Stammer D, Awaji T, Balmaseda MA, Behringer D, Carton JA, Ferry N, Fischer AS, Fukumori I, Giese BS, Haines K, Harrison E, Heimbach P, Kamachi M, Keppenne C, Köhl A, Masina S, Menemenlis D, Ponte RM, Remy E, Rienecker M, Rosati A, Schröter J, Smith DE, Weaver A, Wunsch C, Xue Y (2009b) Ocean state estimate from climate research. In: Fischer AS (ed) OceanOb’s 2009 Locarnini RA, Mishonov AV, Antonov JI, Boyer TP, Garcia HE (2006) World ocean atlas 2005, In: Levitus S (ed) Temperature, vol€1. U.S. Government Printing Office, Washington, p€182 Martin M (2011) Ocean Forecasting systems: product evaluation and skill. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, Dordrecht, pp 611–632 Meincke J, Le Provost C, Willebrand J (2001) DYNAMO. Progr Oceanogr 48:121–122 New AL, Barnard S, Herrmann P, Molines J-M (2001a) On the origin and pathway of the saline inflow to the Nordic Seas: insights from models. Progr Oceanogr 48:255–287 New AL, Jia Y, Coulibaly M, Dengg J (2001b) On the role of the Azores current in the ventilation of the North Atlantic Ocean. Progr Oceanogr 48:163–194 Roemmich D, Argo-Science-Team (2009) Argo: the challenge of continuing 10 years of progress. Oceanogr Mag 22:46–55 Stammer D, Böning CW, Dieterich C (2001) The role of variable wind forcing in generating eddy energy in the North Atlantic. Progr Oceanogr 48:289–311 Stammer D, Köhl A, Awaji T, Balmaseda MA, Behringer D, Carton JA, Ferry N, Fischer AS, Fukumori I, Giese BS, Haines K, Harrison E, Heimbach P, Kamachi M, Keppenne C, Lee T, Masina S, Menemenlis D, Ponte RM, Remy E, Rienecker M, Rosati A, Schröter J, Smith DE, Weaver A, Wunsch C, Xue Y (2009) Ocean information provided through ensemble ocean syntheses. In: Fischer AS (ed) Oceanob’s 2009 Willebrand J, Barnier B, Böning CW, Dieterich C, Killworth PD, Le Provost C, Jia Y, Molines J-M, New AL (2001) Circulation characteristics in three eddy-permitting models of the North Atlantic. Progr Oceanogr 48:123–161
Part VIII
Applications, Policies and Legal Frameworks
Chapter 24
Defence Applications of Operational Oceanography: An Australian Perspective

Robert Woodham
Abstract Oceanographic conditions can affect naval operations in a variety of ways, and for this reason navies around the world have traditionally used oceanographic observations, and climatologies derived from them, for operational decision making. Rapid advances in global ocean observing systems since the 1990s, and more recently in operational ocean forecasting systems, offer substantial opportunities for improved decision making. The recent focus of many defence forces on information superiority has coincided with the availability of high resolution forecasts of oceanic physical properties. These oceanic data sets are being used to assess and forecast such properties as: sea surface height, temperature and salinity, for acoustic applications to undersea warfare; and oceanic currents and tidal streams, for Search and Rescue (SAR), mine warfare and amphibious applications. The Royal Australian Navy (RAN) is using ocean forecasts from the BLUElink global ocean modelling system, and a limited area ocean model, and is developing a very high resolution model for applications in the littoral zone, as well as integrating high resolution oceanographic data into sonar range prediction models. These military applications of operational oceanography are reviewed, and illustrated with examples from an Australian perspective.
R. Woodham
Directorate of Oceanography and Meteorology, Royal Australian Navy, Sydney, Australia
e-mail: [email protected]

A. Schiller, G. B. Brassington (eds.), Operational Oceanography in the 21st Century, DOI 10.1007/978-94-007-0332-2_24, © Springer Science+Business Media B.V. 2011

24.1 Introduction

It is perhaps self-evident that navies around the world are interested in ocean conditions. What may not be so obvious, however, is the variety of ways in which the ocean can affect naval operations. This chapter aims to describe oceanographic impacts on maritime operations, and how these can be assessed and forecast using operational oceanographic capabilities which have become available in recent years. Whilst its content is generally applicable to naval forces, it is presented from the viewpoint of the Royal Australian Navy (RAN), which has been closely involved
in the establishment of oceanographic observation and forecasting in Australia, as a partner in the 'BLUElink' project (Brassington et al. 2007). Jacobs et al. have recently published a more general overview of how operational oceanography is being used by navies throughout the world, which includes examples from the United States, the United Kingdom, France and Australia (Jacobs et al. 2009). Harding and Rigney have previously published an overview of operational oceanography in the United States Navy (Harding and Rigney 2006).

In general terms, military forces around the world have given increasing attention, in recent times, to the importance of basing decision making on the most comprehensive and up to date information available. This is partly due to the increasing capabilities offered by information and communications technologies (ICT), and partly due to a change in emphasis to manoeuvre warfare, rather than positional (or attritional) warfare. The focus on manoeuvre warfare has its roots in the latter part of the Cold War, when NATO realised that it must use force-multipliers if it was to overcome the numerical superiority of Soviet forces. These force multipliers included information superiority and the manoeuvrist approach. The related concept of 'Network Centric Warfare' (NCW), as distinct from a platform centric approach, envisages the rapid collection and dissemination of actionable information, using the latest technologies, to achieve information superiority throughout the battlespace. Environmental information, including Meteorological and Oceanographic (METOC) information, is regarded by modern navies as a vital component of information superiority and NCW, allowing naval forces to optimise their weapons, sensors and manoeuvre for the prevailing and forecast environmental conditions. For these reasons, the more technologically advanced world navies have been quick to take advantage of the recent rapid developments in operational oceanography, which have been described elsewhere in this volume. Improved oceanic observations, data management and forecast systems have all been applied to naval operations, in order to contribute to the goal of information superiority.

This approach is particularly applicable in Australia, because oceanographic conditions in the region are so complex (see Fig. 24.1 for geographic locations referred to in this paragraph). The East Australian Current affects the Tasman Sea, spawning numerous warm- and cold-core eddies (Ridgway and Dunn 2003). The Leeuwin Current flows down the west coast and across the Great Australian Bight. The Pacific-Indonesian Throughflow affects the Timor and Arafura Seas and the Northwest Shelf. The Antarctic Circumpolar Current affects waters to the south of the Australian continent. Other oceanographic phenomena in the region include upwelling events (particularly along the Queensland coast, and the Bonney coast of South Australia), internal waves, solitons, extreme tidal ranges and abundant freshwater inflows, providing strong buoyancy forcing during the Northwest monsoon. Faced with the need to operate successfully in such complex waters, the RAN has been quick to appreciate the need to maintain a state-of-the-art oceanographic capability. It is working closely with partners, notably the Australian Bureau of Meteorology (BoM) and the Commonwealth Scientific and Industrial Research Organisation (CSIRO), to develop such a capability.
Fig. 24.1 Geographic locations in the Australian region referred to in the text
24.2 Impacts of the Ocean on Operations

24.2.1 Anti-Submarine Warfare (ASW)

The need for specialist oceanographic expertise first came to be recognised by the RAN in the mid 1950s, when the Fairey Gannet Anti-Submarine Warfare (ASW) aircraft was first operated from the aircraft carrier HMAS MELBOURNE. Meteorological officers onboard MELBOURNE provided tactical oceanographic advice to the Gannet squadrons, using bathythermographic observations of the ocean as the basis for sonar performance predictions. This advice was used by the Gannet crews to determine the optimum deployment of buoys fitted with hydrophones ('sonobuoys'), which they used in the acoustic detection and tracking of submarines.
In making these acoustic assessments, the effects of the ocean's thermohaline structure on the propagation of sound in water must be considered. The effects of temperature, salinity and pressure on sound speed are as follows:

• Temperature: sound speed is higher in warmer water (4 m s−1 per 1°C)
• Salinity: sound speed is higher in more saline water (1.4 m s−1 per 1 PSU)
• Depth: sound speed is higher at greater pressure (1.7 m s−1 per 100 m)

Acoustic propagation in water can be understood by imagining that sound propagates through a homogeneous medium in straight lines (the 'raytrace' approach) (Urick 1983). The refraction of sound rays is described by Snell's Law, which states that, when a ray crosses a boundary between two media in which its speed of propagation (v) is different:

sin θ_i / sin θ_r = v_1 / v_2,

where θ_i and θ_r are the angles of incidence and refraction. This means that sound in the sea is refracted towards areas of lower sound speed. The degree of refraction is also frequency dependent, being greater for higher frequencies. Snell's Law can be applied qualitatively, to understand the acoustic properties of the water column and hence determine optimum tactics, such as search or evasion plans. It can also be applied quantitatively, in sonar range prediction models such as the RAN's 'Tactical Environmental Support System version 2' (TESS 2). These models estimate detection ranges based on ocean acoustics, the performance characteristics of the sonar systems (such as operating frequencies, transmitted power, pulse length, processing losses and gains, etc.) and a knowledge of target characteristics (such as target strength, depth, aspect, etc.). Ray-tracing models are generally found to give good results at medium and high frequencies (above 1–2 kHz). These frequencies are typically used by active sonars, which transmit a pulse of acoustic energy and detect its echo (as distinct from passive sonars, which detect radiated noise from a target). Active sonars are fitted in ships and submarines, and can be deployed from aircraft as sonobuoys or, in the case of helicopters, on winches ('dipping' sonars). A minimal numerical illustration of Snell's Law is given below.

Consider a typical thermal profile of the ocean, such as the one taken from the central Tasman Sea shown in Fig. 24.2. This profile is from the 'Ship Of Opportunity Programme' (SOOP) dataset, and has been extracted from the Integrated Marine Observing System (IMOS) Ocean Portal. The top 20–30 m shows an isothermal profile in the mixed layer. Here, temperature and salinity are constant, but pressure increases with depth, giving rise to a slight increase in sound speed. This has the effect of refracting sound waves upwards towards the surface. If the sound frequency is sufficiently high, in comparison to the depth of the mixed layer, acoustic rays travelling through the water at small angles to the horizontal will be refracted upwards towards the surface, where they will be reflected. After reflection, the rays will again be refracted upwards towards the surface. This has the effect of trapping acoustic energy in the surface 'duct', which can give rise to low acoustic losses and therefore long ranges. Because higher frequencies are refracted more, there is a 'cut-off' frequency, below which acoustic energy will not be trapped in the duct.
[Figure 24.2 plot: temperature profile for XBT 88605242, launched at lat/lon −36.8165/156.9538 on Line PX34 Sydney–Wellington, 27 Jan 2009 19:08:00; axes: depth in metres (0 to −1,100) against temperature in degrees Celsius (−3 to 35)]
Fig. 24.2 Typical thermal profile of the ocean, taken from the central Tasman Sea. (Data are from the Ship of Opportunity Programme (SOOP), and were obtained from the IMOS Ocean Portal)
If the surface wind is light, surface losses due to scattering on reflection will be low, and very long ranges are possible. In order to take advantage of this ducting effect, hull-mounted active sonars in ASW frigates are generally designed to operate at frequencies which are high enough to be trapped by the surface duct, in order to maximise detection ranges against shallow submarines.

Below the mixed layer, the water gets colder in the thermocline zone. Between the base of the mixed layer at around 30 m, and the base of the thermocline at around 100 m, the temperature has fallen by about 7°C (Fig. 24.2). This means that the sound speed will have increased by around 1.7 m s−1, due to the increasing pressure, but fallen by around 28 m s−1 due to the decreasing temperature. Overall there is a large decrease in sound speed, which means that acoustic energy will be refracted downwards. Between 100 m and 800 m, there is a drop in temperature of around 1°C per hundred metres (Fig. 24.2), which means that the sound speed will decrease by about 2.3 m s−1 per hundred metres. In the water column below the mixed layer, these various effects result in a downward-refracting profile, stronger in the main thermocline region, which means that acoustic energy will be refracted down towards the sea bed. If the sea bed is a good absorber of acoustic energy at the relevant frequency, acoustic propagation will be generally poor.
Fig. 24.3 Sound speed profile through the Tasman Sea at 155° East on 31 March 2008. (Data are from the Ocean Forecast Australia Model (OFAM). An anticyclonic eddy is evident at around 32°S)
Although the profile shown in Fig. 24.2 extends to around 850 m, the depth of water in this location is around 5,000 m. Below 850 m, there will come a point where the decrease in sound speed caused by the temperature lapse is negated by the increase in sound speed due to the pressure increase. In isothermal water, clearly the sound speed will increase with depth, and acoustic energy will start to be refracted up towards the surface. The consequences of an increasing sound speed at depth can be seen in Fig. 24.3, which shows a sound speed cross section through the Tasman Sea at longitude 155°E, from 30°S to 40°S. Temperature and salinity data have been obtained from the Ocean Forecast Australia Model (OFAM) (Brassington et al. 2007) for 31 March 2008, and converted to sound speed using Mackenzie's equation (Mackenzie 1981).

There is a sound speed minimum at a depth of around 1,200 m, whilst below this depth, sound speed increases due to the small temperature lapse combined with the increasing pressure. At depth, the value of sound speed increases to be similar to that at the surface. The sound speed minimum at 1,200 m is associated with an acoustic channel, which is a very low-loss path. Above 1,200 m, sound tends to be refracted downwards, towards the channel axis. Below 1,200 m, sound tends to be refracted upwards, again towards the channel axis. A depth of 1,200 m would therefore be a good depth at which to position a hydrophone, in order to take advantage of this low-loss path in the acoustic detection of submarines.
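To illustrate the conversion step described above, the sketch below evaluates a sound speed profile from temperature, salinity and depth using the nine-term Mackenzie (1981) formula, and then locates the sound speed minimum (the channel axis). The coefficients are those commonly quoted for Mackenzie's equation, and the synthetic profile is an assumption made for the example; it is not OFAM output.

```python
import numpy as np

def mackenzie_sound_speed(T, S, D):
    """Sound speed (m/s) from Mackenzie (1981).

    T : temperature (deg C), S : salinity (PSU), D : depth (m).
    Nine-term empirical formula, valid roughly for T 2-30 deg C,
    S 25-40 PSU and depths to about 8,000 m.
    """
    T, S, D = np.asarray(T, float), np.asarray(S, float), np.asarray(D, float)
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# Synthetic Tasman-Sea-like profile: warm mixed layer over a cold deep ocean
depth = np.arange(0.0, 4000.0, 50.0)
temp = 5.0 + 14.0 * np.exp(-depth / 700.0)   # deg C
sal = np.full_like(depth, 35.2)              # PSU
c = mackenzie_sound_speed(temp, sal, depth)

axis_depth = depth[np.argmin(c)]             # depth of the sound channel axis
print(f"Sound channel axis near {axis_depth:.0f} m, c_min = {c.min():.1f} m/s")
```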
Where the sound speed at depth exceeds the value at the surface, a 'convergence zone' may be experienced. This is a ring around the acoustic source, typically with a radius of around 25 miles, where sound is focussed by a caustic effect. This focussing of acoustic energy near the surface provides opportunities for greatly increased ranges, and it is even possible for multiple convergence zones to be present, giving even longer range detections.

In addition to the vertical gradients of sound speed discussed so far, horizontal gradients of sound speed are caused by temperature and salinity gradients associated with fronts and eddies, and these can also have a large effect on the acoustic properties of the ocean. For example, an anticyclonic (warm core) eddy will have warmer water towards its centre, therefore there will be a lateral gradient of sound speed associated with the eddy. An anticyclonic eddy can be seen in Fig. 24.3, at around 32°S, with an associated sound speed maximum at a depth of around 200 m. If a surface ship outside the eddy is searching for a submarine in the centre of the eddy, using active sonar, the sound will be refracted away from the submarine, reducing the probability of detection. Similarly, if the ship and submarine are on the opposite sides of an oceanic front, detection ranges will be much reduced.

Acoustic effects such as the ones described in this section have been well known, and have been the principal concern of naval oceanographers, for a long time. Recent advances in operational oceanography are starting to provide the highly detailed oceanic data required to enable acoustic assessments and forecasts to be made at greatly increased spatial and temporal resolutions, suitable for tactical applications. For example, a submarine wishing to evade acoustic detection can use such oceanographic data to identify a location in the thermocline beneath the mixed layer, where the water is not too deep and the bottom is a good absorber of low frequency noise. This will ensure that its radiated noise is directed down to the sea bed, where it is absorbed, hence minimising counter-detection ranges. ASW aircraft can use high-resolution oceanographic data to identify near-surface sound channels, deploying the hydrophones on their sonobuoys or dipping sonars in the channel in order to achieve the greatest possible detection ranges. Knowledge of the location of fronts and eddies enables ASW frigates to design the most effective search plans, armed with an accurate assessment of detection ranges. These are just a few examples of how the wealth of oceanographic data now available presents abundant opportunities for the ingenuity of naval oceanographers and tacticians to be stimulated.

As well as temperature and salinity, ocean currents also have an effect on ASW, and should be considered by naval forces. Submarines can take advantage of currents to increase their speed over the ground, whilst keeping their engines at low power (and therefore operating quietly). In some cases, particularly in the Australian region, ocean currents can run at 3 or 4 knots (Roughan and Middleton 2002), so this effect can be significant. Ocean currents can also be taken into account in sonar range prediction systems, such as the RAN's TESS 2, since they affect sound speed.
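As a simple numerical aside to the point about currents and speed over the ground, the sketch below adds a vessel's velocity through the water to an ocean current vector. The headings and speeds are illustrative assumptions; the 3-4 knot current magnitude is the only figure taken from the text.

```python
import math

def speed_over_ground(stw_kts, heading_deg, current_kts, current_dir_deg):
    """Combine speed through the water with an ocean current (both in knots).

    Headings and current set are directions of travel, in degrees clockwise
    from north. Returns (speed over ground, course over ground).
    """
    def to_xy(speed, direction):
        rad = math.radians(direction)
        return speed * math.sin(rad), speed * math.cos(rad)  # (east, north)

    ue, un = to_xy(stw_kts, heading_deg)
    ce, cn = to_xy(current_kts, current_dir_deg)
    ge, gn = ue + ce, un + cn
    sog = math.hypot(ge, gn)
    cog = math.degrees(math.atan2(ge, gn)) % 360.0
    return sog, cog

# A quiet 5-knot transit heading south, with a 3.5-knot southward-setting current:
print(speed_over_ground(5.0, 180.0, 3.5, 180.0))   # about 8.5 knots over the ground
```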
24.2.2 Amphibious Warfare

Amphibious operations can be very sensitive to weather and oceanographic conditions. The offloading of troops and equipment from specialist amphibious shipping to a beachhead involves transfers from ships to landing craft, and from landing craft to the beach itself. Most navies possess a range of relatively small watercraft for use during amphibious operations. Such activities are sensitive to sea state, swell and surf conditions, tidal streams, longshore currents and rips, which must all be assessed and forecast in order to ensure mission success. Many navies use sea, swell and surf models to predict oceanic conditions in the littoral environment, and hence assess their impact on amphibious operations. Figure 24.4 shows the output from an experimental implementation of the 'Simulating Waves Nearshore' (SWAN) wave model and the US Navy's 'Surf' model, which displays model output using a Geographic Information System (GIS). The model has been run over North Beach, Cronulla, which is on the east coast of New South Wales to the south of Sydney (Fig. 24.1). Figure 24.4 shows: significant wave height (grey contours) and direction (vectors); significant wave period (blue rasters); littoral currents (closely spaced arrows along the approach to the beach); wave trains (displayed in grey as representative wave crests); and breaker percentage (displayed as green for <1%, amber for 1–15% and red (surf zone) for >15%). This information can be used in the planning phase of an amphibious assault, to compare the suitability of
Fig. 24.4 Sea and surf conditions forecast for North Beach, Cronulla. See text (Sect. 24.2.2) for an explanation of the symbology
various beaches for the operation, or to predict conditions at the beach at the time of the assault. Depending on the nature of the assault, a suitable beach may be required to have negligible surf and manageable longshore currents, although a single line of low, spilling surf may be tolerated. The location of the beach centre and approach lanes can also be chosen, using model output of this type, to avoid rips. A knowledge of the location and strength of longshore currents in the boat lanes, at the time of the assault, will help the landing craft crews to make a successful approach and beaching. The RAN is developing a high resolution forecasting system, called the 'Littoral Ocean Modelling System' (LOMS), which will provide sea, swell and surf predictions at greater resolution and fidelity, and over larger domains, than the SWAN/Surf implementation described above. It will provide a three-dimensional characterisation of the wave conditions, at resolutions in the order of tens of metres.
24.2.3 Mine Warfare

Mine Warfare operations include mine hunting (using specialist sonars and Remotely Operated Vehicles (ROVs)), mine sweeping, and mine clearance diving. These operations are generally conducted in littoral environments, which can be challenging due to the complexity of ocean conditions. Tidal streams are often strong, turbidity can affect visibility, and variations in the bottom type and thermohaline structure can make acoustic detection difficult. In order to achieve good detection and resolution of small objects, mine hunting sonars typically operate at relatively high frequencies (hundreds of kHz). This means that typical detection ranges are quite low, and so ocean models with horizontal resolutions in the order of tens of kilometres are unable to provide adequate resolution of the oceanic structure for these applications. The RAN uses a limited area oceanic model, called the 'Relocatable Ocean Atmosphere Model' (ROAM), which is described in Sect. 24.4.2 below, to generate forecasts at resolutions down to 1 or 2 km, and the LOMS model will provide even higher resolution in the near future. A mine warfare variant of the TESS 2 sonar range prediction software, called TESS 2M, provides acoustic assessments at the scales required by mine warfare applications. The main demands on naval oceanographers supporting mine warfare operations are often to assess and predict wind waves, swells and currents. Currents, both at the surface and at depth, depend on the tidal regime, wind driven flow and the influence of the current structure in the adjacent deep ocean basin, all of which can be modelled by systems such as ROAM. ROVs and divers may be limited by the surface conditions and the strength of these currents. Forecasts are used to identify windows of opportunity, when wind waves, swells and current strengths are low enough that such activities will not be unduly hampered. Conditions of high turbidity can also hamper diving operations by reducing visibility. The thermohaline structure of the littoral water mass is of interest to mine hunting operations, since it affects the performance of high-frequency mine hunting sonars. It can have a substantial impact, particularly in the case of a salt wedge estuary or where there is strong tidal
modulation. River outlets can affect the thermohaline structure on short timescales, for example when thunderstorm activity or heavy rain causes a sudden increase in outflow. Naval oceanographers must also be mindful of weather patterns, which can affect ocean conditions, and must predict events such as a sudden increase in sea state due to a frontal passage.
24.2.4 Submarine Operations

Submarine operations require a knowledge of the locations of fronts and eddies (see Sect. 24.2.1 above), and of the general thermohaline structure of the ocean, in order to identify the best tactics for detection, attacks and evasion. The strength and direction of ocean currents is also required, for the purposes of manoeuvre. A knowledge of conditions at the surface, such as wind waves and swells, enables the risk of counter-detection to be assessed, informing decisions on whether it is safe to raise a periscope or communications mast, or to recharge batteries by 'snorting'. A knowledge of surface wind waves and precipitation is also required in order to assess ambient noise from these sources, as this affects the performance of acoustic sensors.
24.2.5 Search and Rescue (SAR)

Analyses and forecasts of ocean currents are invaluable in the assessment of drift during Search and Rescue (SAR) operations, by informing the design of effective search plans. Perhaps the most complex aspect of such calculations is the drift of the object being searched for under the influence of the wind (or 'leeway') (Hackett et al. 2006). Objects with different shapes, such as persons wearing lifejackets, survival rafts and lifeboats, experience different leeway effects. Even without the assistance of algorithms which account for leeway effects, a good approximation can often be obtained from ocean models which include currents, tidal streams and Ekman flow. In addition, knowledge of sea surface temperatures (SST) allows survival times to be estimated. Ocean modelling systems can even be used to investigate historical problems of this type, such as the search for the location of HMAS SYDNEY II, which was greatly assisted by oceanic drift calculations using BLUElink reanalysis data (Mearns 2009; Griffin 2009). The SYDNEY wreck site was located off Western Australia in April 2008, 66 years after the ship sank with the tragic loss of her entire ship's company.
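A simple way to picture a drift estimate of this kind is to step a particle forward under the combined surface current and a wind-driven leeway term. The sketch below assumes a constant leeway factor of about 3% of the 10 m wind speed, which is a common rule of thumb rather than a value taken from any of the systems described here, and uses purely illustrative forcing.

```python
def drift_track(x0, y0, currents, winds, dt_hours, leeway_factor=0.03):
    """Very simple drift estimate: the position is advected by the surface current
    plus a leeway term proportional to the 10 m wind. Velocities are in m/s,
    positions in metres on a local plane; dt_hours is the time step in hours."""
    x, y = x0, y0
    track = [(x, y)]
    dt = dt_hours * 3600.0
    for (uc, vc), (uw, vw) in zip(currents, winds):
        x += (uc + leeway_factor * uw) * dt
        y += (vc + leeway_factor * vw) * dt
        track.append((x, y))
    return track

# Illustrative forcing: a 0.5 m/s eastward-setting current and a 10 m/s southerly wind for 24 h.
currents = [(0.5, 0.0)] * 24
winds = [(0.0, 10.0)] * 24
print(drift_track(0.0, 0.0, currents, winds, dt_hours=1.0)[-1])  # ~43 km east, ~26 km north
```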
24.2.6 Maritime Interdiction Operations

The bulk of the chapter so far has concentrated on high-end warfighting applications of operational oceanography, such as prosecuting submarines, clearing minefields
and conducting amphibious assaults. Oceanographic products are also used, however, to provide routine support to lower tempo operations. Examples include maritime interdiction, patrol tasks and constabulary activities, which may be constrained by high sea states or heavy swells. A current example is the use of real-time satellite observations, and forecasts, of significant wave height to identify the risk of pirate attacks off the coast of Somalia. The correlation between pirate attacks and satellite observations of significant wave height has been established using historical data. A 'stoplight' diagram, based on these correlations and using forecast wave heights, is routinely provided to naval forces in the international Combined Task Force 150 (CTF 150), operating off the Somali coast. This force includes an Australian frigate. The stoplight product shows the risk of pirate attack in three categories: 'probable', 'possible' and 'unlikely' (Fig. 24.5).
24.3 Forecast Methods—Their Strengths and Weaknesses

24.3.1 Climatology

Until the recent advent of operational oceanography, navies have had to rely on climatologies or point observations to make operational decisions (Jacobs et al. 2009). Climatologies can be useful for planning purposes, but they are of limited use where oceanic variability is high. In the extreme case of a bimodal system, climatology shows the mean of the two modes, which may be a physical situation that never arises in reality (e.g. south or north of a front, inside or outside an eddy). Figure 24.6 illustrates the limitations of climatology, by showing the September monthly mean SST in the Tasman Sea as depicted by the World Ocean Atlas 2001, and the daily mean SST on 16 September 2009 from the BLUElink forecasting system. Conversely, where variability is low, or where it occurs on timescales longer than the averaging period (normally monthly), climatology can give a very good indication of expected conditions. Furthermore, the expected error of a forecast based on climatology is independent of the lead time of the forecast (see Martin (2010, Fig. 8b)). Forecasts based on deterministic models or persistence perform better, on average, than climatology in the early part of the forecast period. This is illustrated in Fig. 8b of Martin (2010), which shows the median RMS errors of global Sea Surface Height (SSH) forecasts based on climatology, persistence and a deterministic model. Climatology gives better guidance than other forecast methods, such as persistence or deterministic models, at long lead times (Murphy 1992), because the mean square errors of forecasts based on persistence or unbiased deterministic models asymptote to twice the climatological variance at long lead times. This is because, once these forecasts are completely decorrelated from reality, they have errors resulting from having anomalies in the wrong places, as well as errors from not having anomalies in the right places (Kalnay 2003). For this reason, defence forces normally use climatological oceanographic data when conducting long-range planning.
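This asymptote follows from a short variance decomposition; as a sketch, assume an unbiased forecast anomaly f and an observed anomaly o, each with the climatological variance σ²:

```latex
\mathrm{MSE} = E\!\left[(f-o)^2\right]
             = \operatorname{Var}(f) + \operatorname{Var}(o) - 2\,\operatorname{Cov}(f,o)
  \;\longrightarrow\; 2\sigma^2 \quad \text{as } \operatorname{Cov}(f,o) \to 0,
```

whereas climatology itself (a zero-anomaly forecast) has MSE = σ², so a fully decorrelated forecast has an RMS error roughly √2 times that of climatology.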
Fig. 24.5 36 h forecast of risk of pirate attacks off the Somali coast, based on significant wave height, valid 28 August 2009
Fig. 24.6 Comparison of SST as depicted by climatology (upper panel; World Ocean Atlas 2001, September) and an oceanic model (lower panel; BLUElink Reanalysis, 16 September 2009)
24.3.2 Persistence

Point observations, such as temperature profiles from eXpendable BathyThermograph (XBT) systems, have been used by navies for many decades to infer the acoustic properties of the water column. These observations are relatively simple to make, and do not require assistance from ashore. This approach amounts to a persistence forecast, that is, an assumption that the water properties will not change during the period for which the assessment is required. It also assumes that there is no spatial variation in temperature, so only range-independent sonar predictions can be made using this approach. Persistence forecasts can be expected to have lower errors than climatology at the start of the forecast period, but as the oceanic flow evolves from this initial state the errors grow rapidly (Murphy 1992). Spatial
variations, caused by the physical movement of the ship or aircraft making the observation, as well as temporal variations due to ocean dynamics, contribute to these errors. Nonetheless, a persistence forecast may be useful where uniform conditions can reasonably be expected, such as in high latitudes where mixed layers are very deep, or over continental shelves, provided the water is well mixed, and advection is minimal. Persistence is a more valid approach for short lead time forecasts, such as may be required for a Co-ordinated Anti-Submarine EXercise (CASEX) lasting two or three hours, than for longer lead times. Nevertheless, where spatial and temporal variability is great, such as in the waters around Australia, a persistence forecast may be misleading even on very short timescales. An ASW frigate which makes an XBT observation just inside or outside an eddy, for example, may soon experience very different acoustic conditions from those inferred from the XBT observation.
24.3.3 Deterministic Forecasts

Deterministic forecasts of the ocean have only become available in relatively recent times (Bell et al. 2000). Nevertheless, rapid progress has been made over the last decade, including the introduction of eddy-resolving models and the assimilation of new observational data. See Brassington (2010) for a comprehensive review of progress in operational ocean forecasting. In Australia, the BLUElink ocean forecasting system commenced routine forecasting operations in August 2007 (Brassington et al. 2007). Provided sufficient observational data is available, deterministic forecasts should have relatively small errors at the start of the forecast period. These errors will grow more slowly than persistence forecast errors, because the deterministic model is able to keep up with changes in the state of the ocean by modelling its dynamic processes (see Fig. 22.7b, Martin (2010)). Deterministic forecasts from systems such as BLUElink are highly detailed, providing variables such as temperature, salinity, currents and sea surface height at high spatial resolution for forecast periods of several days. They represent a huge advance on the persistence and climatological forecasts used by navies for many decades. In one sense, however, their strength is also their weakness: it is difficult to transmit the high volumes of oceanic data now available from deterministic ocean forecasting systems from shore to ships and submarines, due to the bandwidth limitations of naval communications systems.
24.3.4 Ensemble Forecasts

Ensemble forecasting is well established for Numerical Weather Prediction (NWP), but less so for oceanographic forecasting. Ensemble techniques are used to generate covariance matrices for oceanic data assimilation applications (Oke et al. 2005), and some ocean models have tangent linear and adjoint versions, which can be used
to generate ensembles of initial conditions. Ensemble forecasts of the ocean offer great benefits to military users, since they enable the expected accuracy of forecasts to be quantified at the start of the forecast period. Ensemble techniques can also be used to provide probabilistic forecasts, which assist military commanders by enabling them to understand operational risks. The computational demands of implementing an operational ensemble forecasting system can, however, be substantial.
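As an illustration of how a probabilistic product can be derived from an ensemble, the fraction of members exceeding an operationally relevant threshold gives a simple probability estimate; the ensemble values and threshold below are hypothetical.

```python
def exceedance_probability(ensemble_values, threshold):
    """Fraction of ensemble members forecasting a value above the threshold."""
    hits = sum(1 for v in ensemble_values if v > threshold)
    return hits / len(ensemble_values)

# Hypothetical 10-member ensemble of significant wave height (m) at one location.
members = [1.8, 2.1, 2.4, 2.6, 2.2, 3.1, 2.9, 2.0, 2.7, 3.3]
print(exceedance_probability(members, threshold=2.5))  # 0.5: even odds of exceeding 2.5 m
```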
24.4 Naval Applications of Deterministic Forecasts

The ocean analysis and forecasting capability of the BLUElink system has been described by Brassington (2010). This section will describe how forecasts from the BLUElink system, including ROAM, are used by the RAN for operational decision making.
24.4.1 The BLUElink Global/Regional Model (OceanMAPS)

The BLUElink Ocean Modelling, Analysis and Prediction System (OceanMAPS) is implemented at the Bureau of Meteorology (BoM) in Melbourne (Brassington 2010). It produces an analysis and 6-day forecast of ocean temperature, salinity, currents, sea surface height and mixed layer depth twice per week. Model output graphics are available from the BoM public website, and the model data itself is available to the RAN, and more generally for research purposes, from the BoM's 'Thematic Realtime Environmental Distributed Data Services' (THREDDS) server, in Network Common Data Form (NetCDF) format. The OceanMAPS system is currently configured to give eddy-resolving resolution (10 km horizontally) over the Australian region (90°E–180°E and 16°N–75°S). Within this domain, OceanMAPS data is routinely used by the RAN to create oceanographic charts, which are available to naval personnel for a range of applications, including ASW, amphibious and mine warfare, passage planning and spatial awareness. An example of a 'METOC Oceanographic Forecast Summary' (MOFS) chart is shown in Fig. 24.7. From the MOFS chart shown in Fig. 24.7, it can be seen that the East Australian Exercise Areas (EAXA), shown as blue polygons, are dominated by a large anticyclonic feature at the southern extremity of the East Australian Current. There is a sharp temperature gradient associated with this feature, at around 35°S, where reduced sonar ranges may be expected. For an ASW exercise in this location, the ASW commander might decide to allocate search assets either side of the temperature gradient, in order to achieve an efficient search. The submarine commander may choose to remain in the core of the current associated with this temperature gradient, in order to evade detection. By moving to either side for brief periods, sonar performance can be improved so that the tactical picture can be compiled. The
Fig. 24.7 'METOC Oceanographic Forecast Summary' (MOFS) chart, showing SST and currents in the Tasman Sea for 17 May 2010. MOFS charts are routinely produced twice weekly, showing forecasts out to 6 days, and are used by a variety of naval personnel
currents can also be exploited by the submarine to increase speed over the ground. Air assets deploying lines of sonobuoys can apply knowledge of the current field to ensure that the buoy patterns are not sheared out of shape by the flow. In addition, specialist oceanographers (‘METOC’ officers) may be available to provide further insights into the acoustic properties of the area, using OceanMAPS data, and hence assist decision making.
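As noted above, OceanMAPS fields are distributed in NetCDF form via a THREDDS server. A minimal access sketch is given below; the catalogue URL and the variable and dimension names are placeholders for illustration, not the actual BoM service details, and the xarray library is assumed to be available.

```python
import xarray as xr

# Hypothetical OPeNDAP endpoint on a THREDDS server (placeholder URL, not the real BoM address).
url = "https://example.org/thredds/dodsC/oceanmaps/forecast/latest.nc"

ds = xr.open_dataset(url)               # lazily opens the remote NetCDF dataset
sst = ds["temp"].isel(depth=0)          # assumed variable/dimension names
subset = sst.sel(lat=slice(-45, -30),   # Tasman Sea region (assumes ascending coordinates)
                 lon=slice(145, 165)).isel(time=0)
print(float(subset.mean()))             # area-mean SST as a quick sanity check
```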
24.4.2 Relocatable Ocean Atmosphere Model (ROAM)

The Relocatable Ocean Atmosphere Model (ROAM) is used by the RAN to generate high resolution oceanic and atmospheric forecasts over limited domains of interest to the Australian Defence Force (ADF). ROAM is designed to be set up by non-expert users, with minimal input, anywhere in the Australian region (Herzfeld 2009), and is used routinely by RAN forecasters. The ROAM ocean model is initialised and forced by data from OceanMAPS, and is typically implemented at resolutions of 1–2 km. Figure 24.8 shows Sea Surface Temperature (SST) and currents calculated by ROAM for a domain in the vicinity of Hobart, Tasmania (see Fig. 24.1), which was used for the RAN mine warfare exercise 'DUGONG'. Exercise 'DUGONG' involved the Mine Hunter Coastal (MHC) vessels HUON and DIAMANTINA, which provided mine sweeping and hunting capabilities, the auxiliary minesweeper BANDICOOT, clearance diving teams and US Navy salvage divers. It took place over two weeks in October 2009 in the Derwent River and the approaches to Hobart. In this example, the current characteristics were of primary importance to the exercise, which involved an underwater survey of the historic wreck of MV Lake Illawarra in the Derwent River. The water temperature was also of interest to the diving teams, to ensure that they were suitably prepared for the prevailing conditions. ROAM was used to generate current forecasts at intervals down to one hour. Additionally, the ROAM atmospheric model provided high resolution forecasts of the wind strength and direction, also at one hour timesteps, which allowed changes in the sea state to be anticipated. These also proved to have a significant impact on the exercise. Note that Fig. 24.8 does not show the full resolution of the ROAM model, as it has been expanded to show conditions in Storm Bay.

As well as providing oceanographic data for graphical products, the output from ocean forecasting systems can be used in sonar range prediction systems, in order to produce assessments and forecasts of acoustic conditions which take account of the spatial and temporal variability of the ocean environment. Figure 24.9 shows a series of sonar range predictions, which have been generated by the RAN's Tactical Environmental Support System (TESS 2) using ROAM data at 1 km resolution. The domain is in the vicinity of Jervis Bay, which is around 130 km south of Sydney (Fig. 24.1). It is an area where the RAN frequently conducts ASW and MW exercises. The sonar range predictions are displayed as 'Probability of Detection' plots (PODgrams), where a 90% or greater probability of detection is shown in red.
Fig. 24.8 Sea Surface Temperature (SST) and current forecast produced by the ROAM system for the mine warfare Exercise DUGONG in October 2009
Figure 24.9 shows ROAM sea surface temperature and currents as the background. The three PODgrams are for an ASW frigate leaving Jervis Bay and tracking to the northeast, searching for a submarine at Periscope Depth (PD). Similar calculations may be run at any depth required by the user. The capabilities of the sonar used for the calculation are fictional. The PODgrams seem to make sense intuitively, since they show the greatest ranges inshore, where the water is shallow with a relatively homogeneous thermohaline structure, and bottom losses from the sandy sea bed are low. Offshore, where the temperature gradient is greater, detection ranges are less. The scale of Fig. 24.9 can be gauged by considering that the current vectors are shown at the ROAM resolution of 1 km. The PODgrams have hollow centres because echoes cannot be received whilst the sonar is transmitting. This gives rise to
Fig. 24.9 Sonar performance predictions produced by ROAM and TESS 2 for 1000 UTC on 06 October 2009, in the vicinity of Jervis Bay, NSW. The background shows sea surface temperature (colour stretch and contours) and current vectors. The three 'Probability of Detection' plots (PODgrams) are over-plotted, with a probability of detection of 90% or more shown in red
a ‘dead zone’ of varying radius, depending on the duration of the transmitted pulse, and the speed of sound in water.
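The radius of this dead zone can be estimated from the two-way travel time of the pulse: no echo can be resolved from ranges closer than the distance sound travels during half the pulse length. As a worked sketch (with illustrative numbers):

```latex
r_{\mathrm{dead}} \approx \frac{c\,\tau}{2}
  \qquad \text{e.g. } c = 1500\ \mathrm{m\,s^{-1}},\ \tau = 1\ \mathrm{s}
  \;\Rightarrow\; r_{\mathrm{dead}} \approx 750\ \mathrm{m}.
```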
24.5 Summary

Oceanographic data has been collected by the world's navies for many years, and used to inform the planning and conduct of a range of naval operations. Perhaps the main preoccupation of the naval oceanographer is with the acoustic properties of the ocean, because acoustic detection is of great importance in Anti-Submarine Warfare (ASW) and Mine Warfare (MW). The effects of oceanic temperature and salinity, and of the depth of water, on sound speed are well known. This means that oceanographic conditions can be used to infer acoustic properties, both qualitatively
by naval personnel, and quantitatively in sonar range prediction systems. The relatively recent advent of operational oceanography has made available a wealth of observational and forecast data at high spatial and temporal resolutions. Recent advances span ocean observation systems, data assimilation and deterministic forecasting models. These new datasets are being used by naval oceanographers to provide much improved characterisations of the physical structure of the ocean, in order to inform operational and tactical decision making. The time and space scales which are starting to be resolved allow oceanographic support to be provided in complex littoral environments, where there is a demand from amphibious and mine warfare operations. This new oceanographic capability is timely, given a broader trend towards information superiority in the more technologically advanced defence forces. This chapter has described oceanographic effects on ASW, Amphibious Warfare, Mine Warfare, submarine operations and lower tempo activities such as Search and Rescue (SAR) and maritime interdiction operations. The acoustic properties of the ocean have been outlined in some detail, using examples from the Tasman Sea. The strengths and weaknesses of various forecasting methods (climatology, persistence, deterministic and ensemble forecasts) have been described from the perspective of naval forces. Finally, some examples have been given of the use of deterministic forecasts of the ocean, including ASW activities in the Tasman Sea, a Mine Warfare exercise in the approaches to Hobart, Tasmania, and the use of high resolution oceanographic data to generate range-dependent sonar predictions in the Jervis Bay exercise areas. The maturing international capability for operational oceanography presents a remarkable opportunity for the world's navies and maritime forces, and this has been seized on by the RAN and other leading navies. As the resolvable time and space scales continue to reduce, and progress is made with the downscaling of global systems to coastal scales, the complexities of the littoral environment will continue to be unravelled. It is an exciting time to be a naval oceanographer.

Acknowledgments I would like to thank the International GODAE Summer School organizing committee for their kind invitation to present a lecture on defence applications of operational oceanography at the International GODAE Summer School in Perth, Western Australia, during January 2010. This chapter is based on the lecture. Sincere thanks also to Lieutenant Commander Aaron Young, RAN, for his kind assistance with some of the figures. I am also very grateful to Stephen Ban and Lieutenant Commander Richard Bean, RAN, for helpful suggestions which have improved the text.
References

Bell MJ, Forbes RM, Hines A (2000) Assessment of the FOAM global data assimilation system for real-time operational ocean forecasting. J Mar Syst 25:1–22
Brassington GB (2010) System design for operational ocean forecasting. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, Dordrecht
Brassington GB, Pugh T, Spillman C, Shulz E, Beggs H, Schiller A, Oke PR (2007) BLUElink—development of operational oceanography and servicing in Australia. J Res Pract Inf Technol 39(2):151–164
Griffin D (2009) Locating HMAS Sydney by back-tracking the drift of two life rafts. Bull Aust Meteorol Oceanogr Soc 22(5):138–140
Hackett B, Breivik O, Wettre C (2006) Forecasting the drift of objects and substances in the ocean. In: Chassignet EP, Verron J (eds) Ocean weather forecasting, 1st edn. Springer, Dordrecht
Harding J, Rigney J (2006) Operational oceanography in the US Navy: a GODAE perspective. In: Chassignet EP, Verron J (eds) Ocean weather forecasting, 1st edn. Springer, Dordrecht
Herzfeld M (2009) Improving stability of regional numerical ocean models. Ocean Dyn 59:21–46
Jacobs GA, Woodham RH, Jourdan D, Braithwaite J (2009) GODAE applications useful to navies throughout the world. Oceanography 22(3):182–189
Kalnay E (2003) Atmospheric modeling, data assimilation and predictability. Cambridge University Press, Cambridge
Mackenzie KV (1981) Nine term equation for sound speed in the oceans. J Acoust Soc Am 70(3):807–812
Martin M (2010) Ocean forecasting systems—product evaluation and skill. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, Dordrecht
Mearns DL (2009) The search for the Sydney. HarperCollins, Sydney
Murphy AH (1992) Climatology, persistence and their linear combination as standards of reference in skill scores. Weather Forecast 7:692–698
Oke PR, Schiller A, Griffin DA, Brassington GB (2005) Ensemble data assimilation for an eddy-resolving ocean model of the Australian region. Q J R Meteorol Soc 131(613):3301–3311
Ridgway KR, Dunn JR (2003) Mesoscale structure of the mean East Australian current system and its relationship with topography. Prog Oceanogr 56:189–222
Roughan M, Middleton JH (2002) A comparison of observed upwelling mechanisms off the east coast of Australia. Cont Shelf Res 22(17):2551–2572
Urick RJ (1983) Principles of underwater sound. McGraw-Hill Book Company, USA
Chapter 25
Applications for Metocean Forecast Data—Maritime Transport, Safety and Pollution

Brian King, Ben Brushett, Trevor Gilbert and Charles Lemckert
Abstract This lecture outlines the recent advances in the incorporation of oceanic and atmospheric forecast datasets into specialized trajectory models. These models are used for maritime safety purposes and to aid in combating oil and chemical marine pollution events. In particular, the lecture examines in detail the system assembled by the authors for improving oil spill trajectory models (OSTM) and chemical spill trajectory models (CSTM) as part of the Australian Maritime Safety Authority's (AMSA) role in Australia's national plan to combat pollution of the sea by oil and other noxious and hazardous substances. The main topics of this lecture will include:
• A summary of metocean forecast datasets currently being used operationally in the Australian region;
• The incorporation of tidal current dynamics into ocean forecasting models;
• Three case studies of utilising metocean forecast datasets in maritime trajectory models, being a study of the Australian Maritime Safety Authority's OSTM and CSTM systems (OILMAP, CHEMMAP and the Environmental Data Servers):
  − The Pacific Adventurer oil and chemical spill, offshore Brisbane;
  − The Montara Well Head Platform blowout, Timor Sea;
  − The towing of MSC Lugano off Esperance (WA)
25.1 Introduction

The operational use of metocean (meteorological and oceanic) forecast datasets is necessary for the effective response to search and rescue (SAR) incidents, mitigation of pollutant spills at sea (such as oil or chemicals), and for the response to other maritime hazards (such as towing a stranded vessel to safety). To effectively model the likely drift pattern of a person lost at sea, the movement of a marine pollutant
spill, or a stranded vessel's movements, both wind and ocean current forecast datasets are required. Among the ocean current forecast models in use operationally in the Australian and greater Asia-Pacific region are the US Navy Coastal Ocean Model (NCOM) and the Australian BLUElink model. Both of these models were developed for large to mesoscale ocean circulation, and as such neither model includes the effects of tidal currents. This lack of tidal current forcing limits the effectiveness of the models in shallow near-coastal waters, where tidal currents are important and can be the dominant driving force in water circulation. Asia-Pacific ASA has developed an aggregation tool which is able to incorporate the effects of both coastal tidal currents and large scale oceanic currents, producing an effective current forecast dataset for both open ocean and coastal waters alike. There are several wind forecast models available operationally; the two used in this study were the US Global Forecast System (GFS) and the US Navy Operational Global Atmospheric Prediction System (NOGAPS). Asia-Pacific ASA has a dedicated environmental data server (EDS) called COASTMAP EDS. This server downloads, catalogues, stores and disseminates environmental and metocean forecast and hindcast datasets for use with ASA modelling software (such as SARMAP, OILMAP and CHEMMAP). Table 25.1 below outlines the specifics of each of the metocean forecast models operationally available for the Australian region on the EDS. The availability of several different forecast models provides an excellent opportunity to compare the various model outcomes of a particular drift scenario. If the outcomes are similar, then there is consensus between the datasets, and the modeller can be confident that the forecast is as accurate as possible. If there is a discrepancy between the forecasts, then there is no consensus, which suggests that the forecast may not be as reliable. In such a situation it is necessary for the modeller to further revise the input data based on field observations to ascertain which may be the most reliable forecast. Operational consensus forecasting has been used successfully in meteorology; however, its application in oceanographic forecasting has been minimal thus far. This is changing, however, and the adoption of consensus forecasting in the oceanographic community is increasing. Several case studies of the operational use of consensus forecasting are outlined in the following sections. The first
Table 25.1 Operational metocean forecast models

Model     Type      Temporal Resolution (h)  Spatial Resolution  Spatial Extent                       Update Frequency  Forecast Length (h)
NCOM      Currents  6                        1/8°                Global                               daily             72
BLUElink  Currents  24                       1/10° ≤ 2°          Effectively (90°E–180°E, 75°S–16°N)  2 × weekly        144
GFS       Winds     6                        1/2°                Global                               4 × daily         180
NOGAPS    Winds     6                        1/2°                Global                               4 × daily         144
Fig. 25.1 Map showing the location of the incidents: Pacific Adventurer oil and chemical spills, Montara well head blowout, and MSC Lugano towing
relates to the Pacific Adventurer oil spill which occurred off Moreton Island, Queensland; the second was the Montara oil well blowout in the Timor Sea, and the final was the towing of the MSC Lugano off Esperance in Western Australia (see Fig. 25.1).
25.2 Review of Meteorological and Ocean Forecast Models

25.2.1 BLUElink Ocean Model

The BLUElink project became operational in 2007 through the collaboration between the Australian Bureau of Meteorology (BoM), the Royal Australian Navy (RAN) and the Commonwealth Scientific and Industrial Research Organisation (CSIRO 2010). Operationally, it is now under the management of the Australian Bureau of Meteorology. There are several components to the BLUElink system, including operational forecasts, reanalysis and data assimilation. The operational forecasts from BLUElink used in this study were derived from the Ocean Model Analysis and Prediction System (OMAPS-fc). This system uses the Ocean Forecasting Australia Model (OFAM), which is based on the Modular Ocean Model version 4 (MOM4) (Andreu-Burello et al. 2010). The 3D model has a resolution of 1/10° (~10 km) in the Australian region (90°E–180°E, 75°S–16°N), with up to 2° resolution elsewhere around the globe to reduce computational costs. There are 47 vertical layers, with the topmost 20 layers being 10 m thick (Australian Bureau of Meteorology 2007). Data assimilation is controlled by the BLUElink Ocean Data Assimilation System (BODAS), which is an ensemble optimal interpolation (EnOI) scheme that assimilates Sea Surface Temperature (SST), Sea Surface Height (SSH) and temperature and salinity profiles. Atmospheric fluxes are currently provided by the BoM Global Atmospheric Prediction System (GASP) (Brassington et al. 2009). The BLUElink system provides up to 144 hour forecasts of the sea surface current velocities, at 24 hour intervals.
25.2.2 NCOM Ocean Model

The Navy Coastal Ocean Model (NCOM) is a 3D global ocean current forecast model which was developed by the Naval Research Laboratory (NRL) and was transitioned to be run operationally by the Naval Oceanographic Office (NAVO). The forecast model is based on the Princeton Ocean Model (POM) and has global coverage with a horizontal resolution of 1/8°. Vertical resolution is controlled by a σ–z coordinate system with 19 σ-coordinate layers in the upper 137 m (topmost surface layer thickness of 1 m) and 21 z-coordinate layers from 137 m to 5,500 m. Data assimilation is controlled by the Modular Ocean Data Assimilation System (MODAS), which assimilates temperature, salinity and SSH. Atmospheric forcing is provided by the Navy Operational Global Atmospheric Prediction System (NOGAPS) atmospheric fluxes (Barron et al. 2007). NCOM provides a 72 hour forecast of the sea surface current velocities, at 6 hour intervals.
25.2.3 GFS Atmospheric Model

The Global Forecast System (GFS) is a global spectral numerical model operationally run by the US National Oceanic and Atmospheric Administration (NOAA). The T254 version (used in this study) provides global coverage with a horizontal resolution of 1/2° and 64 unequally spaced vertical layers. GFS model output consists of 10 m U and V wind velocities with a forecast length of up to 180 hours and a temporal resolution of 6 hours (Environmental Modelling Centre 2003).
25.2.4 NOGAPS Atmospheric Model

The Navy Operational Global Atmospheric Prediction System (NOGAPS) is a spectral general circulation model (GCM) which has been under constant development at the NRL over the last 20 years. It is the principal source of atmospheric forcing for the US Navy ocean models (e.g. NCOM) and for short term numerical weather prediction (NWP). NOGAPS uses a one-way coupling system to capture ocean–atmosphere interaction. NOGAPS has global coverage, with a horizontal resolution of ~1/2°. The forecast length of the NOGAPS product is 144 hours, with a temporal resolution of twelve hours (at 00 and 12 UTC) and updates at 06 and 18 UTC to enable background forecasts, which are used in the analysis. Outputs from the model include momentum flux, both latent and sensible heat fluxes, precipitation, solar and long wave radiation and surface pressure, as well as 10 metre U and V wind velocities (Rosmond 1992; Rosmond et al. 2002).
25.3 Case Studies of the Operational Use of Meteorological and Ocean Forecast Datasets

Three case studies involving the operational use of metocean datasets were investigated. Two were in response to pollutant spills: the first was the Montara well head blowout in the Timor Sea, and the second was the Pacific Adventurer oil and chemical spills off Moreton Island in Queensland. The third case study presented herein was the towing support of the disabled MSC Lugano off Esperance in Western Australia. The two oil spill studies demonstrate how consensus modelling has been used operationally, and show when consensus was reached and when it was not.
25.3.1 Case Study 1—Pacific Adventurer

In the early hours of the morning on the 11th of March 2009 the Pacific Adventurer encountered severe weather conditions (as a result of nearby Tropical Cyclone Hamish) whilst en route from Newcastle to Indonesia. As a result of the severe weather conditions, 31 shipping containers (containing a total of approximately 600 tonnes of ammonium nitrate) were lost overboard. Several of the containers ruptured the ship's fuel tanks, which resulted in the loss of 270 tonnes of heavy fuel oil to the marine environment (Asia-Pacific ASA 2009). At the request of the Australian Maritime Safety Authority (AMSA), Asia-Pacific ASA provided modelling support to the response teams to determine the likely fates and possible shoreline strikes of the heavy fuel oil (HFO) and the dissolved concentrations of the ammonium nitrate in the water column.
25.3.1.1 Oil Spill Forecast

The panels in Fig. 25.2 show the various model runs completed using OILMAP to determine the likely trajectory of the HFO. Environmental forecast data was sourced from the COASTMAP EDS: specifically, NCOM and BLUElink forecast ocean currents aggregated with tidal currents provided the current forcing, whilst the GFS and NOGAPS wind forecast models provided wind forcing. To account for variability in the inputs (such as wind gusts), uncertainty particles are included in the model runs. These uncertainty particles are subjected to winds and water currents that have been varied by up to ±30% in strength and ±30° in direction. The black dots represent the likely surface oil locations, the white dots represent the water surface swept by the oil, the light grey represents the uncertainty particles used by the model, and the red indicates the full extent of the shoreline oil stranding, as reported by Maritime Safety Queensland.
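The idea behind these uncertainty particles can be sketched as a simple Monte Carlo perturbation of the forcing applied to each particle. The perturbation bounds below mirror the ±30% and ±30° quoted above, while the forcing values, the wind-drift factor and the time step are illustrative assumptions rather than OILMAP settings.

```python
import math
import random

def perturb(u, v, frac=0.30, angle_deg=30.0):
    """Randomly scale a velocity vector by up to +/-frac and rotate it by up to +/-angle_deg."""
    scale = 1.0 + random.uniform(-frac, frac)
    theta = math.radians(random.uniform(-angle_deg, angle_deg))
    ur = scale * (u * math.cos(theta) - v * math.sin(theta))
    vr = scale * (u * math.sin(theta) + v * math.cos(theta))
    return ur, vr

def step_particle(x, y, current, wind, dt_s, wind_factor=0.03):
    """Advance one uncertainty particle by one time step using perturbed forcing."""
    uc, vc = perturb(*current)
    uw, vw = perturb(*wind)
    u = uc + wind_factor * uw          # assumed wind-drift factor, not an OILMAP value
    v = vc + wind_factor * vw
    return x + u * dt_s, y + v * dt_s

# Illustrative forcing: 0.4 m/s northward current, ~8 m/s south-easterly wind; 1 h step.
positions = [step_particle(0.0, 0.0, (0.0, 0.4), (-5.7, 5.7), 3600.0) for _ in range(100)]
print(min(p[0] for p in positions), max(p[0] for p in positions))  # east-west spread of the cloud
```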
Fig. 25.2 The four different model runs completed when forecasting the Pacific Adventurer spill. Top BLUElink plus Tides, Bottom NCOM plus Tides, Left GFS winds, Right NOGAPS winds
As shown, there is a general consensus between the model forecasts. All four model forecasts show that the shorelines on the northern end of Moreton Island and the beaches near Kawana will be impacted, with the possibility of shoreline impacts to the beaches both north and south of the Kawana Beach region. The best correlation between the model-predicted shoreline impacts and the observed shoreline impacts was attained by using NCOM predicted currents aggregated with tidal currents, and the GFS forecast winds (bottom left panel of Fig. 25.2).

25.3.1.2 Chemical Spill Forecast

The simulation of a mass release of the entire contents of all overboard containers was completed using the CHEMMAP software. This was indicative of a worst case scenario in which all 31 of the lost containers would rupture, expelling ammonium nitrate over a period of 4 hours after hitting the seabed. NCOM plus tides and GFS winds were used as the forcing data for the CHEMMAP model run. The CHEMMAP system predicted that a release of 600 tonnes of ammonium nitrate would quickly dissolve in the water column. The results are shown in Fig. 25.3, which describes the re-projected location of the reported incident and the projected path of the simulated ammonium nitrate spill over 96 hours. The key indicates the dissolved concentration of the chemical in the water column in milligrams per cubic metre, from the surface to depth, divided into five layers. The concentrations of ammonium nitrate within the water column fell to 1 mg/L (1,000 mg/m³) within 4 days following the event. Due to the near-seabed release, dissolved concentrations remain near the bottom, well away from the surface where they might enter Moreton Bay.
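The unit conversion quoted above follows directly from the definition of the litre:

```latex
1\ \mathrm{mg\,L^{-1}}
  = \frac{1\ \mathrm{mg}}{10^{-3}\ \mathrm{m^{3}}}
  = 10^{3}\ \mathrm{mg\,m^{-3}}
  = 1\ \mathrm{g\,m^{-3}}.
```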
Fig. 25.3 Pacific Adventurer chemical spill showing concentration and location of dissolved ammonium nitrate 96 hours after release
25.3.2 Case Study 2—Montara Well Head Blowout

During the morning of the 21st of August 2009, well control at the Montara well head was lost. The Montara well head is located approximately 680 km west of Darwin, off the Kimberley coast of Western Australia. An estimated 400 barrels per day of crude oil was being discharged into the sea. The leak continued for 74 days, discharging a total of 30,000 barrels, until the well was successfully “killed” on the 3rd of November 2009 (PTTEP Australasia 2009). Asia-Pacific ASA provided modelling support throughout this incident. At the beginning there was no consensus between the forecast models, with a different direction of travel predicted by the NCOM plus tidal currents, the BLUElink plus tidal currents, and the GSLA plus tidal currents forecast data. The GSLA currents are generated from mapping Gridded Sea Level Anomalies, which provide geostrophic flow estimates. This approach gives a good representation of the general circulation of the ocean; however, as the produced current field uses measurements of sea level anomalies that can be up to several days old, it essentially produces a nowcast of the sea state, rather than a forecast. This can work
well for large scale circulation, which takes time to set up and has time scales of the order of weeks to months; however, GSLA currents are not able to reproduce meso- to small-scale circulation, which has time scales of hours to days (CSIRO 2010). GSLA currents do, however, provide a good reference against which to validate forecast model (NCOM and BLUElink) performance in recreating the oceanic circulation. Two surface drifters were deployed to provide observed estimates of the currents. These revealed that the currents were tidally governed (as shown by the oscillations in the buoy trajectories). This indicates that for successful prediction of drift patterns of objects or oil in this region, the addition of the tidal component to the surface currents is vitally important. As the incident continued, the forecast datasets proved to better resolve the surface currents in the region, as judged against several other drifter tracks, against the locations of predicted and observed surface oil, and by directly comparing the NCOM and BLUElink forecast current vectors with hindcast currents. Of the 13 weeks that the oil was tracked, approximately 10 weeks returned very good current forecast data. Each dataset (NCOM, BLUElink and GSLA) was tested against the overflight and satellite imagery to ensure the best forecasts were produced. Table 25.2 below shows the periods throughout the 92 days of the incident (from 21st August 2009 until 23rd November 2009) during which each dataset was found to produce the most accurate forecast of oil movement. Forecast bulletins were produced routinely throughout the Montara event by APASA to outline the expected operational conditions and likely whereabouts of oil. Refer to Appendix A for the reproduction of one of these forecast bulletins (for 29th October 2009).
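A simple way to quantify the kind of consensus check described in these case studies is to compare the predicted slick (or drift) positions from each forcing combination and flag agreement when their maximum separation falls below a chosen threshold; the centroid positions, the threshold and the distance approximation below are illustrative assumptions only.

```python
import itertools
import math

def separation_km(p, q):
    """Approximate great-circle distance (km) between two (lat, lon) points (haversine)."""
    r = 6371.0
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def consensus(forecast_positions, threshold_km=20.0):
    """True if every pair of forecast centroids lies within the threshold distance."""
    pairs = itertools.combinations(forecast_positions.values(), 2)
    return all(separation_km(p, q) <= threshold_km for p, q in pairs)

# Hypothetical 48 h slick-centroid forecasts from three forcing combinations.
centroids = {"NCOM+tides": (-12.60, 124.50),
             "BLUElink+tides": (-12.68, 124.62),
             "GSLA+tides": (-12.95, 125.10)}
print(consensus(centroids))  # False here: the GSLA-forced run is the outlier
```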
25.3.3 Case Study 3—MSC Lugano Stranding

The MSC Lugano is a 240 m container ship which was en route from Adelaide in South Australia to Fremantle in Western Australia. On the 31st of March 2008 it was disabled by an engine room fire and, as a result, was in jeopardy of grounding off Esperance, Western Australia. Three tugs from nearby Esperance were called in to provide assistance, whilst another larger and better equipped tug was en route from Fremantle. The tugs took

Table 25.2 Metocean forecast products used during the Montara well head blowout for oil spill forecast modelling

Start        End          Days   Wind   Current
21/08/2009   30/08/2009   10     GFS    GSLA+Tides
30/08/2009   27/10/2009   57     GFS    BLUElink+Tides
27/10/2009   06/11/2009   10     GFS    NCOM+Tides
06/11/2009   11/11/2009   5      GFS    GSLA+Tides
11/11/2009   23/11/2009   12     GFS    BLUElink+Tides
Fig. 25.4 Snapshot of surface currents off Esperance, Western Australia
the MSC Lugano in tow; however, they were not designed or equipped for deep ocean towing and ran into difficulty off Pt D'Entrecasteaux whilst on a passage northward to Fremantle. The vessels were not making any headway due to very high surface current speeds and were at risk of losing the tow (Australian Transport Safety Bureau 2009). The Western Australian authorities advised the vessels to proceed further offshore into deeper water in an attempt to avoid the high current speeds and coastal hazards. However, consensus ocean current forecast data (NCOM and BLUElink) indicated stronger currents offshore than inshore. Upon further inspection of the forecast currents it was decided that the tow should remain closer to the shore, in the more favourable current conditions. The tow was successfully completed on the 13th of April 2008. Figure 25.4 shows a snapshot of the surface currents in the region at the time of the towing. Note the stronger southerly currents offshore of Cape Leeuwin, compared to the currents closer inshore.
25.4 Conclusions

The growing view is that oceanographers should follow the best-practice methodology used by weather forecasters to take full advantage of the multiple wind and ocean forecasting datasets available. This is made particularly evident by the three case studies investigated above. Weather forecasters use all available datasets and assess each of them to develop a consensus of opinion from the various weather forecast models on what might occur. With multiple ocean forecasting datasets
available now, the same approach can be applied. For example, oil spill models rely on good forecasts of both currents and weather to accurately predict the oil's future drift and potential impact zones. Both winds and currents are used as input data to ASA's OILMAP and CHEMMAP spill models, which have been able to successfully predict the movement of oil or chemicals over time when the forecast winds and currents have been accurate. The latest approach is to run the same spill scenario with different datasets. When consensus between forecast models is reached, the outcome gives a higher level of confidence in the spill predictions. If different forecast datasets result in disparate trajectories and outcomes, then there are multiple viable outcomes and a low level of confidence in any one prediction. The spill forecasts can then be issued with a confidence indicator, based on the degree of consensus obtained from the multiple analyses performed. Field observations such as aircraft overflights, drifting buoys, or satellite-derived observations can all be used to help estimate errors in the forecast data. One reason for not attaining consensus between forecast models is the location or positioning of mesoscale eddies. Mesoscale eddies have spatial extents in the order of tens of kilometres, whereas large scale eddies tend to have a spatial extent of greater than 100 km. As the two aforementioned global current forecast models (NCOM and BLUElink) have spatial resolutions of approximately 10 km, they are essentially semi-mesoscale eddy resolving models. To adequately resolve mesoscale eddies, a resolution in the order of 5–6 km at a minimum is required. Problems arise with semi-mesoscale eddy resolving models when eddies are misplaced or even absent completely.

Acknowledgements This research was supported under the Australian Research Council's Linkage Projects funding scheme LP0991159.
Appendix

Spill Forecast Bulletin for Montara Incident Issued Midday 29-October-2009 for the Australian Maritime Safety Authority

Overflight and satellite observations collected from the 24th–28th October 2009 have been used to update oil, oil patch and wax positions within the AMSA OILMAP Oil Spill Trajectory Model (OSTM). The recent satellite observations indicated that the slick consisted of patches of oil/wax lying east and southeast of Montara, extending to the south as patches (refer to Fig. 25.5). The winds have remained favourable over recent days, which has seen the edge of the slick move parallel to the coast north-eastward rather than towards the coast. Using these observations, the latest wind and ocean forecast data has been incorporated to provide “search areas for oil and wax” for midday (Darwin time) on the 30th and 31st of October 2009, as shown in Figs. 25.6 and 25.7 below. Please note that the brown dots in the figures
Fig. 25.5 AQUA satellite observation at 0500 UTC, 28th October 2009. The darker colour within the red circle is indicative of surface oil slick; the white colour within the yellow circle indicates cloud
Fig. 25.6 Forecast of surface oil (as represented by the orange spots) at 12 pm on the 30th October 2009. The surface currents are shown by the coloured arrows and the wind conditions are shown by the wind barbs
Fig. 25.7 Forecast of surface oil (as represented by the orange spots) at 12 pm on the 31st October 2009. The surface currents are shown by the coloured arrows and the wind conditions are shown by the wind barbs
below indicate “search areas for oil and sheen”. The density of the brown dots indicates the likelihood of finding oil or wax in the various locations around the Montara well site. Due to the containment and dispersant operations, far field predictions are typically for defining search areas for scattered weathered oil and wax patches which may no longer be visible on the water's surface; hence this forecast is potentially a 'worst-case' depiction of the spill at this time. The wind conditions for Montara are expected to be north-westerly winds (4–12 kts) for 30th October 2009, weakening from the north for 31st October 2009. At the Montara well site, tidal oscillations are expected to be weak as we move through the neap tidal phase in the Timor Sea. The slick will generally drift southward over the forecast period. Fresh oil flows at Montara are predicted to be as follows:
• 30th Oct 2009: weak SSE flow at 9 am; weak SSW flow at 3 pm (4–12 knot NW winds);
• 31st Oct 2009: weak SSW flow at 9 am; weak NW flow at 3 pm (weak northerly winds).
To the far north in deep waters (the Timor Trench), the Indonesian Throughflow current continues to flow strongly WSW. This strong flow is now spinning anticlockwise current eddies along the northern shelf-break, which are moving position, allowing deepwater flows to spill over the shelf and drive the slick around Montara generally southward over the forecast period.
At Ashmore, Hibernia and Cartier Reefs, the forecast indicates that previously reported small patches of weathered wax will remain in the vicinity of Ashmore and Cartier Reefs. These patches were reported with dimensions of 50 × 50 m or less. For waters between the West Atlas rig and the Kimberley coastline, the forecast indicates that the oil patches should drift slowly southward. The south-easternmost position of this part of the slick (last described as very scattered small patches of wax) will remain north of Holothuria Banks. These patches may no longer be visible on the water's surface, and are not expected to reach any shorelines during the forecast period (APASA forecast bulletin 2009).
References

Andreu-Burello I, Brassington G, Oke P, Beggs H (2010) Including a new data stream in the BLUElink Ocean Data Assimilation System. Aust Meteorol Oceanogr J 59:77–86
APASA forecast bulletin (2009, 29 October) Report provided at the request of the Australian Maritime Safety Authority during the Montara response
Asia-Pacific ASA (2009) Independent assessment of the shoreline cleanup operations for the Pacific Adventurer oil spill. Asia-Pacific ASA report to Maritime Safety Queensland
Australian Bureau of Meteorology (2007) BLUElink> Ocean model analysis and prediction system version 1.0 (OceanMAPSv1.0) technical specification. Canberra. Available from: http://bom.gov.au/oceanography/forecasts/technical_specification.pdf. Accessed 5 March 2010
Australian Transport Safety Bureau (2009) Independent investigation into the engine room fire on board the Marshall Islands registered container ship MSC Lugano off Esperance Western Australia. Canberra. http://www.atsb.gov.au/media/51269/mo2008004.pdf. Accessed 19 March 2009
Barron CN, Birol Kara A, Rhodes RC, Rowley C, Smedstad LF (2007) Validation test report for the 1/8° global navy coastal ocean model nowcast/forecast system. Naval Research Laboratory, Stennis Space Centre
Brassington GB, Pugh T, Oke PR, Freeman J, Andreau-Burrel I, Huang X, Warren G (2009) Operational ocean data assimilation for the BLUElink Ocean Forecasting System. Fifth WMO Symposium on the Assimilation of observations for meteorology, oceanography and hydrology, Melbourne, 5–9 Oct 2009
CSIRO (2010) Ocean surface currents and temperature news. http://www.cmar.csiro.au/remotesensing/oceancurrents/index.htm. Accessed 22 March 2010
Environmental Modelling Centre (2003) The GFS Atmospheric Model 28. http://www.emc.ncep.noaa.gov/gmb/moorthi/gam.html. Accessed 5 March 2010
PTTEP Australasia (2009) Frequently asked questions Montara incident. West Perth. http://www.au.pttep.com/faq.asp#Q3. Accessed 2 Jan 2010
Rosmond TE (1992) The design and testing of the Navy Operational Global Atmospheric Prediction System. Weather Forecast 7:262–272
Rosmond TE, Tiexiera J, Peng M, Hogan T (2002) Navy operational global atmospheric prediction system (NOGAPS): forcing for ocean models. Oceanography 15(1):99–108
Chapter 26
Marine Energy: Resources, Technologies, Research and Policies

John Huckerby
Abstract Marine energy technologies have enjoyed a resurgence of development since the late 1990s and there are now widespread international activities to develop marine energy technologies and project deployments, principally in mid-latitude countries, where wave and tidal stream resources are more energetic. Substantial new deployments of tidal barrages, essentially comprising hydroelectric technologies driven by seawater, are under evaluation or construction in a number of countries. Technologies for extraction of heat energy from seawater by Ocean Thermal Energy Conversion (OTEC) and submarine geothermal energy are being developed more slowly, as are technologies for harnessing energy from salinity gradients and production of bio-fuels from marine biomass.
26.1 Introduction

Marine energy technologies have enjoyed a resurgence of development since the late 1990s and there are now widespread international activities to develop marine energy technologies and project deployments, principally in mid-latitude countries, where wave and tidal stream resources are more energetic. Substantial new deployments of tidal barrages, essentially comprising hydroelectric technologies driven by seawater, are under evaluation or construction in a number of countries. Technologies for extraction of heat energy from seawater by Ocean Thermal Energy Conversion (OTEC) and submarine geothermal energy are being developed more slowly, as are technologies for harnessing energy from salinity gradients and production of bio-fuels from marine biomass. Early deployments have so far reported few environmental issues but such concerns must be comprehensively addressed. Extensive research and monitoring of deployments continues to ensure that impacts are avoided or minimized.
J. Huckerby ()
Power Projects Limited, Panama Street, PO Box 25456, Wellington 6146, New Zealand
e-mail: [email protected]
Governments are key investors in early-stage technology developments, from research and development (R & D) and proof-of-concept developments to pre-commercial prototypes. They also set the regulatory framework in which devices can be deployed and developed. This framework can contain a range of incentives designed to support maturing technologies through to commercial development. Marine energy developments are taking place in over 30 countries and the initial focus in NW European countries has broadened to become truly international. The most advanced developments in technologies, collaborative research and policies continue to occur in NW Europe. Nonetheless, at the time of writing, less than 300 MW of installed ocean energy generating capacity is operational (enough electricity to supply approximately 80,000 households annually). This paper begins by outlining the forms of ocean energy and the distribution of ocean energy resources. The development of ocean energy technologies is intimately linked to these resources and the operation of these technologies will have an impact on the surrounding environment. Space and resources for ocean energy will eventually need to be allocated in competition with other uses of marine space and resources. To compete with existing energy generation technologies, ocean energy technologies—like other new generation technologies—will need a favourable political framework, through R & D grants, capital support, tariffs and allocation regimes, to promote and accelerate their contribution to global energy supply portfolios.
26.2 Forms of Ocean Energy

For the purposes of this paper, ocean energy resources are defined as those energy resources that use seawater either as the motive power or for its chemical or heat potential. There are at least six principal forms of ocean energy that could be harnessed to produce electricity or other products. These forms are:

1. Wave Energy
2. Tidal Energy
   a. Tidal Rise and Fall
   b. Tidal Streams
3. Ocean Current Energy
4. Ocean Thermal Energy
   a. Ocean Thermal Energy Conversion (OTEC)
   b. Submarine Geothermal Energy
5. Salinity Gradient
6. Marine Biomass

Some authors consider offshore wind energy as a form of ocean energy but it is derived from the movement of winds, rather than the kinetic movement of seawater. Offshore wind energy is thus not really a form of ocean energy and is not considered further here.
These six forms of ocean energy have been under investigation for over 100 years and, with the exception of adaptation of hydro-electric dam technology for tidal barrages, are still relatively underdeveloped. Nonetheless, these disparate forms of energy are globally distributed and may offer significant opportunities to supplement or displace existing generation sources, particularly as costs for fossil fuel energy sources rise. The products of these forms of energy can be used for a number of different purposes:

1. Generation of electricity (AC and DC)
2. Production of pressurized and potable water
3. Production of heat
4. Production of hydrogen
5. Production of bio-fuels

A number of ocean energy technologies are being developed to produce potable water, either directly or via generation of electricity to drive desalination plants (Jalihal and Kathiroli 2009). Ocean Thermal Energy Conversion (OTEC) is also being developed for use in seawater air conditioning, 'district cooling' and seawater enrichment of onshore mariculture operations (Nihous 2009).
26.3 Ocean Energy Resources

Potential energy resources available from the ocean significantly exceed worldwide demand for energy but they are not presently accessible, either technically or as economically competitive alternatives to current low cost energy sources—coal, oil, gas and geothermal energy. Increasing the contribution of ocean energy to meet growing international energy demand will require substantial investment in research and development, demonstrations, deployments and diffusion of commercial technologies. Nonetheless, all forms of ocean energy are emissions-free (barring construction, deployment and decommissioning activities) and will become cheaper and more attractive alternatives to the presently predominant forms of fossil fuel as emissions trading, carbon taxes and the full cost of externalities are priced into the cost of fossil fuel sources. The limitations on the extent to which ocean energy will be developed are factors such as unit cost of generated electricity (compared with other renewable technologies), reliability and operations and maintenance costs.
26.3.1 Wave Energy

Wave energy is present across the globe and can be harnessed as a combination of kinetic and potential energy of water particles. Waves are created by the action of winds passing over the surface of the ocean. Wave heights (and thus energy) are greatest in the mid- to high-latitude storm belts, where winds such as the 'Roaring Forties' are strong and blow consistently in the same direction over long distances (Fig. 26.1).
Fig. 26.1 Global distribution of annual mean wave power. (Cornett 2008)
In addition to the geographic variability indicated by Fig. 26.1, there are also seasonal and shorter-term variabilities in wave regimes, brought about by weather systems. In the higher latitude areas, background wave regimes may be sufficient to permit almost continuous generation. Extreme wave conditions, brought about by storms, may provide greater energy than average conditions but this energy may not be extractable if wave energy devices have to go into 'survival' mode. Nonetheless, waves are essentially integrated wind energy and are thus more predictable than winds. Wave energy farms may therefore produce more forecastable energy, enabling project developers to secure higher prices for their produced power. There are a number of computational wind wave models, which are used for wave forecasting (Greenslade and Tolman 2010).
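For readers who want to relate the mapped resource in Fig. 26.1 to routinely reported sea-state parameters, the deep-water energy flux per metre of wave crest can be estimated from significant wave height and energy period. The sketch below is illustrative only and is not taken from the chapter; the 3 m, 10 s sea state is an assumed example.

```python
import math

RHO = 1025.0   # seawater density, kg/m^3 (approximate)
G = 9.81       # gravitational acceleration, m/s^2

def deep_water_wave_power(hs_m, te_s):
    """Energy flux per metre of wave crest (W/m) for an irregular deep-water sea,
    P = rho * g^2 / (64 * pi) * Hs^2 * Te."""
    return RHO * G**2 / (64.0 * math.pi) * hs_m**2 * te_s

# Assumed example: a 3 m significant wave height with a 10 s energy period
print(f"{deep_water_wave_power(3.0, 10.0) / 1000.0:.1f} kW per metre of crest")  # ~44 kW/m
```

The quadratic dependence on significant wave height is why the storm belts in Fig. 26.1 dominate the global resource.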
26.3.2 Tidal Energy

Tidal energy can be divided into two distinct forms:

1. Tidal rise and fall
2. Tidal streams or currents

26.3.2.1 Tidal Rise and Fall

Tidal rise and fall energy is potential energy derived from height changes in sea level, caused by the gravitational attraction of the moon, the sun and, to a lesser extent, other astronomical bodies on oceanic water bodies. The effects of these tides are complex and most major oceans and seas have internal tidal systems, called amphidromic systems, which rotate generally anticlockwise in the northern hemisphere and clockwise in the southern hemisphere (Fig. 26.2). There are a number of different tidal constituents, which oscillate at different frequencies and propagate as Kelvin waves, the largest being the M2 constituent, due to the moon. Tidal range energy can best be harnessed nearshore, particularly in estuaries, where tidal rise and fall can be amplified as coastal waters shallow towards the coast.

26.3.2.2 Tidal Stream Energy

The movement of ocean water volumes, caused by the changing tides, creates tidal stream energy. Kinetic energy can be harnessed, usually near shore and particularly where there are constrictions, such as straits, islands and passes. Tidal stream energy results from local regular flows (diurnal at roughly 24 h 50 min periods and semi-diurnal at 12 h 25 min periods) caused by the tidal cycle. Spring tides occur when the gravitational attraction of the sun and moon act in the same
Fig. 26.2 Global distribution of tidal amplitude, i.e., M2 tidal constituent. (Ray 2007)
direction, whilst neap tides result when the attractions of the sun and moon act at right angles to each other. These tides are forecastable over the 18.6-year lunar nodal cycle. These tides cause kinetic movements, which can be accelerated near coasts where there is constraining topography, such as straits between islands (Soerensen and Weinstein 2008).
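Because the tide is a sum of harmonic constituents, the spring-neap behaviour described above can be reproduced with a few lines of code. The sketch below uses only the M2 and S2 constituents with arbitrary illustrative amplitudes; it is not from the chapter, and operational tide predictions use dozens of constituents fitted to local observations.

```python
import math

# Two principal semi-diurnal constituents. The periods are standard values;
# the amplitudes (and zero phases) are arbitrary illustrative choices.
CONSTITUENTS = [
    {"amp": 1.0, "period_h": 12.4206, "phase": 0.0},   # M2 (lunar)
    {"amp": 0.4, "period_h": 12.0000, "phase": 0.0},   # S2 (solar)
]

def tidal_height(t_hours):
    """Sea-level anomaly as a sum of harmonic constituents:
    h(t) = sum_i A_i * cos(2*pi*t/T_i - phi_i)."""
    return sum(c["amp"] * math.cos(2.0 * math.pi * t_hours / c["period_h"] - c["phase"])
               for c in CONSTITUENTS)

def daily_range(day):
    """Difference between the highest and lowest water level during one day."""
    samples = [tidal_height(day * 24.0 + 0.25 * i) for i in range(96)]
    return max(samples) - min(samples)

# The beat between M2 and S2 reproduces the roughly fortnightly spring-neap cycle
for day in range(0, 16, 2):
    print(f"day {day:2d}: tidal range = {daily_range(day):.2f} m")
```

The printed range swings between roughly 2.8 m near springs and 1.2 m near neaps, repeating about every 14.8 days, which is why tidal stream output is so predictable.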
26.3.3 Ocean Currents

Open ocean current systems are driven by the latitudinal distribution of the winds and have a clockwise circulation in the northern hemisphere and a counter-clockwise circulation in the southern hemisphere. Such wind-driven currents operate at shallow depths (<800 m). Ocean surface currents, such as the Gulf Stream, are more constant and continuous flows than tidal currents. Although surface ocean currents are subject to seasonal variations and currents do move geographically, these currents are generally considered to provide potentially stable, long-term power production, i.e., baseload electricity. Deeper ocean current systems result from thermal and salinity gradients, which produce slow-moving and deeper currents. Such currents are part of the thermohaline circulation, a global system of density-driven currents that transfers warm water from equatorial regions to the poles and returns cold water from the poles to the equator (Fig. 26.3). Operational ocean forecast systems can be used to predict the distribution and variability of these current systems (Dombrowsky et al. 2009).
Fig. 26.3 Major surface ocean currents. (NOAA 2008)
26.3.4 Ocean Thermal Energy

Ocean thermal energy is the heat energy in seawater. Differences in heat energy between different bodies of water can occur through temperature profiles in the water column or through heat introduced from submarine volcanic (i.e., geothermal) activity. Both sources are being investigated for future development.

26.3.4.1 Ocean Thermal Energy Conversion (OTEC)

OTEC is a process or technology that uses temperature differences between surface seawater and deep (>1,000 m) seawater to drive heat exchange processes (Nihous 2009). In most oceans or seas there is a marked drop in temperature between the surface and deeper water. This difference is most marked in tropical regions (between the Tropics of Capricorn and Cancer), where surface temperatures can be up to 20°C higher than underlying waters. Ocean thermal anomalies occur on a seasonal basis (Fig. 26.4) but there is increasing evidence of decadal anomalies in most oceans (Alves et al. 2010). Whilst the seasonal variations are reasonably forecastable, longer-term variability is presently not fully understood, so forecasting is more uncertain.

26.3.4.2 Submarine Geothermal Energy

In the 1980s 'black smokers' were discovered at mid-ocean ridges in the Atlantic Ocean. Black smokers are submarine geothermal systems, circulating seawater through hot, fractured volcanic rock, which is being formed and exposed at these mid-ocean ridges, where the earth's ocean floors are expanding (Nihous 2010; also
Fig. 26.4 Worldwide average ocean temperature differences between 20 and 1,000 m water depths (colour palette ranges from 15°C, mauve, to 25°C, red)
Fig. 26.5 Volcanic activity at mid-ocean spreading ridges. (Gaba 2009)
see Alcocer and Hiriart 2008). The circulated seawater emerges from the volcanic rocks carrying a range of minerals, which may include gold, silver, copper, lead, zinc and other precious and rare metals. The fluids also contain substantial quantities of heat, often reaching the seafloor at temperatures in excess of 350°C. It is this heat energy and the unusual mix of minerals that lead to the spectacular and unusual faunal assemblages that are common at black smoker sites. Since the first discoveries, most mid-ocean ridges have been found to contain black smoker sites, some at remarkably shallow water depths. Some mid-ocean ridges come close to shore, as the Tonga-Kermadec Arc does north of the North Island of New Zealand, or the spreading ridge at the northern edge of the Gulf of California (Fig. 26.5). Like onshore geothermal energy, submarine geothermal energy should be forecastable and produce baseload electricity. Although individual submarine geothermal vents are ephemeral, regional production of geothermal fluids is usually predictable and forecastable.
26.3.5 Salinity Gradients

Seawater is approximately 200 times more saline than fresh river water, which is derived from rain, snowmelt or groundwater and is delivered to the coast by major rivers. Global salinity differences arise from submarine and surface current movements (Fig. 26.6). The relatively high level of salinity in seawater thus establishes a pressure potential with fresh river water, which can be used to generate electricity or to derive fresh (drinking) water from the seawater. This 'osmotic' pressure differential—equivalent to a hydraulic head of over 120 m—can be harnessed and used to drive a conventional Pelton wheel turbine to generate electricity. It can also be used as a chemical potential to generate electricity directly.
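The hydraulic-head equivalence quoted above follows directly from the hydrostatic relation Δp = ρgh. The back-of-envelope check below is not material from the chapter; the two pressure values are illustrative assumptions (roughly 12 bar is often quoted as a usable PRO operating pressure, while the full seawater/fresh-water osmotic difference is in the mid-20s of bar, as noted later in Sect. 26.4.1.2).

```python
G = 9.81            # gravitational acceleration, m/s^2
RHO = 1000.0        # kg/m^3, fresh water driving the turbine

def pressure_to_head_m(delta_p_bar):
    """Hydraulic head equivalent to a pressure difference, from delta_p = rho * g * h."""
    return delta_p_bar * 1.0e5 / (RHO * G)

# Roughly 12 bar, a usable PRO operating pressure often quoted in the literature
print(f"{pressure_to_head_m(12):.0f} m of head")    # ~122 m, i.e. 'over 120 m'
# Roughly 26 bar, the full seawater/fresh-water osmotic pressure difference
print(f"{pressure_to_head_m(26):.0f} m of head")    # ~265 m
```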
Fig. 26.6 Average global sea surface salinity. (NASA 2009)
Whilst freshwater salt content is unlikely to be seasonally variable, the same is not true for ocean salinity. There is strong evidence of seasonal variability of salinity, which may also be related to decadal variations, such as the El Niño Southern Oscillation (Alves et al. 2010). However, the impact of these seasonal and decadal fluctuations on potential power production may be relatively small. In any event the technologies are not sufficiently developed to confirm whether salinity variation will lead to seasonal variations in energy production.
26.3.6 Marine Biomass

The oceans are the largest source of biomass on earth. Man utilizes relatively little of this biomass, although overfishing of some species has rendered them endangered. Onshore biomass is principally used for the production of biofuels, although there are increasing concerns about the planting of biofuel crops to displace food crops.
26.4 Ocean Energy Technologies

The range of ocean energy technologies is huge and varied for a number of reasons:

1. There are a number of different forms of ocean energy
2. There are many different ways to extract energy from seawater
3. Ocean energy technologies are at an early stage of development and a wide range of experimentation is continuing
4. No dominant technologies have yet emerged
Because there is a wide range of options for energy extraction and no dominant technologies, it is unlikely that ocean energy technologies will converge on a single device type equivalent to the monopole tower, horizontal axis wind turbine generator with an upwind three-bladed rotor that characterizes the majority of wind turbines. Seawater is approximately 830 times denser than air at sea level. Consequently, devices that seek to extract potential or kinetic energy from seawater movements are likely to be substantially smaller and more robust than wind turbines, because the forces exerted by seawater are much greater than the forces exerted by the wind. Presently the only nominally commercial ocean energy technology is the tidal barrage, which is effectively an existing technology—a hydroelectric plant—that utilizes the tidal range in river mouths, estuaries or embayments to generate a hydraulic head during either or both the ebb and flood tides, which can be used to generate electricity. All other ocean energy technologies have, at best, reached the pre-commercial demonstration phase and have yet to become commercial. However, considerable investment and research effort is being expended worldwide and new technologies are approaching commercial deployment, particularly wave and tidal stream technologies.
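The density contrast mentioned above translates directly into power density. The comparison below is an illustration only (the flow speeds are assumed, not taken from the chapter): the kinetic energy flux through a unit cross-section is ½ρv³, so even a moderate tidal stream carries several times the power per square metre of a strong wind.

```python
RHO_SEAWATER = 1025.0   # kg/m^3
RHO_AIR = 1.225         # kg/m^3 at sea level

def kinetic_power_density(rho, v):
    """Kinetic energy flux through a unit cross-section, P/A = 0.5 * rho * v^3 (W/m^2)."""
    return 0.5 * rho * v**3

tidal = kinetic_power_density(RHO_SEAWATER, 2.5)   # an energetic tidal stream (assumed speed)
wind = kinetic_power_density(RHO_AIR, 12.0)        # a strong breeze (assumed speed)
print(f"tidal stream: {tidal/1000:.1f} kW/m^2, wind: {wind/1000:.1f} kW/m^2")
# ~8.0 kW/m^2 versus ~1.1 kW/m^2: a water-current rotor can be far smaller than a
# wind rotor for the same power, but it must withstand much larger forces.
```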
26.4.1 Classification of Ocean Energy Conversion Technologies

There are a number of classification schemes for ocean energy conversion technologies. A primary classification can be made based upon the basic energy resource being harnessed:

1. Potential and kinetic energy in waves and currents
2. Chemical potential of seawater (salinity gradients)
3. Heat potential of seawater (ocean heat and geothermal heat)
4. Biological potential of seawater

26.4.1.1 Wave, Tidal and Ocean Current Technologies

These technologies effectively utilize the potential energy (derived from wave height or tidal height differences) and kinetic energy (derived from water movement). Device technologies have four key features:

1. A stable platform or surface
2. A mobile working surface for the wave or current to work against
3. The mobile working surface must, at least partially, resist the wave or current action
4. The mobile working surface must be connected to some power take-off

Classification of wave energy devices can be made by consideration of the following characteristics: principle of operation, device location and mode of operation (Fig. 26.7; Falcão 2009). The named devices in the figure are specific examples in each class.
Fig. 26.7 Classification of wave energy devices. (Falcão 2009) The classification, with example devices, is as follows:

Oscillating water column (with air turbine)
  Fixed structure, isolated: Pico, LIMPET
  Fixed structure, in breakwater: Sakata, Mutriku
  Floating: Mighty Whale, Ocean Energy, Sperboy, Oceanlinx
Oscillating bodies (with hydraulic motor, hydraulic turbine or linear electrical generator)
  Floating, essentially translation (heave): AquaBuoy, IPS Buoy, FO3, Wavebob, PowerBuoy
  Floating, essentially rotation: Pelamis, PS Frog, SEAREV
  Submerged, essentially translation (heave): AWS
  Submerged, rotation (bottom-hinged): WaveRoller, Oyster
Overtopping (with low-head hydraulic turbine)
  Fixed structure, shoreline, with concentration: TAPCHAN
  Fixed structure, in breakwater, without concentration: SSG
  Floating structure, with concentration: Wave Dragon
There are abundant publications with pictures of wave, tidal and other water current devices, almost all of which are conceptual, with a few undergoing full-scale open-ocean deployments. Rather than duplicate these papers with pictures of devices, the reader is directed to the publications of the Executive Committee of the Ocean Energy Systems Implementing Agreement (OES-IA) and particularly the 2008 Annual Report (Brito-Melo and Bhuyan 2009). Further information on the range of wave, tidal and water current technologies can be found in the "Marine and Hydrokinetic Technology Database" of the United States Department of Energy (US DoE 2008).

26.4.1.2 Chemical Potential of Seawater

Seawater has a higher salinity than all river water debouching into oceans. The opportunity to use this chemical potential to generate electricity was recognized in the nineteenth century but commercial technologies are still some way off. Nonetheless any major river entering the sea offers the potential for future deployment of salinity gradient technologies. There are two ways to extract energy from the salinity differences between river water and seawater:

1. Osmosis, in a process called Pressure Retarded Osmosis (PRO)
2. Reversed Electro-Dialysis (RED)

PRO, sometimes called "osmotic power", exploits the chemical potential (i.e., difference in salt concentration) between fresh water and seawater as pressure. Loeb developed
Fig. 26.8 Operational principles of a PRO power plant. (Skråmestø and Skilhagen 2009)
the concept in the 1970s (Loeb and Norman 1975). Seawater and fresh water are brought together across semi-permeable membranes. The resultant pressure is in the range 24–26 bar, depending on the salt concentration in the seawater (Fig. 26.8). Filtering of both the seawater and fresh water is critical, as impurities easily reduce the efficiency of the membranes. The world's first pilot plant for PRO became operational at Tofte, Oslo Fjord, in SW Norway in October 2009. The plant, built and operated by Statkraft, combines river water and water from the fjord to produce up to 4 kW of electricity. Reverse electro-dialysis is a process that utilizes chemical potential differences between two solutions, in this case seawater and fresh water brought into contact through an alternating series of anion and cation exchange membranes. The chemical potential generates a voltage over each membrane. This concept is being developed in a first prototype by Dutch researchers (Groeman and van den Ende 2007).

26.4.1.3 Heat Potential of Seawater

The heat potential of seawater was recognized in the 1970s and is available in two forms:
1. Ocean Thermal Energy Conversion (OTEC)
2. Submarine Geothermal Energy

OTEC technologies were first developed in the United States in the 1970s but languished as oil prices declined in the 1980s. OTEC takes deep ocean water, which tends to be at a steady temperature of c. 4°C, and combines it—in a heat exchange process—with shallow surface water. The key component of the technology is the 'cold water pipe', usually a large-diameter (>1 m) plastic pipe extending down for around 1 km, up which the deep cold ocean water is brought to the surface. Once at the surface, an open- or closed-cycle heat exchange process extracts heat energy, using a secondary fluid with a low boiling point, such as ammonia, as the exchange fluid, and converts it into mechanical energy (Fig. 26.9). Submarine geothermal energy could potentially be harnessed at those mid-ocean ridges that are close to the surface and close to shore. Proposed technologies would be submarine heat exchange devices that generate electricity on the seabed (Fig. 26.10; Hiriart 2008, personal communication). There are also proposals to produce drinking water on site, utilizing its buoyancy relative to seawater to deliver the drinking water to a surface location.
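The modest efficiency of OTEC follows from the small temperature difference it works with. The sketch below is a rough, illustrative calculation, not a design from the chapter: the 25°C/4°C temperatures, the fraction of the Carnot limit actually achieved and the allowed cold-stream warming are all assumptions. It does, however, show why OTEC plants need such large cold-water pipes.

```python
def carnot_efficiency(t_warm_c, t_cold_c):
    """Upper (Carnot) bound on thermal efficiency between the two reservoirs."""
    return (t_warm_c - t_cold_c) / (t_warm_c + 273.15)

def cold_water_flow(p_net_w, t_warm_c=25.0, t_cold_c=4.0,
                    fraction_of_carnot=0.5, dt_cold=3.0):
    """Very rough deep-water flow (m^3/s) for a net power target, assuming the real
    cycle achieves only `fraction_of_carnot` of the Carnot limit and the cold stream
    is allowed to warm by `dt_cold` kelvin (both assumptions, not design values)."""
    rho, cp = 1025.0, 4000.0                      # approximate seawater properties
    eta = fraction_of_carnot * carnot_efficiency(t_warm_c, t_cold_c)
    q_rejected = p_net_w * (1.0 / eta - 1.0)      # heat dumped into the cold stream
    return q_rejected / (rho * cp * dt_cold)

print(f"Carnot limit for 25/4 degC: {100 * carnot_efficiency(25.0, 4.0):.1f} %")   # ~7 %
print(f"Cold-water flow for 10 MW net: {cold_water_flow(10e6):.0f} m^3/s")          # ~22 m^3/s
```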
26.4.1.4 Biological Production

Various attempts have been made to develop technologies to harvest biomass from the sea for the production of biogas and biofuels (Brehany 1983). In the 1970s research in the United States focused on the harvesting of kelp but this languished in the 1980s as oil prices declined. More recently, interest has shifted to the potential for open-ocean harvesting of marine algae for biofuels. The marine algae would essentially be 'farmed' by chemical fertilization to enhance algal growth and concentration. At present there are no technologies capable of concentrating dispersed marine algae from their very low natural levels in the open ocean.
26.4.2 Predictability of Ocean Energy

A key factor in the uptake of ocean energy will be the predictability of produced energy (or water), as this will affect grid connections and the market price for electricity sold into local markets. Ocean currents, osmotic power, OTEC and submarine geothermal energy could potentially produce continuous, i.e., baseload, electricity, whilst tidal currents are forecastable for periods of days (with some modification due to weather). Even wave energy can be predicted 1–2 days in advance. All forms of ocean energy are less variable than wind energy.
Fig. 26.9 Closed-cycle ocean thermal energy conversion process. (Charlier and Justus 1993) [Diagram labels: warm water intake and outlet, cold water intake and outlet, evaporator, condenser, turbine, heat exchangers, pumps, secondary fluid and expanding secondary vapor.]
Fig. 26.10 Submarine heat exchange unit. (Hiriart 2008)
26.5 Environmental Impacts of Ocean Energy Converters

The successful development of ocean energy will depend on public acceptance of ocean energy technologies. Apart from competition for space and allocation of resources, which will be subject to regulatory interventions, there will be a requirement on ocean energy device developers to demonstrate that their technologies have limited impacts on the surrounding environment. Regulatory interventions will be necessary to ensure that project developers follow good environmental practice and avoid, eliminate, minimize or mitigate any unwanted environmental effects. All energy generation technologies have environmental impacts, including renewable technologies, which may have larger footprints than fossil fuel technologies, because the energy density of most forms of renewable energy is low compared to these fossil fuels. However, ocean energy is one of the more 'dense' renewable energy forms and space requirements may be limited when compared to wind farms and photovoltaic (PV) arrays. The environmental impacts of ocean energy converters can be divided into physical, chemical and biological effects, which can occur at sea or on land.
26.5.1 General Environmental Effects

The key effect will be occupation of sea space, particularly for 'arrays' or 'farms' of devices, a problem potentially exacerbated by regulatory requirements for exclusion zones to prevent other activities, e.g., fishing, particularly trawling, in the same area. Ocean energy projects will compete for access to sea space with pre-existing or pre-assigned uses, such as fishing quota zones, marine reserves, areas reserved for military use and shipping lanes. Hard structures, such as converter devices, moorings, anchors and export cables, can create collision hazards for vessels, marine mammals and fish. Rotating parts, such as turbine blade tips, are a particular concern, although monitoring evidence from initial marine energy projects shows that marine life tends to avoid installations and that escape velocities exceed blade tip speeds. Seabed and substratum disturbance is likely but the effects will be small, particularly for tidal current devices, which are likely to be located in areas of hard substratum, swept clear of sediment by the local currents. Epifaunal occupation of exposed parts of the devices or moorings, effectively new habitats created by the devices or moorings, can be of benefit (Langhamer 2007). Energy extraction by ocean energy devices is likely to be only a small proportion of the total energy flux incident on the devices. However, developers will be trying to maximize the energy capture to increase efficiency, so the effects of energy extraction—such as sediment drop-out and current modifications—need to be considered.
26.5.1.1 Visual Impacts

The visual impacts of ocean energy conversion systems are likely to be negligible in open sea situations. Embayments and natural harbours are more sensitive, because local communities tend to have a more proprietary view about enclosed sea space than they do of the open ocean. Although wave and tidal current devices may be deployed in arrays, most, if not all, of the devices will be submerged. Floating components may be close to sea level and so nearly invisible when located offshore; indeed, hazard lighting may be required to ensure that passing vessels are aware of the obstructions. Onshore structures, such as shore-attached oscillating water column devices, tidal barrages, OTEC and osmotic plants, may have to be located near but outside sensitive environmental areas. Some of these plants will be located near or in built-up areas (e.g., at major river mouths) and the potential use of some structures for multiple purposes, e.g., use of barrages as roadways, may limit the visual impacts of such buildings.

26.5.1.2 Noise and Vibration

Construction and operational noise will be an important aspect. The effects of noise and vibration on humans will be negligible; the effects on marine biota are the subject of ongoing research. Initial results encourage the view that effects are limited. Some device developers are considering the addition of acoustic 'pingers' to warn approaching marine mammals of obstructions caused by devices. Construction noise may be more problematic than operational noise, since high-volume transient noise, such as that caused by pile-driving, is more likely to occur at this time.
26.5.1.3 Hydrodynamics

Hydrodynamic considerations include seabed morphology and type, erosion and scouring created by current modification by devices, and new patterns of sediment transport and deposition. These are site-specific issues, which should be manageable for wave and tidal stream projects, since they will be located at high ambient-energy sites. Careful siting will be required for osmotic and fixed OTEC plants, particularly around their outlet pipe locations.
26.5.2 Chemical Impacts

Chemical impacts are mainly related to water quality, which can be affected by the installation of exotic devices, particularly in larger numbers. The likely effects include corrosion,
particularly of metallic components of devices. A further problem is likely to be bio-fouling, with a related concern arising from anti-fouling practices. The extent of these problems is likely to be somewhat device- and site-specific but generally the problems are manageable with conventional (and increasingly environmentally sensitive) solutions developed by the wider marine industry.
26.5.3 Impacts on Marine Biota

The effects of arrays of ocean energy converters on marine mammals, elasmobranchs (i.e., sharks and rays) and other marine fauna are likely to be principally habitat modification, collision risk, noise and electromagnetic fields.

26.5.3.1 Hydrodynamics

As noted above, placement of artificial structures on the seabed will modify local current patterns and cause scouring or sediment deposition. However, these effects are likely to be limited with respect to tidal or ocean current devices. In the case of tidal current devices, the ambient currents are likely to have scoured the seabed to create rocky substrates. Ocean currents move relatively slowly, so modifications caused by placement of devices are likely to be limited. Wave devices are likely to have little impact. All moorings and anchors may create (or even be encouraged to create) new habitats for some marine organisms.
26.5.3.2 Collision Risk

Although ship strikes on marine mammals are documented, and up to one-third of strandings in some areas relate to such strikes, evidence indicates that such strikes are predominantly the result of fast-moving ships (>14 knots) of considerable size (>80 m). Fast-rotating propellers, which are actively putting energy into the water, are problematic. Most ocean energy devices are at fixed locations (allowing for diurnal tidal movements), do not have fast-moving parts and are relatively small. Careful siting of ocean energy device farms outside known migration routes should minimize the potential for collision.
26.5.3.3 Noise and Vibration

Noise generated by ocean energy devices is likely to be limited and potentially not much above ambient noise. Rotating turbines may cause low-frequency noise, particularly if blade tips reach speeds fast enough to cause cavitation, i.e., the creation
and explosive collapse of air bubbles at blade tips. Vibration caused by rotating machinery is likely to be limited.
26.5.3.4 Electromagnetic Fields

Several marine species, including sharks and rays, use weak magnetic and electrical fields for navigation and prey location. Electro-sensitive species may be attracted to or repelled by these fields. Devices that emanate electrical and magnetic fields, such as offshore seismic survey cables, can be targeted by some species that mistake them for prey. Appropriate cable selection and shielding technologies can mitigate the effects of these fields.
26.5.3.5 Summary

The maritime and offshore oil and gas industries have long experience with locating fixed or moving structures in the marine environment. Ocean energy converters are somewhat different in that they are essentially long-term static installations, some of which have passively moving rotors and blades. Their likely effects on benthic and pelagic species need substantial research and experience from deployment. Extensive monitoring by early deployment projects, e.g., the tidal current projects in the East River of New York (Verdant Power 2009) and Strangford Lough, Northern Ireland (MCT 2009), has so far shown that interactions and effects between devices and native marine fauna are limited and not threatening.
26.6 Space and Resource Allocation

Marine energy generation is a new use for sea space. The gradual development of commercial technologies to utilize the various properties of seawater, described in previous sections, will create both a requirement for occupation of sea space and a valuation of this activity against existing and other new uses. Many countries have space and resource allocation regimes for other uses, such as oil and gas exploration. The legislative frameworks for each resource may be quite different, particularly where the occupation is nominally permanent and irreversible (e.g., creation of a tidal barrage). The legislative requirements for each resource, whether it be creation of shipping lanes, nomination of marine reserves, fishing quotas or award of oil and gas exploration permits, are quite different. Indeed they are usually customized to the particular resource and may be modified to meet with international best practice. Marine energy best practice is still under development and analogies to related industries, such as offshore oil and gas or shipping, may not be directly applicable.
Some countries are therefore developing new legislative and regulatory practices to acknowledge the particular, if not unique, qualities of marine energy. The valuation of space for marine energy projects, as compared to other potential or actual uses of sea space, is in its infancy. So far competitive issues have not been too great but this probably reflects the relatively few deployments of marine energy converters that have taken place. Most projects have had the luxury of being granted space or access to marine energy resources on a ‘first come, first served’ basis. This will become less satisfactory as competition for space or particular resources becomes an issue. The United Kingdom’s Marine Estate has already held its first competitive bid round for marine energy permits in the Pentland Firth of NE Scotland (Crown Estate 2009). The United States has also put in place a permitting scheme administered by both the Minerals Management Service (MMS) and FERC, the Federal Energy Regulatory Commission (FERC 2009). Regulatory authorities have yet to establish a single valuation methodology for marine energy. Perhaps the most attractive approach will be a ‘best use solution’. This approach has been proposed for the management of onshore fresh water resources in New Zealand (NZBCSD 2008). The ‘best use solution’ uses a mixed statutory planning and market-based approach. The objective is to manage water (or potentially sea space and resources) in an integrated and sustainable way, taking into account all potential uses and users. Some countries have begun to use an integrated approach to allocation of offshore space and resources, called ‘Marine Spatial Planning’, which is defined as a process of allocating the spatial and temporal distribution of human activities in marine areas to achieve a range of environmental, economic and social objectives.
26.7 Political Framework for Ocean Energy

Governments have a number of policy options for the promotion and acceleration of the uptake of ocean energy. Although investment in ocean energy technology development is undertaken by a spectrum of organizations—from small-medium enterprises with a conceptual idea to major international utilities or energy companies with seed investments—it is government support that is driving the development of ocean energy. Recent major changes in international policy are favourable for ocean energy. The current proposals to replace the Kyoto Protocol with another binding treaty that includes greater involvement of developing nations are driving national governments to consider renewable energy and energy efficiency initiatives. The development of emissions trading regimes, emissions reduction targets and potential implementation of carbon taxes are 'levelling the playing field' between conventional fossil fuel use and new renewables. These global initiatives are supportive but marine energy will only flourish in countries where dedicated marine energy policy instruments are used to promote its uptake. Such dedicated instruments have been implemented in NW European countries to promote solar PV installations (in Germany and Spain) and wind energy.
The same NW European countries (Scotland, United Kingdom, Ireland, Portugal and Spain) are leading the way with respect to marine energy. The key policy instruments that are influential in promoting marine energy are:

1. Legislated or aspirational targets for installed generation capacity from ocean energy
2. Government funding (from R & D to production incentives)
3. Infrastructure developments
4. Other incentives (Table 26.1)
26.7.1 Lifecycle Incentives for Ocean Energy

No country offers all of the incentives outlined in Table 26.1, although Scotland and the United Kingdom come close (Scotland has its own set of incentives, separate from the rest of the UK). The governments in these countries have recognized the potential for marine energy as an energy supply option and potential export opportunity. Additionally, they have understood the need to provide a development path for an industry, drawn from other disparate industries, which requires an integrated set of policy incentives to promote involvement throughout the supply chain. The policies should also continue to provide incentives as the industry matures. The introduction of production incentives in Scotland and the United Kingdom demonstrates the increasing maturity of the wave and tidal stream technologies being developed there. Similarly, the international spread of marine energy testing centres and participation in standards development (see next section) indicate the development of an international industry.
26.7.2 International Initiatives for Ocean Energy

At least 30 countries have active developments in marine energy, ranging from individual inventors pursuing their own concepts by prototype modeling to major government initiatives to develop multi-MW tidal barrages (e.g., in the United Kingdom and Russia). Whilst the NW European coastal countries have led developments since the 1970s, activities have spread around the world and some of the largest developments are planned in the NW Pacific, e.g., the 254 MW Sihwa tidal barrage, which will become operational in Korea in June 2010.
26.7.3 International Initiatives

There are a number of regional and international initiatives for the promotion and development of ocean energy.
Table 26.1 Government policy instruments for ocean energy
26.7.3.1 Ocean Energy Systems Implementing Agreement (OES-IA)

The OES-IA is an inter-governmental initiative under the auspices of the International Energy Agency (IEA) in Paris. It presently has 16 member governments, which send representatives to the Executive Committee (ExCo). Australia joined the OES-IA in 2009 and Korea, South Africa and France are due to join early in 2010.
The OES-IA ExCo meets twice a year to lead work programs that will promote and accelerate the uptake of ocean energy. The Committee commissions Annexes, which are separate, optional work programs on specific issues in which national governments can choose to participate. Presently, there are Annexes on:

1. Open-sea testing protocols
2. Grid connection of marine energy converters
3. Environmental impacts of marine energy converters

The OES-IA publishes newsletters and annual reports as well as technical reports based on the Annex work programs. These are all publicly available at http://www.iea-oceans.org.
26.7.3.2 IEC's Technical Committee 114

The International Electrotechnical Commission, based in Geneva, has set standards for electrical, electronic and electromechanical equipment for over 100 years (http://www.iec.ch). In 2007 it decided to establish a Technical Committee (TC114) to establish standards for wave, tidal and other water current energy converters. TC114 currently comprises representatives of 16 national governments and is developing technical specifications, the precursors of standards, on the following subjects:

1. Marine energy terminology
2. Wave device performance
3. Tidal stream device performance
4. Design criteria for marine energy converters
5. Wave and tidal energy resource characterization and assessment

The first of these technical specifications is likely to be published in mid-late 2012.
26.7.4 European Initiatives

26.7.4.1 EquiMar and Predecessors

The Equitable Testing and Evaluation of Marine Energy Extraction Devices in terms of Performance, Cost and Environmental Impact (EquiMar) project is a European Commission-funded consortium program with 22 partners, ranging from device developers to university researchers (http://www.equimar.org). The program is led by the University of Edinburgh. The purpose of EquiMar is to deliver a series of high-level and detailed protocols for the equitable evaluation of marine energy converters. The project was commissioned in April 2008 and draft protocols are presently available. The project will run for three years and is on track to deliver final outputs by April 2011.
EquiMar follows earlier European Commission-funded research projects, like the Co-ordinated Action on Ocean Energy (CA-OE; http://www.ca-oe.net/home.htm) and WAVETRAIN, an initiative to train postgraduate students in ocean energy (http://www.wavetrain2.eu).

26.7.4.2 Waveplam

The WAVe Energy PLAnning and Marketing project (Waveplam) is another European Commission-funded consortium program, with eight partners developing tools, methods and standards to speed up the introduction of ocean energy into the renewable energy market (http://www.waveplam.eu). The project consortium includes European research organisations and device developers, who aim to address non-technological barriers to the establishment of ocean energy.
26.8 Trends and Growth in Ocean Energy

The year 2008 was an important one for ocean energy. The world's first 'pre-commercial tidal demonstrator', Marine Current Turbines' SeaGen tidal generator, began to feed electricity into the Northern Ireland grid (Fig. 26.11a). Shortly afterwards, the world's first wave farm array (of three Pelamis devices) became operational at Aguçadoura in northern Portugal (Fig. 26.11b).
Fig. 26.11 Recent marine energy deployments. a MCT's SeaGen pre-commercial tidal demonstrator (Source: http://www.marineturbines.com/21/technology/), and b Pelamis Wavepower's 3 × 750 kW Pelamis array at Aguçadoura (Source: http://www.pelamiswave.com/content.php?id=149)
There have been fewer deployments in 2009, perhaps the most notable being the deployment of the Aquamarine Oyster surge device at the European Marine Energy Centre in the Orkney Islands. A number of major energy companies (Total, Chevron) and utilities (RWE, Statkraft, Vattenfall and Fortum) have invested in ocean energy device or project developments, and the venture capital community has remained involved. Statkraft, the Norwegian state-owned power generator, opened the world's first prototype osmotic power plant. The US Department of Energy continued its investment in R & D projects. That funding covers a range of projects and, in 2009, includes funding for accelerated market developments. Some of the 2009 funding is dedicated to a rejuvenation of research into ocean thermal energy conversion. Other governments continue to support R & D projects and device developments, with a growing focus on providing energy for desalination or the direct production of drinking water from ocean energy. The Scottish Executive offered the first prize for ocean energy, called the Saltire Prize (GBP 10 million), to be awarded to the first commercially viable wave or tidal stream technology to generate more than 100 GWh of electricity over a continuous 2-year period. Undoubtedly, investments and developments in ocean energy have been affected by the world's economic situation since the middle of 2007. As the world's economies recover during 2010, activities deferred during 2008 should be resurgent. The growing numbers of device developments and international testing centres should lead to an acceleration and maturation of technology development towards the first commercial devices. For the nascent technologies, such as OTEC and osmotic power, recent R & D and prototype investments should lead to more concrete developments in coming years. Lastly, a number of countries and organizations have proposed targets for installed generation capacity from ocean energy. Forecasts made in the early 2000s (e.g., Scottish Executive 2004) have proven too optimistic but ocean energy capacity is now growing. Presently, the total capacity—from all forms of ocean energy—is relatively small (c. 300 MW), with the largest contribution coming from the 240 MW La Rance tidal barrage in northern France. However, this total will almost double in 2011, when the 254 MW Sihwa barrage in Korea comes on stream.
References

Alcocer SM, Hiriart G (2008) An applied research program on water desalination with renewable energies. Am J Environ Sci 4(3):190–197
Alves O, Hudson D, Balmaseda M, Shi L (2010) Seasonal and decadal prediction. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, Dordrecht
Brehany JJ (1983) Economic and systems assessment of the concept of nearshore kelp farming for methane production. Parsons Co. and Gas Research Institute, Technical Report PB-82-222158
Brito-Melo A, Bhuyan G (eds) (2009) 2008 Annual report of the International Energy Agency Implementing Agreement on Ocean Energy Systems (IEA-OES), February 2009
Charlier RH, Justus JR (1993) Ocean energies: environmental, economic and technological aspects of alternative power sources. Elsevier Oceanography Series. Elsevier, Amsterdam
Cornett AM (2008) A global wave energy resource assessment. Annual international offshore and polar engineering conference, Vancouver, BC, ISOPE-2008-579
Crown Estate (2009) Details of Pentland Firth bids announced. Crown Estate press release, 8 June 2009. http://www.thecrownestate.co.uk/newscontent/92-pentland-firth-tidal-energyproject-3.htm. Accessed 16 Dec 2009
Dombrowsky E, Bertino L, Brassington GB, Chassignet EP, Davidson F, Hurlburt HE, Kamachi M, Lee T, Martin MJ, Mei S, Tonani M (2009) GODAE systems in operation. Oceanography 22(3):80–95
Falcão AF de O (2009) The development of wave energy utilization. In: Brito-Melo A, Bhuyan G (eds) 2008 Annual report of the Ocean Energy Systems implementing agreement. Lisbon, February 2009
FERC (2009) MMS/FERC guidance on regulation of hydrokinetic energy projects on the OCS. Federal Energy Regulatory Commission, 24 April 2009. http://www.ferc.gov/industries/hydropower/indus-act/hydrokinetics/pdf/mms080309.pdf. Accessed 16 Dec 2009
Gaba E (2009) World map in English showing the divergent plate boundaries (OSR—Oceanic Spreading Ridges) and recent subaerial volcanoes. http://en.wikipedia.org/wiki/File:Spreading_ridges_volcanoes_map-en.svg. Accessed 15 Dec 2009
Greenslade D, Tolman H (2010) Surface waves. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, Dordrecht
Groeman F, van den Ende K (2007) Blue energy. Leonardo Energy. www.leonardo-energy.org
Hiriart G (2008) Hydrothermal vents. PowerPoint presentation, November 2008
Jalihal P, Kathiroli S (2009) Utilization of ocean energy for producing drinking water. In: Brito-Melo A, Bhuyan G (eds) 2008 Annual report of the Ocean Energy Systems implementing agreement. Lisbon, February 2009
Langhamer O (2007) Colonization of wave power device foundations by invertebrates. In: National Renewable Energy Laboratory and Natural Resources Canada (eds) IEA-OES workshop: potential environmental impacts of ocean energy devices: meeting summary report. Messina, Italy, 18 Oct 2007
Loeb S, Norman RS (1975) Osmotic power plants. Science 189:654–655
MCT (2009) www.marinecurrentturbines.com
NASA (2009) Map of average global sea surface salinity. http://aquarius.nasa.gov/educationsalinity.html. Accessed 16 Dec 2009
Nihous G (2009) Ocean thermal energy conversion (OTEC) and derivative technologies: status of development and prospects. In: Brito-Melo A, Bhuyan G (eds) 2008 Annual report of the Ocean Energy Systems implementing agreement. Lisbon, February 2009
Nihous GC (2010) Mapping available Ocean Thermal Energy Conversion resources around the main Hawaiian Islands with state-of-the-art tools. J Renew Sustain Energy 2:043104
NOAA (2008) Map of major surface ocean currents. http://www.adp.noaa.gov/currents.jpg. Accessed 15 Dec 2009
NZBCSD (2008) A best use solution for New Zealand's water problems. New Zealand Business Council for Sustainable Development, Auckland, August 2008
Ray R (2007) Scientific visualization studio, and television production NASA-TV/GSFC, NASA-GSFC, NASA-JPL. http://en.wikipedia.org/wiki/Amphidromic_point. Accessed 25 Nov 2009
Scottish Executive (2004) Harnessing Scotland's marine energy potential: Marine Energy Group (MEG) report 2004. Report by the Forum for Renewable Energy Development in Scotland
Skråmestø OS, Skilhagen SE (2009) Status of technologies for harnessing salinity power and the current osmotic power activities. In: Brito-Melo A, Bhuyan G (eds) 2008 Annual report of the Ocean Energy Systems implementing agreement. Lisbon, February 2009
Soerensen HC, Weinstein A (2008) Ocean energy: position paper for IPCC. Key note paper for the IPCC scoping conference on renewable energy. Lübeck, Germany. http://www.eu-oea.com/euoea/files/ccLibraryFiles/Filename/000000000400/OceanEnergyIPCCfinal.pdf
US DoE (2008) Marine and hydrokinetic technology database. http://www1.eere.energy.gov/windandhydro/hydrokinetic/default.aspx. Accessed 15 Dec 2009
Verdant Power (2009) www.verdantpower.com
Chapter 27
Application of Ocean Observations & Analysis: The CETO Wave Energy Project

Laurence D. Mann
Abstract  The latest full-scale version of the CETO wave energy converter (WEC) is described, along with its principle of operation, key features and site selection. At the time of writing, a full-scale prototype test site was under development at a coastal site approximately 37 km to the south west. Some pragmatic issues pertaining to the use of global wave model data and in-situ observations are discussed in the context of this commercial venture.
27.1 Hardware Overview

The CETO wave energy converter is shown schematically in Fig. 27.1. Submerged buoys are connected to pumps that are tethered to the seabed in an array. As a wave disturbance passes overhead, the buoys are heaved upwards and exert tension on the tethers, forcing the pistons inside the pumps to move upwards and expel fluid at high pressure. The high-pressure fluid, usually water, is piped into a manifold from where it moves to shore. The pressurised water may be used to drive a turbine directly for electricity production, or for production of desalinated water, or a combination of both. CETO thus distinguishes itself from other WECs in that the output of the offshore plant is not electricity but rather pressurised fluid. Energy conversion from hydraulic to electric takes place onshore with standard off-the-shelf plant: Pelton or similar high-head turbines coupled to electric generators. CETO wave energy converters may be understood better when compared to a current snapshot of the competitors, as shown schematically in Fig. 27.2. Wave energy converters such as OPT's Powerbuoy1 and Oceanlinx2 that are on the surface
1 OPT website: www.oceanpowertechnologies.com.
2 OCEANLINX website: www.oceanlinx.com.
L. D. Mann ()
Carnegie Wave Energy Limited, Level 1, 16 Ord Street, West Perth, WA 6005, Australia
e-mail: [email protected]
Fig. 27.1 CETO Schematic
Fig. 27.2 Schematic comparison of CETO with other WECs
of the water are exposed to breaking waves during large seas, for example in storms, whereas the CETO device and the AWS buoy3 are fully submerged during normal operation, so that they are significantly less prone to damage than floating devices. Pelamis is a surface-going device but is designed to tolerate very large seas4. Another metric for comparison of WECs is the simplicity of the power generation scheme. CETO, Oyster5 and the shore-based Limpet6 have the electricity generation equipment onshore rather than in the water, and this leads to greater simplicity and should translate to better long-term reliability. Also, onshore electrical generation plant may be upgraded or modified without recourse to removal of the WECs from the water. Devices that are fully submerged in normal sea states, including CETO, are not able to access the full energy flux of the waves as can surface mounted devices such as OPT's Powerbuoy. This penalty in energy capture efficiency may be offset by reduced operating costs in the form of maintenance, as submerged devices are subject to a lower rate of occurrence of breaking waves and repetitive stress loading compared to surface devices. CETO will typically be deployed up to several hundred metres from shore, so the pipeline lengths are typically greater than those of the other hydraulic wave energy converter, the Oyster, which operates in the breaking wave zone much closer to shore. This means that the CETO balance of plant design must pay close attention to the optimisation of energy losses and minimise the cost of piping. Fortunately this is made easier by the selection of very high operating fluid pressures of around 7 MPa, which enables the hydraulic design of CETO to obtain acceptable piping losses in smaller diameter, and therefore cheaper, piping to shore.
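To see why a 7 MPa operating pressure keeps the pipework small, note that hydraulic power is simply pressure times flow, while pipe friction losses scale with the square of the flow velocity. The sketch below is an illustration under assumed values: the 100 kW rating, 1 km pipe run, 100 mm bore and friction factor are not figures from the chapter.

```python
import math

RHO = 1025.0   # kg/m^3, seawater (CETO pumps seawater; value approximate)

def flow_for_power(power_w, pressure_pa):
    """Volumetric flow (m^3/s) needed to carry a given hydraulic power, P = delta_p * Q."""
    return power_w / pressure_pa

def friction_loss_pa(q, diameter_m, length_m, f=0.02):
    """Darcy-Weisbach pressure drop along a straight pipe; f is an assumed friction factor."""
    area = math.pi * diameter_m**2 / 4.0
    v = q / area
    return f * (length_m / diameter_m) * 0.5 * RHO * v**2

# Illustrative only: 100 kW delivered at 7 MPa through 1 km of 100 mm bore pipe
q = flow_for_power(100e3, 7e6)                    # ~0.014 m^3/s, i.e. about 14 l/s
dp = friction_loss_pa(q, 0.10, 1000.0)
print(f"flow = {1000*q:.1f} l/s, friction loss = {dp/1e6:.2f} MPa "
      f"({100*dp/7e6:.1f}% of the 7 MPa operating pressure)")
```

Under these assumptions the friction loss is only a few per cent of the working pressure, which is the design point made in the text: high pressure means low flow, and low flow means small, cheap pipes.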
3 AWS website: www.awsocean.com.
4 Pelamis website: www.pelamiswave.com.
5 OYSTER website: www.aquamarinepower.com.
6 LIMPET website: www.wavegen.co.uk.

27.2 Installation Water Depth

The CETO units are designed to operate in shallow waters of between 20 and 50 m depth. At these depths there is significant energy loss as deep-water swell waves propagate shoreward and lose energy due to friction with the seabed. Nevertheless there is still an appreciable wave resource available on the southern Australian coastline even after these losses are taken into consideration. The advantage of operating in relatively shallow waters is that sites are typically no more than two kilometres from shore, and often significantly closer than that, helping to keep the costs of piping fluid to shore within acceptable limits.
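Whether a swell "feels" the seabed at a given site follows from the linear dispersion relation ω² = gk tanh(kh): waves interact with the bottom roughly once the depth is less than half the deep-water wavelength. The sketch below solves the relation by simple iteration for an assumed 12 s swell at a 25 m deep site; the wave period is illustrative, not a value from the chapter.

```python
import math

G = 9.81

def wavelength_m(period_s, depth_m, iterations=200):
    """Wavelength from the linear dispersion relation omega^2 = g*k*tanh(k*h),
    solved by fixed-point iteration starting from the deep-water wavenumber."""
    omega = 2.0 * math.pi / period_s
    k = omega**2 / G                     # deep-water first guess
    for _ in range(iterations):
        k = omega**2 / (G * math.tanh(k * depth_m))
    return 2.0 * math.pi / k

T = 12.0                                  # assumed long-period swell
L_deep = G * T**2 / (2.0 * math.pi)       # deep-water wavelength, ~225 m
L_site = wavelength_m(T, 25.0)            # ~165 m at a 25 m deep site
print(f"deep water: {L_deep:.0f} m, at 25 m depth: {L_site:.0f} m")
# The 25 m site depth is well below half the deep-water wavelength (~112 m), so a
# 12 s swell interacts strongly with the seabed, which is where the frictional
# energy loss mentioned above occurs.
```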
27.3 Prospective Site Identification

The identification of prospective sites for the installation of wave energy converters requires estimates of wave climate. In this context the wave climate constitutes 'the resource'. Site selection involves not only the resource, of course, but also takes into account the overlays of onshore grid connectivity and competing ocean usage (for example state and federal marine parks). Based on these and other considerations, Carnegie Wave Energy Limited has selected, applied for, and obtained wave exploration licenses in selected areas off the coast of Western Australia, South Australia, Victoria and Tasmania. In addition to this site-specific work, RPS MetOcean was commissioned to produce broader estimates of the total deep water and shallow water wave resource for the southern Australian coastline. The estimated total deep-water wave resource for this coastline is 525 GW and the estimated shallow water resource is 171 GW. This shallow water estimate is still about three times greater than Australia's national electricity consumption.
27.4 First Installation Site: Garden Island, WA

The first full-scale CETO devices will be deployed in an area near Garden Island off the coast of Western Australia, as shown in Fig. 27.3. This is ultimately the planned site for a 5 MW wave farm comprising multiple CETO units, which is expected to be one of the first (if not the first) commercial-scale wave farms in the southern hemisphere. Grid-connected power is expected to be available by 2012.

The site is located in the Sepia Depression, an area of 20–25 m water depth between Five Fathom Bank and Garden Island. Garden Island houses HMAS Stirling, the Royal Australian Navy's largest base. A memorandum of understanding has been signed with the Australian Department of Defence covering collaboration, onshore space and potential power off-take. The site can be connected to the Western Australian transmission grid (SWIS) via the facilities at Garden Island. The site was chosen because of its proximity to the port of Fremantle and the marine support facilities in and around Cockburn Sound, as well as ready access to the SWIS and the population centre of Perth.

The wave resource on site is > 35 MW/km, with approximately 65% availability of 2 m waves and 90% availability of 1 m waves. The presence of Five Fathom Bank provides some sheltering from excessive swell while still maintaining a viable wave climate for a commercial wave farm. Significant wave height (Hs) inside the bank is limited to 4 m, compared to 8 m outside, due to the attenuating effect of the bank. The Sepia Depression nevertheless has a similar average wave climate to a fully exposed location outside of Five Fathom Bank.
Fig. 27.3 Garden Island site
Another advantage of the Sepia Depression is that the sheltering effect makes the sea state more predictable, which translates into increased site accessibility.

The first phase of full-scale in situ evaluation will involve a single autonomous CETO unit. This installation will be used to gather performance and reliability data as well as to validate storm-survivability measures. A Waverider® buoy is anchored adjacent to the CETO mooring, and both the Waverider® data and the outputs from the WEC will be transmitted to shore via a wireless link. Energy produced by the unit will be safely dispersed as heat into the surrounding seawater.
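The availability figures quoted above for the Garden Island site (65% of the time with 2 m waves, 90% with 1 m waves) are exceedance statistics of the kind routinely derived from a wave-buoy record. A minimal sketch of the calculation follows; the synthetic significant-wave-height series is an assumed stand-in for real Waverider® data.

# Sketch: "availability" of a given wave height, i.e. the fraction of records
# in which significant wave height meets or exceeds a threshold. The synthetic
# Hs record below is an assumed stand-in for a real wave-buoy time series.

import numpy as np

rng = np.random.default_rng(0)
# Assumed synthetic record: one Hs value every 30 min for a year.
hs = rng.lognormal(mean=np.log(2.2), sigma=0.5, size=2 * 24 * 365)

def availability(hs_series, threshold_m):
    """Fraction of records with Hs >= threshold."""
    hs_series = np.asarray(hs_series, dtype=float)
    return float(np.mean(hs_series >= threshold_m))

for thr in (1.0, 2.0):
    print(f"Availability of {thr:.0f} m waves: {100 * availability(hs, thr):.0f}%")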
27.5 Specific Uses of Ocean Observations & Analysis

The CETO Wave Energy Project has benefited from the vast store of oceanographic data, observations and analysis throughout its development to date, and will continue to draw on this knowledge base as projects are developed worldwide. At this stage of project development, the use of ocean observations and analysis has been restricted to site selection, validation and calibration studies; operational forecast products are, however, expected to become increasingly relevant in the later stages of commercialisation. Useful reviews of the wider application of ocean observations and analysis to wave energy forecasting can be found in Moreira et al. (2002), Bruck and Pontes (2006), Tolman et al. (2002) and Greenslade and Tolman (2010).

Specific to the CETO project, in 2003 Carnegie commissioned a survey of near-shore wave energy resource availability from WNI, a local oceanographic company, with the geographical scope restricted to the south-west coastal areas of Western Australia from Geraldton around to Esperance. The survey was based on an in-house data set comprising parameterized data from an implementation of NOAA Wave Watch III (NWW3). The data covered the period January 1997 to August 2003 at 3-hourly intervals, with a spatial resolution of 1° by 1.25°. Because NWW3 is a deep-water model, the data were considered relevant only at depths greater than approximately 50 m, which restricted attention to a subset of only eight grid points within the geographic scope. Fortunately, these remaining eight points corresponded to, or overlapped with, areas along the WA coast that Carnegie had flagged as being of interest. This does, however, highlight the limitations of using such a coarse grid: a finer grid of, say, 0.25° by 0.25° would have been desirable to account for some of the protection provided by headlands such as Cape Naturaliste. Again, it was fortunate that most of the swell impacting the south-west coast of Western Australia arrives from the SW rather than from the S or SSW, so the shadowing effect of the land was not as severe as it might have been at other locations.

More recently, Carnegie commissioned an independent assessment of the near-shore wave energy resource at 17 potential development sites along the southern coastline of Australia from the ocean resource specialists RPS MetOcean. Wave data were sourced primarily from an implementation of NWW3 and compared with available measured data at seven sites across southern Australia for verification and to examine localised effects on wave power and its availability. This study indicated that Australia has a potential near-shore wave energy resource of approximately 170,000 MW in water depths of 25 m (Fig. 27.4), equivalent to approximately four times the nation's total installed power generation capacity. The shallow-water estimate represents the potentially available resource only and does not take into account the efficiency of extraction by a wave energy conversion device or the accessibility of the wave resource. The issues inherent in the 2003 wave modelling exercise were recognised and addressed by this study. Specifically, points of interest were selected where, because of the bathymetry, the model was able to deliver reliable deep-water results.
Fig. 27.4 Wave resource estimates produced by Carnegie and RPS METOCEAN
Also, points of interest were chosen only from gridded data points free of land-shadowing effects. This later report therefore represents a better estimate of the shallow-water resource for southern Australia; a rough sketch of this depth and shadowing screening is given below. In summary, the result was an indicative ratio between the total available deep-water and shallow-water resource of the order of 3:1 (525:170 GW).
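The sketch below illustrates the screening step described above: only grid points inside the geographic area of interest and in water deeper than about 50 m, where a deep-water model can be considered reliable, are retained. The grid, bounding box and depths are hypothetical stand-ins rather than the WNI or RPS MetOcean data sets.

# Sketch of the screening step described above: keep only NWW3-style grid
# points that (a) lie inside the geographic area of interest and (b) are
# deep enough (> ~50 m here) for a deep-water model to be reliable.
# All arrays below are hypothetical stand-ins, not the actual data sets.

import numpy as np

# Hypothetical NWW3-style grid: 1.0 deg latitude by 1.25 deg longitude spacing.
lats = np.arange(-36.0, -27.0, 1.0)
lons = np.arange(112.5, 122.5, 1.25)
lon_g, lat_g = np.meshgrid(lons, lats)

# Hypothetical water depth (m) at each grid point, e.g. from a bathymetry grid.
rng = np.random.default_rng(1)
depth = rng.uniform(10.0, 4000.0, size=lon_g.shape)

MIN_DEPTH_M = 50.0   # deep-water validity threshold quoted in the text
box = (lat_g >= -35.0) & (lat_g <= -28.0) & (lon_g >= 113.0) & (lon_g <= 119.0)
usable = box & (depth > MIN_DEPTH_M)

print(f"{int(usable.sum())} usable grid points out of {usable.size}")
for la, lo, d in list(zip(lat_g[usable], lon_g[usable], depth[usable]))[:5]:
    print(f"  lat {la:6.2f}  lon {lo:7.2f}  depth {d:6.0f} m")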
27.6 Limitations to the Use of Model Data

Neither of the modelling exercises discussed above considered mesoscale effects such as sea breezes, which are usually low-pass filtered out of the raw NWW3 data. As a result, the modelled wave climate excludes higher-frequency signals, so that only swell states with wave periods of typically ≥ 8 s are represented (illustrated in the sketch at the end of this section). The experience of using model data provided interesting insights into the limitations of such data for determining suitable sites for wave energy converters. For example, it was recognised that 'mapping in' of deep-water ocean data using computational algorithms that take into account bathymetry, shoaling effects and coastline morphology would be useful for some locations but less effective for others. Important from a commercial perspective is the consideration of diminishing returns with respect to the expense and quality of data. This trade-off between the increasing cost of modelling and the resulting highly processed information was
also influenced by the fact that a tri-axis wave accelerometer could be purchased for approximately the same cost as an extensive mesoscale wave data analysis. The choice then becomes one between having real measured data at the site (but having to wait several months to collect a representative statistical wave data set) and having highly interpreted, and therefore somewhat suspect, data (but data representing a longer epoch).
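The sketch below illustrates the filtering point made in Sect. 27.6: if only spectral components with periods of 8 s or more are retained, a locally generated, sea-breeze-driven wind sea contributes nothing to the modelled significant wave height. The two-peaked spectrum is synthetic and purely illustrative.

# Sketch: if only components with periods >= 8 s (frequencies <= 0.125 Hz)
# are retained, locally generated wind sea is excluded from the wave climate.
# Hs = 4*sqrt(m0), where m0 is the integral of the variance density spectrum.
# The two-peaked spectrum below is a synthetic stand-in, not model output.

import numpy as np

freq = np.linspace(0.03, 0.5, 500)                            # Hz
swell = 2.5 * np.exp(-0.5 * ((freq - 0.07) / 0.015) ** 2)     # swell peak near 14 s
wind_sea = 0.8 * np.exp(-0.5 * ((freq - 0.25) / 0.05) ** 2)   # sea-breeze chop near 4 s
spectrum = swell + wind_sea                                   # m^2/Hz

def hs_from_spectrum(f, s, fmax=None):
    """Significant wave height Hs = 4*sqrt(m0) from a variance density spectrum."""
    if fmax is not None:
        keep = f <= fmax
        f, s = f[keep], s[keep]
    m0 = np.sum(0.5 * (s[1:] + s[:-1]) * np.diff(f))   # trapezoidal integration
    return 4.0 * np.sqrt(m0)

print(f"Total Hs:               {hs_from_spectrum(freq, spectrum):.2f} m")
print(f"Swell-only Hs (T >= 8 s): {hs_from_spectrum(freq, spectrum, fmax=1.0 / 8.0):.2f} m")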
27.7 Pragmatic Approaches in a Commercial Context

In developing CETO wave sites worldwide, Carnegie generally favours using coarse gridded NWW3 data as a guide to general feasibility and then making detailed site selections based on deployed tri-axis wave measurement accelerometers, rather than 'gridding in' coarser wave data to finer scales. This approach works in practice because, at scales finer than the typical NWW3 grid, the decision about the most suitable site for a wave energy converter no longer depends on the wave resource alone; other considerations, such as land and seabed access and access to onshore grid connections, come into play. This pragmatic selection of sites, overriding purely wave-resource factors, is evident in the selection of the Sepia Depression site discussed earlier: the site was selected partly for its sheltering but mostly for convenience and access.

Another aspect of wave energy converter design that CETO, and indeed all devices, must address is how adaptable the design is to the actual dynamic range of wave heights expected at the locations where the devices will be deployed. The distribution of wave height, along with the maximum wave height at a given site, must be known in order to match the design to the site. In practice, gathering this detailed information can be an expensive and time-consuming process if wave-buoy data are not already available at the exact location of deployment, or if surveys and analysis have not already been carried out.

Verification of the operation of CETO at a technical level will involve comparing empirical measurements with the convolution of the device power matrix and the wave matrix for the Sepia Depression site. This process allows the actual capacity factor to be compared with that predicted from the convolution of these two matrices, and is the key to commercial validation. It is important to note that tools such as NWW3, while useful for site selection, are not in themselves sufficient to predict the energy output of CETO, or of any other wave energy converter for that matter, because wave energy converters around the world have not yet accumulated enough operational data to provide a simple predictor of integrated energy output based on historical or hindcast wave statistics. Such predictors will emerge over the years to come, but for now all wave energy converters will need to demonstrate 'bankability'; that is, a sufficiently high average capacity factor for the particular wave farm to present a favourable return on investment.
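A minimal sketch of the power-matrix convolution described above follows. The device power matrix, wave scatter (occurrence) matrix and rated power are small hypothetical examples chosen for illustration; they are not CETO performance data or Sepia Depression statistics.

# Sketch of the convolution described above: element-wise multiply a device
# power matrix (kW produced in each Hs/Te bin) by a wave scatter matrix
# (fraction of time the site spends in each bin), sum, and divide by the
# rated power to obtain the capacity factor. The matrices are hypothetical.

import numpy as np

# Rows: Hs bins (1, 2, 3 m); columns: Te bins (8, 10, 12 s). Values assumed.
power_matrix = np.array([[ 30.,  40.,  45.],
                         [ 90., 120., 130.],
                         [150., 190., 200.]])          # kW

scatter_matrix = np.array([[0.20, 0.15, 0.05],
                           [0.15, 0.20, 0.10],
                           [0.05, 0.07, 0.03]])        # occurrence fractions

assert abs(scatter_matrix.sum() - 1.0) < 1e-9, "occurrence fractions must sum to 1"

rated_power_kw = 200.0                                 # assumed device rating

mean_output_kw = float((power_matrix * scatter_matrix).sum())
capacity_factor = mean_output_kw / rated_power_kw

print(f"Mean output:     {mean_output_kw:.1f} kW")
print(f"Capacity factor: {capacity_factor:.2f}")

In practice both matrices would typically be binned much more finely in Hs and Te, with the scatter matrix built from the on-site wave-buoy record and the power matrix derived from the device's measured or modelled performance.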
Fig. 27.5 Results of wave resource analysis produced by Carnegie and commissioned from RPS METOCEAN for selected locations along the southern Australian coastline
Until sufficient operational data exist, most wave farms, including CETO, will be in this bankability demonstration mode. In practice, such a demonstration requires a wave buoy on site at the wave farm so that the input wave state can be correlated with the output of the wave energy converter.

Acknowledgments Carnegie acknowledges the support of the Western Australian government through its LEED funding program, which partially supports this work. RPS METOCEAN is acknowledged for providing the wave data, and Mr. Tim Sawyer of Carnegie for the preparation and analysis of the data presented in Figs. 27.4 and 27.5.
References

Bruck M, Pontes MT (2006) Wave energy resource assessment based on satellite data. Workshop on performance monitoring of ocean energy systems, Lisbon, Nov 2006. http://pmoes.ineti.pt
Greenslade D, Tolman H (2010) Surface waves. In: Schiller A, Brassington GB (eds) Operational oceanography in the 21st century. Springer, New York
Moreira NM, Oliveira Pires H, Pontes T, e Câmara C (2002) Verification of TOPEX-Poseidon wave data against buoys off the west coast of Portugal. In: Proceedings of the conference on offshore mechanics and arctic engineering (OMAE02), paper 2002-28254, Oslo, Norway, 23–28 June 2002
Tolman HL, Balasubramaniyan B, Burroughs LD, Chalikov DV, Chao YY, Chen HS, Gerald VM (2002) Development and implementation of wind-generated ocean surface wave models at NCEP. Weather Forecast 17:311–333