Use of High Performance Computing in Meteorology
Proceedings of the Twelfth ECMWF Workshop
Use of High Performance Computing in Meteorology Reading, UK
30 October - 3 November 2006
Edited by
George Mozdzynski European Centre for Medium-Range Weather Forecasts, UK
World Scientific
NEW JERSEY · LONDON · SINGAPORE · BEIJING · SHANGHAI · HONG KONG · TAIPEI · CHENNAI
Published by World Scientific Publishing Co. Pte. Ltd.
5 Toh Tuck Link, Singapore 596224
USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library
USE OF HIGH PERFORMANCE COMPUTING IN METEOROLOGY
Proceedings of the Twelfth ECMWF Workshop

Copyright © 2007 by World Scientific Publishing Co. Pte. Ltd.

All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN-13 978-981-277-588-7
ISBN-10 981-277-588-9
Printed by Fulsland Offset Printing (S) Pte Ltd, Singapore
PREFACE
The twelfth workshop in the series on the “Use of High Performance Computing in Meteorology” was held the week of 30th October to 3rd November 2006 at the European Centre for Medium-Range Weather Forecasts in Reading, UK. This workshop received talks mainly from meteorological scientists, computer scientists and computer manufacturers with the purpose of sharing their experience and stimulating discussion. Presentations from this workshop can be found at http://www.ecmwf.int/newsevents/meetings/workshops/2006/high_performance_computing_12th/presentations.html

High performance computing in meteorology continues to demand the fastest commercially available computers with thousands of scalar processors or hundreds of vector processors. In 1982 ECMWF’s first Cray-1 computer achieved a sustained performance of 50 Mflops. Today the sustained performance is 4 Teraflops averaged over the key applications at ECMWF, an increase of 80,000 in some 25 years. Will computer vendors continue to pull off the magic of the past 25 years and deliver systems 80,000 times faster in the next 25 years? Or will power consumption ultimately constrain performance by then?
On a more practical level, will our applications be able to run efficiently on computers that have just ten times the number of processors that we use today? During the week of this workshop a number of talks considered these issues, while others presented on Linux clusters, parallel algorithms and updates from meteorological organisations. The papers in these proceedings present the state of the art in the use of parallel processors in the fields of meteorology, climatology and oceanography.

George Mozdzynski
CONTENTS
Preface
v

Computational Efficiency of the ECMWF Forecasting System
Deborah Salmond
1

The NERSC Experience Implementing a Facility Wide High Performance Filesystem
William T. C. Kramer
12

Recent Developments of NCEP GFS for High Performance Computing
Hann-Ming Henry Juang
25

Multi-Scale Coupled Atmosphere-Ocean GCM and Simulations
Keiko Takahashi
36

HPC Activities in the Earth System Research Laboratory
M. Govett, J. Middlecoff, D. Schaffer and J. Smith
55

Computational Cost of CPTEC AGCM
J. Panetta, S. R. M. Barros, J. P. Bonatti, S. S. Tomita and P. Y. Kubota
65

Large-Scale Computational Scientific and Engineering Code Development and Production Workflows
D. E. Post and R. P. Kendall
84

Progress with the GEMS Project
Anthony Hollingsworth
111

Variational Kalman Filtering on Parallel Computers
H. Auvinen, H. Haario and T. Kauranne
121

Preparing the COSMO-MODEL for Next Generation Regional Weather Forecasting and Computing
U. Schättler, E. Krenzien and H. Weber
137

A New Partitioning Approach for ECMWF’s Integrated Forecasting System (IFS)
George Mozdzynski
148

What SMT can do for you
John Hague
167

Efficient Coupling of Iterative Models
R. W. Ford, G. D. Riley and C. W. Armstrong
178

Tools, Trends and Techniques for Developers of Scientific Software
Thomas Clune and Brice Womack
191

Analytic MPI Scalability on a Flat Switch
George W. Vandenberghe
207

Initial Performance Comparison of the IBM p5-575 and the CRAY XT3 using Application Benchmarks
Mike Ashworth and Graham Fletcher
219

Role of Precision in Meteorological Computing: A Study Using the NMITLI Varsha GCM
T. N. Venkatesh and U. N. Sinha
237

Panel Experience on Using High Performance Computing in Meteorology
George Mozdzynski
252

List of Participants
259
COMPUTATIONAL EFFICIENCY OF THE ECMWF FORECASTING SYSTEM

DEBORAH SALMOND
ECMWF

This paper describes the high performance computer systems currently installed at ECMWF and shows the scalability and percentage of the peak performance achieved by the Integrated Forecasting System (IFS) on these systems.
1. Current HPC systems at ECMWF

In autumn 2006 ECMWF installed the first of the two clusters of Phase 4 of the IBM supercomputer contract, an IBM p575+ cluster with 155 nodes, to replace the Phase 3 systems installed in mid-2004. For the Phase 4 system each node has 8 dual-core 1.9 GHz Power5+ processors with a peak performance of 7.6 Gflops per core. In this paper we will refer to a core as a PE (Processing Element). The total number of PEs per cluster is 2480, but the Phase 4 system has Simultaneous Multi-Threading (SMT) available so that 2 threads can be assigned to each PE, giving a total of 4960 parallel threads per cluster. On each node there are 32 Gbytes of memory shared between the 16 PEs. Figure 1 illustrates the differences between the Phase 3 and Phase 4 systems. The two phases have a similar number of PEs, each with the same peak performance, and a Federation switch interconnect between the nodes with the same bandwidth of 2 Gbytes/s in each direction. However, the SMT available on the Phase 4 system, together with 3 times the aggregate memory bandwidth per PE, means that up to twice the performance can be achieved when comparing a run of IFS on Phase 3 and Phase 4, as will be shown in the following sections. The Phase 4 systems were named hpce and hpcf, following the Phase 3 systems, which were called hpcc and hpcd.
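As a rough consistency check, not given explicitly in the paper, the theoretical peak of one Phase 4 cluster follows directly from the quoted core count and per-core peak:

```latex
P_{\mathrm{peak}} = 2480~\mathrm{PEs} \times 7.6~\mathrm{Gflops/PE} \approx 18.8~\mathrm{Tflops~per~cluster}
```

The sustained figures reported in the following sections can be read against this number.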
Figure 1: Differences between the Phase 3 (hpcc & hpcd) and Phase 4 (hpce & hpcf) IBM systems: 16 PEs per node, with 3x the memory bandwidth per PE on Phase 4.
2. Operational Schedule at ECMWF

The operational forecasting suite is run on hpce twice a day to give 00Z and 12Z forecasts. The 4D-Var data assimilation is run with long and short data cut-offs, the long data cut-off run taking longer as it covers a 12-hour rather than a 6-hour observation window. While the data assimilation is running, only 24 nodes are being used for operations. When the data assimilation has finished, the 10-day forecast and EPS forecasts can start; these are run in parallel to fill the whole cluster. The 12Z run starts at 14:00 and finishes at about 19:00, with the schedule as follows:
14:00          Start
14:15 - 15:30  4D-Var data assimilation T799 / T95 / T255 L91, run on 24 nodes (long data cut-off, 12-hour window)
16:15 - 16:55  4D-Var data assimilation, run on 24 nodes (short data cut-off, 6-hour window)
16:55 - 18:25  10-day forecast T799 L91, run on 24 nodes
16:55 - 18:56  EPS: 50 runs of (T399 L62 10-day + T255 L62 6-day); each T399 part run on 3 nodes and each T255 part run on 1 node
19:00          Finish of product generation
The number of PEs used for the operational runs of IFS on hpce is half that used on hpcd to achieve the same run times, so in this respect the performance has increased by a factor of 2. The percentages of time spent in the different parts of the operational run are: EPS 61%, 4D-Var 22% and the high resolution forecast 17%. The remaining time on the machines is used for Research Department experiments (mostly 4D-Var) and work submitted by the Member States of ECMWF (25% of the total time). The load on the system is about 35,000 jobs per cluster per day with a peak submission rate of about 50 jobs per second. Of these, about 12,000 jobs are parallel jobs using MPI or OpenMP.
3. RAPS benchmark

ECMWF measures the performance of its HPC systems by running IFS 4D-Var data assimilation and forecast models in configurations as near as possible to current operations. These are supplied to computer manufacturers as part of the RAPS benchmarking suite. Figure 2 shows the scalability of the RAPS9 version of the IFS high resolution forecast on up to 2240 PEs of the Phase 4 IBM system, compared with a RAPS8 run on the Phase 3 system and an old run of RAPS4 on the Cray T3E in 1998. Dedicating almost the whole Phase 4 system to one forecast run achieves 1.6 Tflops on 2240 PEs, compared with 1 Tflops on the Phase 3 system and 70 Gflops on the T3E.

The acceptance tests for the Phase 4 IBM system included timed runs of IFS from the RAPS8 benchmark release. These were successfully run as follows:
- Two T799 10-day forecasts were run on each cluster. This gave a speed-up of 1.72 over the Phase 3 system and ran at 14.3% of peak, giving an aggregate of 2.43 Tflops per cluster.
- Two copies of 4D-Var were run on each cluster. This gave a speed-up of 1.55 over the Phase 3 system and ran at 8.1% of peak, giving an aggregate of 1.37 Tflops per cluster.
- 47 copies of T399 EPS 10-day forecasts were run on each cluster. This gave a speed-up of 1.77 over the Phase 3 system and ran at 15.6% of peak, giving an aggregate of 2.66 Tflops per cluster.
Figure 2: History of RAPS benchmarks for the IFS forecast, showing scalability (Gflops against number of PEs) on large numbers of PEs; curves: IBM p575+ 2006 RAPS-9 T799 L91, RAPS-8 T799 L91, and Cray T3E-1200 1998 RAPS-4 T213 L31.
4. Resolution T799 to T1279
ECMWF's high resolution forecast is currently run at T799 spectral truncation (which corresponds to a horizontal resolution of 25 km) with 91 vertical levels. The horizontal grid is chosen so that there is an even spacing of 843,490 points over the globe. The time-step is 720 seconds. Figure 3 shows the surface orography and the grid spacing over the British Isles at T799 resolution.

Figure 3: Model grid for T799 resolution over the British Isles (T799, 25 km, NGPTOT = 843,490, TSTEP = 720 s; flops for a 10-day forecast = 1.615×10^15).
In the future ECMWF plans to increase this resolution to T1279 corresponding to a 16 km horizontal grid spacing and 2,140,704 grid-points. The time-step will need to be reduced to 450 seconds. Figure 4 shows the grid spacing and the improvement to the surface orography at T1279 resolution.
Figure 4: Model grid for T1279 resolution over the British Isles (T1279, 16 km, NGPTOT = 2,140,704, TSTEP = 450 s; flops for a 10-day forecast = 7.207×10^15).
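A rough, back-of-the-envelope estimate of the relative cost of the two resolutions, not stated explicitly in the text, follows from the grid-point counts and time-steps quoted in Figures 3 and 4:

```latex
\frac{\mathrm{cost}(T1279)}{\mathrm{cost}(T799)}
  \approx \frac{2{,}140{,}704}{843{,}490}\times\frac{720~\mathrm{s}}{450~\mathrm{s}}
  \approx 2.54 \times 1.6 \approx 4.1
```

This is consistent with the factor of about 4 in node count reported in Section 6 for running T1279 in the same wall-clock time as T799; the flop counts in the figure captions give a slightly larger ratio (7.207/1.615 ≈ 4.5), since the spectral transforms grow faster than the grid-point work.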
For comparison, Figure 5 shows the orography and model grid at the current EPS resolution of T399.

Figure 5: Model grid for T399 resolution over the British Isles (T399, 50 km).
5. Performance of IFS forecast

Figure 6 shows the performance and scalability of the IFS forecast model at T799 and T1279 resolutions run on different numbers of PEs (from 384 to 2240) on the Phase 4 hpce system and on the Phase 3 hpcd system. In the hpcd case 4 OpenMP threads were used, and for the hpce runs 8 OpenMP threads with SMT were used. The sustained percentage of peak for a T799 run on 384 PEs (96 MPI tasks) increases from 8.34% to 12.96% from hpcd to hpce. The number of PEs used for operational runs of the T799 forecast is currently 384 on hpce, while on hpcd 768 PEs were needed to get the same elapsed time. On hpce, T1279 runs on 2240 PEs with a sustained percentage of peak that is slightly higher than achieved by T799 on 768 PEs.
Figure 6: RAPS9 - 10-day T799 L91 and T1279 L91 forecasts - percentage of peak against number of PEs (curves: hpce T1279, hpce T799, hpcd T799).
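The "percentage of peak" plotted in Figure 6 is a simple derived quantity; a minimal sketch of the calculation is given below, assuming the Power5+ per-core peak of 7.6 Gflop/s quoted in Section 1 (the function and variable names are illustrative and are not taken from the IFS or RAPS code).

```c
#include <stdio.h>

/* Per-PE peak for the IBM Power5+ nodes described in Section 1 (Gflop/s). */
#define PEAK_GFLOPS_PER_PE 7.6

/* Sustained percentage of peak for a run that achieved sustained_gflops
   in total on npes processing elements. */
static double percent_of_peak(double sustained_gflops, int npes)
{
    return 100.0 * sustained_gflops / (npes * PEAK_GFLOPS_PER_PE);
}

int main(void)
{
    /* Example from Section 3: a forecast sustaining 1.6 Tflops on 2240 PEs. */
    printf("2240 PEs at 1.6 Tflops: %.1f%% of peak\n",
           percent_of_peak(1600.0, 2240));
    return 0;
}
```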
The traditional speed-up curves for T799 and T1279 on hpce are shown in Figure 7 compared with the ideal speed-up. Both Figures 6 and 7 show the improvement in scalability and performance for T1279 compared with T799.
Figure 7: RAPS9 - 10-day T799 L91 and T1279 L91 forecasts - scalability on hpce (speed-up against number of PEs, compared with the ideal).
Figure 8 shows the scalability when a T799 forecast was run on up to 6144 PEs using MPI only on a Cray XT3 system. This was a run done without any special optimization for the Cray and is the first time that the IFS has been run on such a large number of PEs. It shows that as the number of PEs is increased the runtime keeps decreasing - even though the efficiency at these large numbers of PEs is not ideal - and some work needs to be done to improve this.
Figure 8: RAPS9 - T799 L91 10-day forecast on the Cray XT3 at ORNL - scalability on up to 6144 PEs, compared with the ideal.
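Speed-up and parallel efficiency of the kind plotted in Figures 7 and 8 are computed from measured run times relative to a reference run; a minimal sketch is shown below. The timings in the example are hypothetical, for illustration only, and are not measurements from the paper.

```c
#include <stdio.h>

/* Relative speed-up with respect to a reference run; with perfect scaling
   the speed-up equals p / p_ref and the efficiency equals 1. */
static double rel_speedup(double t_ref, double t)
{
    return t_ref / t;
}

static double efficiency(double t_ref, int p_ref, double t, int p)
{
    return rel_speedup(t_ref, t) * (double)p_ref / (double)p;
}

int main(void)
{
    /* Hypothetical timings: a reference run on 384 PEs and a run on 2240 PEs. */
    double t384 = 5200.0, t2240 = 1100.0;
    printf("speed-up %.2f (ideal %.2f), efficiency %.2f\n",
           rel_speedup(t384, t2240), 2240.0 / 384.0,
           efficiency(t384, 384, t2240, 2240));
    return 0;
}
```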
6. Profile of IFS forecast

Figure 9 shows a comparison of pie-chart profiles for T799 and T1279 10-day forecasts. These were run on 96 nodes of hpce, that is using 1536 PEs. At both resolutions most of the time is spent in the physics (for example the cloud and convection parameterisations). As the resolution is increased the N^3 behaviour of the Legendre Transform (LT) can be seen, as its percentage increases from 5.51% at T799 to 9.56% at T1279. The total time for the message passing communications is split between the Transpositions, which is the time for the re-distribution of the data before each transform, and the semi-Lagrangian (SL) communications, which is the wide-halo communication for the semi-Lagrangian interpolations. The percentage of the time in the Wave Model (WAM) decreases, as this has been run at 0.36 degree resolution in both cases.
Figure 9: RAPS9 - Profiles of T799 and T1279 10-day forecasts on hpce - 96 nodes. Categories: Physics, Dynamics, Radiation, Spectral, SL, WAM, LT, FT, SL Comms, Transpositions, Barrier.
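The growth of the LT share between the two resolutions is roughly what the operation counts suggest. As an approximate consistency check (mine, not from the paper): per time-step the Legendre transform scales as O(N^3) in the truncation N while the grid-point computations scale as O(N^2), so the ratio of LT to the rest should grow roughly linearly with N:

```latex
\left.\frac{\mathrm{LT}}{\mathrm{rest}}\right|_{T1279}
  \approx \frac{1279}{799}\times\frac{0.0551}{1-0.0551} \approx 0.093
  \quad\Rightarrow\quad \mathrm{LT~share} \approx 8.5\%
```

which is close to the 9.56% measured at T1279.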
The variation of the performance of different parts of the IFS is shown in Figure 10. This shows the Mflops per PE for the top 3 routines from a T799 forecast run. The routine used in the Legendre Transform, MXMAOP, which calls the library matrix multiply routine, goes fastest at nearly 5 Gflops; the table also shows that the use of SMT for this routine is not beneficial. This is because the optimized library routine fully uses the floating-point units, so doubling the number of threads per PE cannot give any improvement. The other routines, CLOUDSC (the cloud physics) and LAITQM (the semi-Lagrangian interpolation), do gain from SMT, which is the case for most of the routines in IFS. The sustained performance that can be achieved in these routines is much lower than for the matrix multiply as they contain more conditional code and memory traffic compared with the number of floating-point operations.
Figure 10: RAPS9 - Mflop/s per PE for the top three subroutines (MXMAOP, CLOUDSC and LAITQM) in the T799 IFS forecast, on hpce without SMT and with SMT.
A summary of the comparison of the T799 and T1279 performance is given in Figure 11. To run in the same wall clock time as T799, T1279 needs 4 times as many nodes. On 140 nodes the T1279 forecast, parallelized over 560 MPI tasks with 8 OpenMP threads each (4480 threads in total), achieves 2 Tflops, with just under 14% of the time spent in message passing communications.
Figure 11: RAPS9 - T799 and T1279 10-day forecasts compared on hpce, giving Tflop/s and percentage of peak for each configuration: T799 L91 on 24 nodes (96 MPI x 8 OMP), T1279 L91 on 96 nodes (384 x 8), T799 L91 on 140 nodes (560 x 8) and T1279 L91 on 140 nodes (560 x 8).
7. SMT and OpenMP

The benefit of OpenMP and SMT for the IFS, which has been parallelized using a hybrid MPI/OpenMP scheme, is demonstrated in Figure 12. The percentage of peak achieved with the same number of PEs but increasing numbers of OpenMP threads is shown for runs with and without SMT. For the runs without SMT the performance does not increase for numbers of threads greater than 4, but with SMT, 8 threads give a small improvement over 4 threads.
Figure 12: RAPS9 - T799 L91 10-day forecast - percentage of peak against the number of OpenMP threads per MPI task (1 to 8), with and without SMT, on 96 nodes.
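The hybrid scheme described here combines MPI tasks with OpenMP threads inside each task (and, with SMT, two hardware threads per PE). A minimal, generic sketch of this structure is shown below; it is not IFS code, and the loop, array and variable names are invented for illustration.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define NPOINTS 1000000   /* hypothetical number of local grid points */

static double field[NPOINTS];

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* MPI_THREAD_FUNNELED: only the master thread of each task makes MPI
       calls, the usual arrangement in hybrid MPI/OpenMP codes. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0, global_sum;

    /* Grid-point work is shared among the OpenMP threads of this task;
       with SMT, e.g. 8 threads may run on 4 physical PEs. */
#pragma omp parallel for reduction(+ : local_sum)
    for (int i = 0; i < NPOINTS; i++) {
        field[i] = (rank + 1.0) / (i + 1.0);   /* stand-in for physics work */
        local_sum += field[i];
    }

    /* Communication happens between MPI tasks only. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d tasks x %d threads, sum = %f\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```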
8. Performance of 4D-Var

The performance of 4D-Var data assimilation on hpce is shown in Figure 13. This is for a T799 L91 run with T95 and T255 resolutions in the two stages of minimization. The best performance is in the second minimization, which contains the full physics; this is by far the most expensive part of the calculation, as can be seen from the total floating-point operation counts in the last column.
Figure 13: RAPS9 - T799/T95/T255 L91 4D-Var run on hpce - 16 nodes.
9. Summary

- The Phase 4 IBM Power5+ system is 1.5 - 2.0 times faster than the Phase 3 IBM Power4+ system for the IFS forecast and 4D-Var.
- SMT works well with a hybrid MPI+OpenMP parallelization to give a higher sustained percentage of peak performance.
- The IFS scales well with resolution increase.
THE NERSC EXPERIENCE IMPLEMENTING A FACILITY WIDE HIGH PERFORMANCE FILESYSTEM

WILLIAM T. C. KRAMER*
National Energy Research Scientific Computing Facility (NERSC), Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720
kramer@nersc.gov

The NERSC Global Filesystem (NGF) is a high performance, parallel access file system that provides a common namespace and file access for all NERSC users across all NERSC systems. This paper discusses the motivation for implementing NGF, the implementation details of the system now that it is in production, the experiences of operating such a system, as well as the positive impact it has had on user productivity. The paper ends with a look forward to future expansion and enhancement of NGF.
1. Introduction

NERSC traditionally provides systems and services that maximize the scientific productivity of its user community. NERSC takes pride in its reputation for the expertise of its staff and the high quality of services delivered to its users. To maintain its effectiveness, NERSC proactively addresses new challenges. Based on our interactions with the NERSC user community and our monitoring of technology trends, we observe three trends that NERSC needs to address over the next several years:
- The widening gap between application performance and peak performance of high-end computing systems
- The recent emergence of large, multidisciplinary computational science teams in the Department of Energy (DOE) research community
- The flood of scientific data from both simulations and experiments, and the convergence of computational simulation with experimental data collection and analysis in complex workflows.
* This work is supported by the Office of Computational and Technology Research, Division of Mathematical, Information and Computational Sciences of the U.S. Department of Energy, under contract DE-AC03-76SF00098.
NERSC's responses to these trends are the three components of the science-driven strategy that NERSC will implement and realize in the next five years: science-driven systems, science-driven services, and science-driven analytics. This balanced set of objectives will be critical for the future of the enterprise and its ability to serve the DOE scientific community. This paper discusses one of the major NERSC initiatives that is a specific response to the dramatic increase in the data arriving from DOE-funded research projects. This data is stored at NERSC because NERSC provides a reliable long-term storage environment that assures the availability and accessibility of data for the community. NERSC helps accelerate this development by deploying Grid technology on its systems and by enabling and tuning high performance wide area network connections to major facilities, for example the Relativistic Heavy Ion Collider at Brookhaven National Laboratory. Another key aspect of addressing the flood of scientific data is providing a complete, high performance infrastructure for data storage. This paper concentrates on NERSC's experience in building a facility wide global file system, called the NERSC Global Filesystem (NGF), which provides a new feature of the underlying infrastructure to address the increased reliance on data resources. NGF completes an environment that allows easier simulation, data reduction, analysis and visualization of large datasets derived from both simulation and experiment.
2. The NERSC Storage Vision

The NERSC storage vision is to implement an integrated storage system that makes advanced scientific research more efficient and productive. The storage vision has three major components: on-line storage local to systems, a large multi-Petabyte archive, and the new component, the NERSC Global Filesystem. Indeed, a successful NGF may eventually subsume the other layers. Further, the storage system should simplify end user data management by providing a shared disk file system for all NERSC production systems. We call this the NERSC Global Filesystem or NGF.
2.1. The preliminary period - GUPFS

Over the past several years, NERSC has gone from a series of evaluation efforts to a complete, production NGF. The evaluation effort spanned several years and was called the GUPFS project [1]. GUPFS stands for the key attributes of such a system, namely a Global, Unified, Parallel File System. Global means the file system is shared by major NERSC systems, currently operating or planned for the future. Unified means the file system uses consolidated storage and provides a single, unified name space for all users on all systems. It also implies automatically sharing user files between systems without replication. Parallel indicates the file system provides performance that is scalable as the number of clients, connections and storage devices increases. Additionally, the performance of the global file system should be very close to that of a native (single cluster) file system when the number of connections, clients and storage devices are equivalent. Further expectations of the global attributes are that the file system will integrate with near-line file archives to provide extremely large, virtual storage allocations for users that are automatically managed. Direct integration with Grid functionality and the ability to geographically distribute the single file system are desirable attributes as well.
2.2. GUPFS Lessons

During the evaluation period, NERSC staff realized several distinguishing features were needed for a successful implementation. The first realization is that there are three layers of an implementation that must be considered.
1. The storage devices and controllers. This layer provides the fundamental storage devices along with their controllers. There are many choices for the discrete components for this layer, and different solutions fit different performance and cost profiles. The devices may be controllers such as Data Direct Networks' S2A9500 [2] or LSI's Engenio 6998 Storage System [3]. These devices can be SAN or NAS storage. Likewise the storage units can be SATA, SCSI, fibre channel or other types. Since many controllers support different levels of disk, this layer may be considered to have two sub-layers, but for the purposes of this paper we will consider it one.
2. The connection fabric between the storage units and the systems is another layer. This is the area that differs most between single system and facility wide parallel file systems. Fabrics range from Gigabit/10-Gigabit Ethernet [4] to Fibre Channel [5] to Infiniband [6]. Single system parallel file systems usually have a single interconnect fabric between the storage devices and the client nodes. However, in a facility wide file system, with systems at different price points and with different built-in connection interfaces and configurations, a global file system is required to span multiple fabrics.
3. The third layer of technology is the facility wide file system software, for example IBM's General Parallel File System (GPFS) [7] or Cluster File Systems' Lustre [8].
This evaluation project assessed a wide range of technology for functionality, interoperability and performance. Reports are on the NERSC GUPFS web site [9], including two annual summaries [10][11]. Other insights from GUPFS were that I/O performance is primarily a function of the hardware provisioning (both storage devices and interconnects), while metadata performance and reliability are mostly a function of the file system. Another conclusion of the evaluation is that the key, long term decision is which file system is selected. It was also determined that all three layers of technology were robust enough to move to production in 2005/2006. Technology assessed in GUPFS includes:

Storage
- Traditional storage: Dot Hill, Silicon Gear, Chaparral
- New storage: Yotta Yotta GSX 2400, EMC CX 600, 3PAR, DDN S2A 8500

Fabrics
- FC (1 Gb/s and 2 Gb/s): Brocade Silkworm, Qlogic SANbox2, Cisco MDS 9509, SANDial Shadow 14000
- Ethernet (iSCSI): Cisco SN 5428, Intel and Adaptec iSCSI HBA, Adaptec TOE, Cisco MDS 9509
- Infiniband (1x and 4x): Infinicon and Topspin IB to GE/FC bridges (SRP over IB, iSCSI over IB)
- Interconnect: Myrinet 2000 (Rev D)

File Systems
- Sistina GFS 4.2, 5.0, 5.1, and 5.2 Beta
- ADIC StorNext File System 2.0 and 2.2
- Lustre 0.6 (1.0 Beta 1), 0.9.2, 1.0, 1.0.{1,2,3,4}
- IBM GPFS for Linux, 1.3 and 2.2
- Panasas
3. The NERSC Global Filesystem (NGF)
Before talking about the details of NGF, it is useful to consider a real example of a climate workflow done with a traditional storage arrangement. The exploration of the impact of climate warming on severe storm activity is an important area that requires high resolution climate models (on the order of 10-20 km) as well as many members in an ensemble suite. This task was used to explore the impact of facility wide file systems at SC05 as part of the Tri-Challenge (problems that are simultaneously analytically, bandwidth and storage challenging). A group at LBNL made up of Will Baird, Michael Wehner, Jonathan Carter, Tavia Stone, Cristina Siegerist and Wes Bethel addressed this problem in a novel manner using NGF, and won an award for their solution [12]. The workflow for this analysis is to 1) make multiple runs of the fvCAM with different boundary conditions using a high-resolution grid, 2) take the output of fvCAM and filter out the data of interest, and then 3) visualize the formation of storms in the North Atlantic basin. The goal was comparing real hurricane data with simulation results. This is a fairly generic workflow that holds true for many HPC applications.

With traditional storage arrangements, where all storage is isolated to each system, two approaches are possible. In the first, all steps of the workflow would be done on a single system. However, it is rare that a single system is optimized for all steps in a workflow. Alternatively, different systems, with the appropriate balance for each step, could be used. This entails multiple network transfers of data sets to and from systems. Figure 1 illustrates the latter approach. There are several drawbacks to both approaches. If one system is used, it is less efficient for certain steps of the workflow. If the alternative is used, the user spends time copying files, assuring correct transfer, and assuring alignment of files if reruns are needed. Both approaches decrease science productivity. For the second approach, there are also wasted resources in that extra network bandwidth is used for transfer and, in most cases, there will be redundant files left on one system while the other system is used. NGF improves the productivity of the scientist and the system by reducing data transfers, providing more efficient storage access, reducing data redundancy and eliminating workflow steps.
3.1. The NGF Vision

Based on the experience from the GUPFS evaluation project, NERSC went forward with a production implementation of the NERSC Global Filesystem in late 2005. The vision for NGF is an expansion of the GUPFS goals. The NGF is a single storage pool, decoupled from individual NERSC computational and analytics systems. The NERSC system architecture and range of systems is shown in Figure 2.
Figure 1: The multiple workflow steps of doing an ensemble analysis of the impact of global warming on severe weather in the North Atlantic basin.
Figure 2: The NERSC system architecture shows the major NERSC systems and interconnects as of February 2007, including several systems that were in the top 10 in the world at their time of introduction. Annotated systems include HPSS (100 TB of cache disk; STK robots with 44,000 tape slots and a maximum capacity of 44 PB) and the Cray XT NERSC-5 "Franklin" (19,584 processors at 5.2 Gflop/s each, 39 TB of memory and shared disk). Ratio = (RAM bytes per flop, disk bytes per flop).
Figure 3: The distribution of NERSC's computational allocations by science discipline for 2007 shows a wide range of science projects needing to be supported. Note that the distribution shifts each year.
Figure 3 shows NERSC supports a large and diverse set of scientific projects - about 400 projects this year - ranging from single principal investigators to large collaborations with hundreds of scientists and very large scale projects that address national needs. Because of this diversity, NGF must support diverse file access patterns, supporting large and small, many and few, permanent and transient files. All supported systems have access to all storage, regardless of their fabric or system type. Users see the same file from all systems, with access to data as soon as it is created or modified. There should be no need for data replication except as formal backups to off-line storage. The performance seen by any particular system is determined by the fabric used for connections. NGF is high performance and capable of supporting large capacity storage. The performance should be near that of a native file system. It has to have flexible management of storage resources and be able to accommodate new storage (faster and cheaper), fabrics and systems as needed. Reliability is important as NGF becomes more of a single point of access for all data. NGF will be integrated with the NERSC mass storage - currently HPSS - with the potential to provide direct HSM and backups through HPSS without impacting computational systems. NERSC will continue to provide archive storage as well as on-line/near-line storage, regardless of NGF, so it is not meant to present a unified namespace with HPSS.
3.2. NGF Implementation

The initial version of NGF went into service with 20 TB of disk in September 2005. The NGF system architecture is shown in Figure 4. This system supported five different client systems from four different vendors. It has since been expanded to have more than 70 TB of file storage.
Figure 4: The initial implementation of NGF. Note that each system has its own parallel file system, since they already existed when NGF was deployed. As part of the future systems, NERSC will consider buying systems without disk and having the NGF expanded to support new systems.
NERSC traditionally has two layers of storage on its major systems, /home and /scratch. /home file systems are intended to provide guaranteed storage for users, with full backup on a regular basis. Home is provided as a place for users to store their program source, scripts, job input and other data. It usually represents between 10 and 20% of the storage on a system and is managed by placing disk quotas on users and groups. At the other end, /scratch is designed to be highly volatile, holding job output between runs. It is typically heavily over-subscribed, and files are expected to reside on scratch from days to at most a few weeks. The initial production implementation of NGF introduced a third storage class called /project. This storage class is in response to the requirements of a number of our science projects for a shared repository to hold data and codes that are common across large projects with many users. Examples of this data may be common code trees for project applications or a large data set that is used by all members of the project. Project data is typically large and persistent for substantial periods.
3.3. NGF as of April 2007
The current production NGF file system software is implemented as sets of stand-alone GPFS 3.1 PTF10 owning clusters. The NGF is separate from all client production systems and includes 24 GPFS NSD servers and service nodes, running Linux SLES9 SP3. The major client systems mount NGF file systems as GPFS multi-cluster remote clients. Each of the separate NGF owning clusters has its own file system as well. The NERSC legacy systems mostly access NGF data over 10 Gigabit Ethernet, with some of the more recent systems using Fibre Channel to access NGF data directly. NGF has over 70 TB of usable storage space using DDN 8500 with SATA drives, IBM DS4500 SATA drives and IBM DS4500 FC drives. It is configured to support 50 million inodes. The initial configuration was designed for function and stability rather than performance, but still the system provides 3+ GB/s bandwidth for streaming I/O. Most storage and servers external to NERSC systems are distributed over the 10 Gigabit Ethernet infrastructure, with two systems attached directly over Fibre Channel. NGF supports a range of both large and small files. Since currently it holds only persistent data, it is backed up to the NERSC HPSS archive. Current default quotas are 1 TB of storage and 250,000 inodes per project, but there are several projects with significantly more quota and others who have requests for increases pending.

NGF is currently mounted on all major NERSC systems (1250+ clients). The systems include Jacquard*, a 740 CPU† LNXI Opteron system running SLES 9; LNXI has first and second level support responsibilities, with Level 3 and 4 support provided by IBM. DaVinci, an SGI Altix running SLES 9 SP3, has direct storage access via Fibre Channel connections. The PDSF is a self-integrated "Beowulf" Linux cluster with over 600 IA32 CPUs running Scientific Linux. Bassi, an IBM Power5 running AIX 5.3, and Seaborg, an IBM SP running AIX 5.2, are IBM supported systems and the most computationally powerful. Global /project access characteristics are that it is remotely mounted with R/W access, with the nosuid and nodev mount options in place and root mapped to nobody:nobody. This provides protection in that if one system is compromised due to a security incident, the attackers cannot gain privileges on other systems merely because they share access to NGF.
* Note Jacquard was the first non-IBM hardware to provide supported GPFS.
† For the sake of this paper, a CPU is the same as a core on a multi-core socket system.
4. NGF Usage and User Satisfaction

As of May 2007, there are 77 projects using NGF (about 20% of all NERSC projects). Access is granted by the request of the principal investigator of each project. 62.3 TB (1 TB = 2^40 bytes) of the file space is in use (88% of capacity), with 11.7 million inodes used (24% of capacity). As with most file systems, the majority of the bytes are used by a minority of the projects. The breakdown for NGF usage is that 14 projects account for about 70% of the storage used, but only 4 projects account for about 50% of the storage used. Likewise, 12 projects account for about 70% of the inodes used and 3 projects account for about 50% of the inodes used. Figure 5 shows the usage of NGF according to science discipline over time. An interesting point is the amount of storage in the "other" category, which represents 67 projects and is growing significantly of late.
Figure 5: The amount of data stored on NGF by science discipline over time.
Reliability of file systems is an important aspect of user satisfaction. Over the initial production operation there were only eight outages in 326 days of NGF production service. This means the NGF system-wide Mean Time Between Failure (MTBF) was 41 days, with a Mean Time To Repair (MTTR) of only 4.75 hours. The causes ranged from a gateway node crash resulting in a GPFS hang (reported as a GPFS bug), a FAStT disk failure, a server crash, FC switch failures (reported as firmware bugs) and a DDN disk failure that generated a controller failure (reported as a DDN firmware bug). All bugs were fixed by the vendor in a relatively short time. There have been series of 165, 70 and 39 days without any user-observed failure. In fact, the FC switch failures generated multiple outages over a 9 hour period. If these multiple outages are treated as a single outage, then there were 6 outages in 326 days, an MTBF of 54 days and an MTTR of 9 hours and 24 minutes. Reliability has since improved by implementing pro-active monitoring, developing better procedures for troubleshooting, pro-active replacement of suspect components and the replacement of a set of old server nodes, along with implementing the bug fixes from vendors. NERSC uses Nagios for event detection and notification. Events include disk faults, soft failures, and server crashes. Nagios allows event-driven procedures to notify operations staff, who can respond to problems. Further, Cacti is used for performance tracking: of the NSD servers for disk I/O, network traffic, CPU and memory usage, and load average; of the Fibre Channel switches for FC port statistics, fan and temperature; and of the DDN controllers for FC port statistics (IO/s, MB/s). Thus, between Oct 1, 2006 and March 31, 2007, despite the increased load, NGF had only two hardware failures and one software failure. The NGF metadata and controllers were put onto Uninterruptible Power Supplies in January 2007, and there have been no failures since.
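The availability figures quoted above follow from simple arithmetic, shown here as a check using only the numbers given in the text:

```latex
\mathrm{MTBF} = \frac{326~\text{days}}{8~\text{outages}} \approx 41~\text{days},
\qquad
\mathrm{MTBF}_{\text{combined}} = \frac{326~\text{days}}{6~\text{outages}} \approx 54~\text{days}
```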
4.1. Users are Satisfied with NGF

The NERSC users of NGF report high levels of satisfaction. According to user testimonials, they enjoy the increased ease of use of NGF, and the fact that they can save large amounts of data using Unix file groups rather than HPSS repositories is a plus. NGF users feel the quotas and performance are sufficient for their projects, but of course would always like more. Some noted that the extended outage on one of our systems had less impact because of NGF, since they were able to promptly move to other NERSC systems rather than waiting for the one system to return to service. One of the more telling user satisfaction indicators is the NERSC annual user survey. In the NERSC 2005 User Survey [13], users mentioned that moving data between machines was an inhibitor to doing visualization and other work. This survey was taken just as NGF was entering service. One year later, moving files between machines did not come up as an issue on the 2006 survey, and users were satisfied with NGF reliability and performance.
5. The Future of NGF

The initial success of NGF, and the overall positive user response, means NERSC is pursuing the next stages of NGF. One step is to continue to expand the file system with more storage to keep up with the increasing demand. The other is to plan to include NERSC-5, the next major computational system at NERSC. This system is the largest Cray XT-4 in existence, with almost 20,000 cores. As an experiment, NERSC demonstrated GPFS can be deployed on the XT service nodes and coexist with its native Lustre file system. Once the system is in full service, NERSC will integrate it with NGF, using one of several methods. NERSC is also actively planning a long term solution that assures a highly effective facility wide file system that supports all NERSC systems, regardless of manufacturer, architecture or size. This solution will expand on the experiences to date and will cover /home and /project file directories, and likely even the /scratch file directories for newer systems, all at high performance rates. Future NERSC systems will be required to seamlessly integrate with NGF, so that the future NERSC system architecture will resemble Figure 6. Another addition to the NGF systems will be an integration with the NERSC HPSS archive to provide hierarchical storage management between files stored in NGF and the archive.
Figure 6 : The NERSC System Architecture as new systems come into service.
6. Summary

Four years ago, NERSC set a goal of having a single uniform global file system running at high performance to support all its major systems. Two years ago, we understood what needed to be done to realize that goal for our diverse user community. Now, we have achieved that goal and have in production a reliable, highly functional facility-wide global file system that is facilitating increased productivity for the NERSC user community. We are implementing a path forward that allows all architectures, be they legacy, current or future, to participate fully in NGF. Two years from now we expect to report that all systems and users are using the NGF, many exclusively as their only storage.

Acknowledgments
I want to acknowledge all the hard working NERSC staff who have brought NGF from a vision 5 years ago to a full fledged reality today. In particular, Greg Butler and Rei Lee who are the NGF project leader and senior technological member respectively. Further special thanks go to Jim Craw, Will Baird, Matt Andrews and Jonathan Carter.
References
1. http://www.nersc.gov/projects/GUPFS/
2. http://www.datadirectnet.com/s2a9500.html
3. http://www.lsi.com/storage-home/products-home/external-raid/index.html
4. See http://en.wikipedia.org/wiki/Gigabit_Ethernet and http://standards.ieee.org/getieee802/802.3.html
5. http://en.wikipedia.org/wiki/Fibre_channel and http://tools.ietf.org/html/rfc4369
6. http://www.infinibandta.org/home
7. http://www-03.ibm.com/systems/clusters/software/gpfs.html
8. http://www.lustre.org/
9. http://www.nersc.gov/projects/GUPFS/
10. Butler, G. F., R. C. Lee, C. E. Tull, M. L. Welcome and W. C. L. (2004). The global unified parallel file system (GUPFS) project: FY 2003 activities and results. LBNL-52456-2003. http://www-library.lbl.gov/docs/LBNL/524/56/PDF/LBNL-52456-2003.pdf
11. Butler, G. F., R. C. Lee and M. L. Welcome (2003). The global unified parallel file system (GUPFS) project: FY 2002 activities and results. LBNL-52456. http://www-library.lbl.gov/docs/LBNL/524/56/PDF/LBNL-52456.pdf
12. www.nersc.gov/news/nerscnews/NERSCNews_2006_02.pdf
13. http://www.nersc.gov/news/survey/
RECENT DEVELOPMENTS OF NCEP GFS FOR HIGH PERFORMANCE COMPUTING
HANN-MING HENRY JUANG
Environmental Modeling Center, National Centers for Environmental Prediction, NOAA, 5200 Auth Road, Camp Springs, MD 20746, USA

The National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) has been implemented with generalized hybrid vertical coordinates to couple with the space environmental model, and to make possible runs at very high resolution to couple with land, ocean, and ice models. The ESMF (Earth System Modeling Framework) is used as the superstructure of the model in order to have a common model configuration to couple with other ESMF-type models. All the development activities approach the requirements of high performance computing, in terms of high resolution in the vertical and horizontal with coupling among different models. In this paper, the most recent results relating to the vertical coordinate and the possibility of coupling with the space environmental model are presented. All others, relating to ESMF coupling and high resolution in the horizontal (related to the spectral transform), are illustrated as on-going developments. To couple with the space environmental model, a generalized vertical coordinate system is implemented in GFS; a specific function to describe the vertical coordinates is used and is shown to have positive impact, especially from use of the sigma-theta hybrid type of vertical coordinates.
1. Introduction

The National Centers for Environmental Prediction (NCEP) has been running a global spectral model (GSM) in operations since the late 80s, and has been using high performance computing to meet the requirements of the operational schedule. This model has been through several code reconstructions: initially for vector systems (CDC Cyber 205 and Cray-1), then multi-tasking systems (Cray XMP and YMP), and finally the distributed / shared / multi-processor / multi-threading systems from IBM. It was renamed the NCEP Global Forecast System (GFS) around the late 90s. Since then many improvements have been made to the model physics [1][10] and to horizontal and vertical resolution [7][8], while the model dynamics has had little development. The components of the NCEP GFS dynamics are the hydrostatic primitive equation set, spectral computation by spherical transform in the horizontal discretization, a leap-frog time scheme with semi-implicit treatment, and zonal damping based on the maximal wind speed at a given model layer. The vertical discretization is a second order finite difference scheme.

Since certain changes to the NCEP GFS, especially changes to the model dynamics such as changes of prognostic variables, may influence other components of the model, the model preparation (such as data assimilation), the model post-processing and even the downstream models, incremental changes to the NCEP GFS model dynamics were proposed. Models such as ECMWF's IFS and the UK Met Office's UM use a semi-Lagrangian scheme which provides a significant saving in model execution time. Such an approach has been under development at NCEP for some years but still requires further development (J. Sela 2006, personal communication). For hybrid vertical coordinates, there are several schemes being used in models, such as sigma-pressure used in the ECMWF IFS and sigma-theta used by research models such as the University of Wisconsin global model [9] and the University of California at Los Angeles global model [2]. We will introduce our approach to this in this paper.

Increasing model resolution, refining model numerical methods, and improving parameterizations of model physics all drive the need for increases in the capacity of computer systems. Our recent implementation of a model superstructure with the Earth System Modeling Framework (ESMF) will be used for coupling either different members of GFS ensemble forecasts or other different models, such as ocean, land, and the space environmental model. ESMF coupling can be run in sequential and parallel mode; both modes require the computational resources of high performance computing, and we will give one on-going development example to show the need for tera-computing in this paper. In section 2 the recent developments, mainly in model dynamics, are described, including ESMF, vertical coordinates, and high resolution concerning the spectral transformation. Some results from recent developments are also shown in section 2. The discussion and future concerns are in the last section.
2. Recent Developments

The main implementation in this paper is for model structure and model dynamics only. The recent changes related to model physics are not included in this paper.
2.1. Preparation for coupling models - ESMF

In order to have a common superstructure and/or interfaces among models for easy coupling, the first step of the recent model structure/dynamics changes is to implement the common model superstructure to provide common interfaces for each model. The selected common software package for the common superstructure is based on the inter-agency agreement among institutes in the USA. The selected package is called ESMF, as mentioned in the previous section. The concept, method and software download can be found on the web at http://esmf.ucar.edu. There are superstructure and infrastructure software layers in ESMF, but we decided to use only its superstructure and some infrastructure utilities, such as time clock management and error messaging.

The most recent implementation of the ESMF superstructure for GFS is the ensemble forecast. We use ESMF to have several copies of GFS running together simultaneously. GFS can be run with multiple processes by MPI and multiple threads by OpenMP as a single-instruction multiple-data system; with ESMF the single-instruction element has been implemented as an object-oriented program, so that several copies of GFS run together as a multiple-instruction multiple-data system. Table 1 shows the wall clock timing among three configurations. Adding ESMF to NCEP GFS to run 11 members of GFS concurrently requires about the same wall time as doing separate parallel runs with each member using 5 CPUs. This indicates that adding ESMF to NCEP GFS requires negligible extra computational time, and concurrent/parallel runs are better than running members sequentially.

Table 1. Timing for NCEP GFS at T126L64 resolution with ESMF for 48-hour ensemble forecasts.

T126L64 48 h forecast   Members   CPUs   Event wall time (s)   Total wall time (s)
concurrent              11        55     2082                  2082
parallel                1         5      2086                  2086
sequential              1         65     214                   2354
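One common way to obtain the "concurrent" configuration in Table 1 - several model instances sharing one parallel job - is to split the global communicator so that each ensemble member runs on its own group of processes. The sketch below shows that generic pattern only; it is not the ESMF-based implementation used in the GFS, and the member count and function names are illustrative.

```c
#include <mpi.h>
#include <stdio.h>

/* Placeholder for a full model integration run by one ensemble member on
   the communicator it is given (the real model would live here). */
static void run_member(MPI_Comm comm, int member)
{
    int rank;
    MPI_Comm_rank(comm, &rank);
    if (rank == 0)
        printf("member %d running on its own communicator\n", member);
}

int main(int argc, char **argv)
{
    const int nmembers = 11;   /* as in Table 1: 11 ensemble members */
    int world_rank, world_size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* Assumes the total process count is a multiple of nmembers,
       e.g. 55 processes -> 11 groups of 5; each group is one member. */
    int member = world_rank / (world_size / nmembers);
    MPI_Comm member_comm;
    MPI_Comm_split(MPI_COMM_WORLD, member, world_rank, &member_comm);

    run_member(member_comm, member);

    MPI_Comm_free(&member_comm);
    MPI_Finalize();
    return 0;
}
```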
2.2. Generalized coordinates to couple with the space environmental model

The Space Environment Center (SEC) is one of the centers under NCEP, and has a model to predict space weather around the very top of the atmosphere. The sigma vertical coordinates used in NCEP GFS may not be appropriate for coupling with the SEC model, which uses pressure coordinates rather than sigma vertical coordinates. Not only because of coupling with the SEC model but also because of the apparent numerical error generated by sigma coordinates, modifying the vertical coordinate into generalized vertical coordinates should help further development of the model dynamics. Changing to generalized vertical coordinates involves not only re-writing the model dynamics equations but also the vertical discretization of the model dynamics. The model dynamics equations and model discretization can be found in NCEP Office Note 445 [4]. Multi-conserving schemes, including angular momentum, total energy, potential temperature and total mass conservation, are considered in the vertical discretization. The generalized vertical coordinates can be used with any form of vertical coordinates; however, a specific function is used to further specify the vertical coordinates, so the model can run pure sigma, sigma-pressure, sigma-theta, or a sigma-theta-pressure combination. Basically, near the ground surface the model coordinate is terrain-following sigma; above the lowest model layers it becomes a mix of sigma with either pressure or potential temperature.
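As a concrete illustration of one such specific function - the familiar sigma-pressure hybrid used, for example, in the ECMWF IFS, rather than the particular generalized function adopted here - the pressure on model level k can be written as

```latex
p_k(\lambda,\phi,t) = A_k + B_k\, p_s(\lambda,\phi,t)
```

where p_s is the surface pressure and the coefficients A_k, B_k are chosen so that B_k tends to 1 and A_k to 0 near the surface (pure terrain-following sigma) and B_k tends to 0 near the top (pure pressure). A sigma-theta hybrid instead lets the upper-level target surfaces relax towards isentropes rather than pressure surfaces.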
Fig. 1 The anomaly correlations from NCEP GFS using three different generalized hybrid coordinates
Figure 1 shows the anomaly correlation over a period of time among preliminary results from three different combinations of hybrid vertical coordinates obtained from the previously mentioned specific vertical function, including pure sigma, sigma-pressure and sigma-theta. It shows that the sigma-theta coordinates result in consistently better scores compared to sigma and sigma-pressure. Note that, although not shown here, the anomaly correlation scores over the hemispheres between 20 and 80 degrees show nearly no difference. Thus, the version with sigma-theta hybrid vertical coordinates was used for a longer period of daily runs and compared to the operational version with pure sigma vertical coordinates. Again, the anomaly correlation over the hemispheres for the sigma-theta hybrid coordinates shows no improvement compared to operations, but for the root-mean-square (RMS) error of the wind field over the tropical area, the results from the sigma-theta hybrid vertical coordinates show less error than those of the operational model [5]. Figure 2 shows the frequency of the best performance of hurricane track forecasts among different versions of NCEP GFS; the hybrid sigma-theta is the best among all configurations, including the operational GFS, two ready-to-be-implemented GFS versions, and the sigma-theta version of GFS.
Fig. 2 Frequency of the best performance as compared among four experiments: AVNO for the operational GFS, PARH and PARM for two parallel GFS versions possibly to be selected for the next operational implementation, and FDST for the sigma-theta generalized hybrid GFS.
As mentioned previously, coupling the SEC model with NCEP GFS is one of the major reasons to have generalized vertical coordinates, so that we have pressure layers at the top of the model for easy coupling to the SEC model. Since the SEC model deals with different gases from the lower atmosphere, it would be more accurate to treat the gas constant and specific heat capacity separately for different gases and model tracers. In the first step, we treat dry air as one gas, and moisture, ozone and water vapour as other gases. In order to consider this, the thermodynamic equation can be re-derived from the internal energy equation. After the derivation, the thermodynamic equation involves a time-dependent specific heat capacity at constant pressure. Instead of using temperature and heat capacity as separate time-dependent variables, they can be combined together to form enthalpy; then the possible complexity in the thermodynamic equation due to the time dependence of the specific heat capacity can be simplified to a form similar to the use of virtual temperature in the prognostic equation. The detailed derivation of the thermodynamic equation and others related to enthalpy as a prognostic variable will be published elsewhere.

2.3. Fine resolution to couple with land/ocean/ice models

One reasonable question to ask when increasing the horizontal resolution of the spectral model is whether the spectral model can resolve local high-resolution features. Though global spectral models have already shown the capability to run at very fine resolution, such as the 10 km Japanese model [11], it is better to confirm that NCEP GFS can do the same at such a fine horizontal resolution.
Fig. 3 Terrain height in meters and wind vector after 6 hours of integration from T382 GFS.

Fig. 4 Terrain height in meters and wind vector after 6 hours of integration from T1278 GFS.
Figures 3 and 4 show the wind field over the island of Hawaii after 6 hours of integration by T382 and T1278, respectively, and Figs. 5 and 6 show the rainfall patterns from T382 and T1278, respectively. The lee-side vortex in T1278 cannot be discerned in T382, as can be seen by comparing Figs. 3 and 4.
Fig. 5 One-hour accumulated rainfall from T382 after 7 hours of integration.

Fig. 6 One-hour accumulated rainfall from T1278 after 7 hours of integration.
The mesoscale features of rainfall, such as mountain-related rainfall along the border of China and North Korea, the frontal rainfall patterns along the China Yellow Sea, South Korea and Japan Sea area, and the cloud-street type of rainfall over tropical oceans, are shown by T1278 but not shown clearly at T382. All these indicate that T1278 can produce mesoscale features even though its initial condition is from T382. Increasing horizontal resolution requires more computational time due to the increasing number of grid points and the increasing number of time steps (with a smaller time step) needed for stability. Table 2 shows the dimensional factors between T382 and T1278. It indicates that the increase in computational cost is about a factor of 50 for going from T382 to T1278, though only a 3.35 times increase in wave truncation.
The increase of the resolution dimension used to give a linear increase in computational time. This is no longer the case, due to the spectral computation in the horizontal discretization. NCEP GFS has a reduced Gaussian grid for the spherical transform between spectral coefficients and grid-point values. Though the reduced grid saves more computational time as the horizontal resolution is increased [3], the increase of computational time in the Legendre transform is not arithmetic but geometric. Figure 7 shows a breakdown of the wall time at T1278, with more than 70% of the wall time being used by the Legendre transformation. It indicates that high performance computing is required to support increases in horizontal resolution in NCEP's GFS. However, the time spent in the Legendre-to-grid barrier (17%) indicates that the design of the decomposition may have room for improvement, such as going from 1-D [12] to 2-D [5] with threads. Furthermore, a semi-Lagrangian scheme can be implemented, so that the spectral model can reach efficient performance such as that of the ECMWF IFS [13] shown in this volume.
Fig. 7 Pie chart of the wall time spent for the T1278 NCEP GFS.
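The factor of roughly 50 can be made plausible with a crude scaling argument (my own estimate, not from the paper). With truncation ratio 1278/382 ≈ 3.35, grid-point work per step grows like N^2 and the number of time steps like N, while the Legendre transform costs O(N^3) per step:

```latex
\underbrace{3.35^{3}}_{\text{grid-point total}} \approx 38,
\qquad
\underbrace{3.35^{4}}_{\text{Legendre total}} \approx 126
```

An overall factor of about 50 therefore lies between these two limits, consistent with the transform taking a large and growing share of the run, as the pie chart in Fig. 7 shows.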
3. Discurssion Im~lementat~on of ESMF into NCEP GFS makes capabilities to use HPC in tera-computing, not only multi-model coupling but also in multi-members interactive ensemble forecasts simultaneously. The other way to put NCEP GFS in the stage for using HPC is the classical way by increasing mode1 resolution both in vertical and horizontal. Increasing vertical resolution with the extension of the model top to couple with SEC model leads to implement generalized coordinates and model variables to handle
different gases as well as tracers. The generalized vertical coordinates and the enthalpy form of the thermodynamic equation, with the gas constant and specific heat capacity depending on the different gases, are the first steps to implement into GFS when considering model extension with increasing resolution in the vertical. Nevertheless, the further consideration of a non-shallow atmosphere should follow later. Increasing horizontal resolution does not raise the concerns about gas content that arise with the extension of the vertical domain, but two issues should be considered: one is the timing problem of the Legendre transformation shown here; the other is the validity of the prediction equations, for which a non-hydrostatic model system should be considered. Thus, increasing model resolution in all directions together with an extension of the model top will require NCEP GFS to relax all dynamical approximations towards a non-hydrostatic, non-shallow atmosphere system with consideration of different gases, and to speed up the Legendre transformation.
Acknowledgement
Thanks to Dr. Wei-Yu Yang for providing information on model testing with ESMF components.
References
1. S.-Y. Hong and H.-L. Pan: Nonlocal boundary layer vertical diffusion in a medium-range forecast model. Mon. Wea. Rev., 124, 2322-2339 (1996).
2. D. R. Johnson and Z. Yuan: The development and initial tests of an atmospheric model based on a vertical coordinate with a smooth transition from terrain following to isentropic coordinates. Adv. Atmos. Sci., 15, 283-299 (1998).
3. H.-M. H. Juang: A reduced spectral transform for the NCEP seasonal forecast global spectral atmospheric model. Mon. Wea. Rev., 132, 1019-1035 (2004).
4. H.-M. H. Juang: Discrete generalized hybrid vertical coordinates by a mass, energy, and angular momentum conserving finite-difference scheme. NCEP Office Note, 455, 35pp (2005).
5. H.-M. H. Juang: The performance of hybrid sigma-theta NCEP GFS. Proceedings, Conference on Weather Analysis and Forecasting, October 18-20, 2006, Central Weather Bureau, Taipei, Taiwan, p. 2.20-2.22 (2006).
6. H.-M. H. Juang and M. Kanamitsu: The computational performance of the NCEP seasonal forecast model on Fujitsu VPP5000 at ECMWF.
Developments in Teracomputing, Proceedings of the Ninth ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 13-17 November 2000 (2001).
7. M. Kanamitsu: Description of the NMC global data assimilation and forecast system. Wea. Forecasting, 4, 335-342 (1989).
8. M. Kanamitsu, J. C. Alpert, K. A. Campana, P. M. Caplan, D. G. Deaven, M. Iredell, B. Katz, H.-L. Pan, J. Sela, and G. H. White: Recent changes implemented into the global forecast system at NMC. Wea. Forecasting, 6, 425-435 (1991).
9. C. S. Konor and A. Arakawa: Design of an atmospheric model based on a generalized vertical coordinate. Mon. Wea. Rev., 125, 1649-1673 (1997).
10. H.-L. Pan and W.-S. Wu: Implementing a mass flux convection parameterization package for the NMC Medium-Range Forecast Model. NMC Office Note 409, 40 pp. [Available from NCEP/EMC, 5200 Auth Road, Camp Springs, MD 20746.] (1995).
11. W. Ohfuchi, T. Enomoto, K. Takaya, and M. K. Yoshioka: 10-km mesh global atmospheric simulation. Realizing Teracomputing, Proceedings of the Tenth ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 4-8 November 2002 (2003).
12. J.-F. Estrade, Y. Tremolet, and J. Sela: Experiments with the NCEP spectral model. Developments in Teracomputing, Proceedings of the Ninth ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 13-17 November 2000 (2001).
13. D. Salmond: Computational efficiency of the ECMWF forecast system. Proceedings of the Twelfth ECMWF Workshop on the Use of High Performance Computing in Meteorology, Reading, UK, 30 October-3 November 2006 (2007). In this volume.
MULTISCALE SIMULATOR FOR THE GEOENVIRONMENT: MSSG AND SIMULATIONS
KEIKO TAKAHASHI*
Earth Simulator Center, JAMSTEC, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
XINDONG PENG
Earth Simulator Center, JAMSTEC, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
RYO ONISHI
Earth Simulator Center, JAMSTEC, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
MITSURU OHDAIRA
Earth Simulator Center, JAMSTEC, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
KOJI GOTO
NEC Corporation, 1-10 Nisshin-cho, Fuchu-shi, Tokyo 183-5801, Japan
HIROMITSU FUCHIGAMI
NEC Informatec Systems LTD, 3-2-1 Sakato, Takatsu-ku, Kawasaki-shi, Kanagawa 213-0012, Japan
TAKESHI SUGIMURA
Earth Simulator Center, JAMSTEC, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan
*Corresponding author: Keiko Takahashi, 3173-25 Showa-machi, Kanazawa-ku, Yokohama 236-0001, Japan. E-mail: takahasi@jamstec.go.jp
MultiScale Simulator for the Geoenvironment (MSSG), a coupled non-hydrostatic atmosphere-ocean-land model, has been developed at the Earth Simulator Center. An outline of MSSG is introduced and its characteristics are presented. Computational performance analysis has been performed on the Earth Simulator. As a result of optimization, ultra-high performance was achieved with MSSG: its computational performance on the Earth Simulator attained 52-55% of the theoretical peak performance. In addition, results from preliminary validations, including forecasting experiments, are presented.
1. Introduction
Intense research effort is focused on understanding the climate/weather system using coupled atmosphere-ocean models. It is widely accepted that the most powerful tools available for assessing future weather/climate are fully coupled general circulation models. Not only the interactions between the atmosphere and ocean components, but also various other components have been coupled in various interactive ways and influence the earth system. To obtain further information on the perspectives of future weather/climate and the earth system, as much of the whole earth system as possible should be simulated using coupled models. The purpose of this paper is to introduce the MultiScale Simulator for the Geoenvironment (MSSG), which has been developed at the Earth Simulator Center. MSSG is a coupled ocean-atmosphere model which is mainly composed of non-hydrostatic atmosphere, ocean, sea-ice and land model components. MSSG has been developed to be run on the Earth Simulator with ultra-high resolution and is highly optimized for the Earth Simulator. MSSG can be adopted not only for global simulations but also for regional simulations with one-way or two-way nesting schemes. In this paper, its computational performance on the Earth Simulator and preliminary simulation results for 72-hour forecasting with MSSG are presented. The outline of the MSSG configuration is described in section 2. The computational performance of MSSG on the Earth Simulator and preliminary simulation results with MSSG are shown in sections 3 and 4, respectively. Future work is described in section 5.
2. MSSG Configuration
a. The atmospheric component: MSSG-A
The atmospheric component of MSSG, which we call MSSG-A, is a non-hydrostatic global/regional atmospheric circulation model. MSSG-A comprises the fully compressible flux-form dynamics of Satomura (2003) and Smagorinsky-Lilly type parameterizations by Lilly (1962) and Smagorinsky
(1965) for sub-grid scale mixing, surface fluxes by Zhang (1982) and Blackadar (1979), cloud microphysics with mixed phases by Reisner (1998), cumulus convective processes by Kain (1993) and Fritsch (1980), and a cloud-radiation scheme with longwave and shortwave interactions with explicit cloud and clear air. The set of prognostic equations is presented as follows:
\[
\frac{\partial \rho'}{\partial t}
+ \frac{1}{G^{1/2} a \cos\varphi}\,\frac{\partial (G^{1/2} G^{13} \rho u)}{\partial \lambda}
+ \frac{1}{G^{1/2} a \cos\varphi}\,\frac{\partial (G^{1/2} G^{23} \cos\varphi\,\rho v)}{\partial \varphi}
+ \frac{1}{G^{1/2}}\,\frac{\partial (\rho w^{*})}{\partial z} = 0 \qquad (1)
\]
\[
\frac{\partial (\rho u)}{\partial t}
= -\nabla\cdot(\rho u\,\mathbf{v}) + 2 f_{z}\rho v + \frac{\rho v u \tan\varphi}{a} - 2 f_{\varphi}\rho w
- \frac{1}{G^{1/2} a \cos\varphi}\,\frac{\partial (G^{1/2} G^{13} P')}{\partial \lambda} + F_{\lambda} \qquad (2)
\]
with analogous equations (3) and (4) for the other momentum components \(\rho v\) and \(\rho w\),
\[
\frac{\partial P'}{\partial t} + \nabla\cdot(P\mathbf{v}) + (\gamma-1)\,P\,\nabla\cdot\mathbf{v}
= (\gamma-1)\,\nabla\cdot(\kappa\nabla T) + (\gamma-1)\,\Phi \qquad (6)
\]
\[
P = \rho R T. \qquad (7)
\]
In equations (1)-(7), the prognostic variables are the momentum ρv = (ρu, ρv, ρw), ρ', which is calculated as ρ' = ρ - ρ̄, and P', defined by P' = P - P̄. ρ is the density; P is the pressure; P̄ is a constant reference pressure. f, μ, κ, and γ are the Coriolis parameter, the viscosity coefficient, the diffusion coefficient, and the ratio of specific heats, respectively. F represents the heat source and the viscosity
terms. G is the metric term for the vertical coordinate; φ is the latitude and λ is the longitude. The treatment of cloud and precipitation is controlled by selecting a parameterization scheme according to the horizontal resolution. For grid spacing greater than 10 km, the Kain and Fritsch scheme of Kain (1993) and Fritsch (1980) is used, and cloud microphysics based on the mixed-phase cloud microphysics of Reisner (1998) is used for grid spacing below 5 km. Over land, the ground temperature and ground moisture are computed using a bucket model as a simplified land model. A Rayleigh friction layer is used as the upper boundary condition.
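The resolution-dependent selection of the precipitation treatment quoted above amounts to a simple rule; the sketch below only restates that rule in code, and the behaviour assumed for the 5-10 km range is an illustration, not something stated in the text.

def precipitation_treatment(dx_km):
    """Select the cloud/precipitation treatment from the horizontal grid spacing."""
    if dx_km > 10.0:
        return "Kain-Fritsch cumulus parameterization"
    if dx_km < 5.0:
        return "Reisner mixed-phase cloud microphysics"
    return "grey zone: choice left to the experiment design"  # assumed behaviour

print(precipitation_treatment(2.26))  # e.g. a 2.26 km global run uses the microphysics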
b. The ocean component: MSSG-O
In the ocean component, which we call MSSG-O, the incompressible hydrostatic/non-hydrostatic equations with the Boussinesq approximation are introduced, based on the description in Marshall (1997a) and Marshall (1997b). Before starting experiments, the non-hydrostatic or hydrostatic configuration has to be selected according to the horizontal resolution. In this paper, we describe the formulation of the hydrostatic version. In addition, either an explicit free-surface solver or a rigid-lid solver is available as an option. The set of hydrostatic equations in the ocean component is as follows:
\[
\frac{\partial S}{\partial t} = -\mathbf{v}\cdot\nabla S + F_{S} \qquad (8)
\]
\[
\frac{\partial T}{\partial t} = -\mathbf{v}\cdot\nabla T + F_{T} \qquad (9)
\]
\[
\frac{\partial u}{\partial t} = -\mathbf{v}\cdot\nabla u + 2 f_{z} v + \frac{v u \tan\varphi}{r} - \frac{w u}{r} - 2 f_{\varphi} w
- \frac{1}{\rho_{0}\, r \cos\varphi}\,\frac{\partial p'}{\partial \lambda} + F_{\lambda} \qquad (11)
\]
with an analogous equation for the meridional momentum, and
\[
\frac{\partial w}{\partial t} = -\mathbf{v}\cdot\nabla w + 2 f_{\varphi} u - 2 f_{\lambda} v + \frac{u^{2}+v^{2}}{r}
- \frac{1}{\rho_{0}}\,\frac{\partial p'}{\partial r} - \frac{\rho'}{\rho_{0}}\,g + F_{r} \qquad (12)
\]
\[
\rho = \rho(T, S, p), \qquad (14)
\]
where the Boussinesq approximation is adopted in (9) and all variables are defined as above for the atmospheric component. For equation (14), the UNESCO scheme in Gill (1982) is used. A Smagorinsky-type scheme by Lilly (1962) and Smagorinsky (1965) is used for the sub-grid scale mixing in identical experiments in the ocean component. The level-2 turbulence closure of Mellor and Yamada in Mellor (1974) has also been introduced into the ocean component as an optional scheme. In the ocean component, sponge layers are used for the lateral boundary in the open ocean. The lateral boundary condition between ocean and land is defined as ∂T/∂t = ∂S/∂t = 0 and v = 0. The bottom condition is a Neumann condition without vertical velocity. The upper boundary conditions are given as momentum fluxes from the wind and heat fluxes from observational atmospheric data.
c. Differencing schemes and implementation
The Yin-Yang grid system presented in Kageyama (2004) is used for both the global atmosphere-land and ocean components. The Yin-Yang grid system, shown in Fig. 1, is characterized by two overlapping panels that cover the sphere. One component panel is defined as the part of the low-latitude region of the usual latitude-longitude grid system between 45N and 45S, spanning 270 degrees in longitude. The other component panel is defined in the same way but in a coordinate system rotated by 90 degrees. The region covered by the interface of the panels can be changed by rotating the axes of the panels. A conservation scheme was discussed in Peng (2006), and no side effects of an overlapped grid system such as the Yin-Yang grid were found in the validation results of various benchmark experiments in Komine (2005), Ohdaira (2004), Takahashi (2004a,b) and Takahashi (2005).
Fig. 1. Yin-Yang grid system which is composed of two panels on the sphere.
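Since the two panels are related by a fixed rotation, a point given in one panel's latitude-longitude coordinates can be mapped into the other panel. The sketch below uses the Cartesian relation (x', y', z') = (-x, z, y), a commonly used Yin-Yang convention (cf. Kageyama and Sato, 2004); the exact rotation adopted in MSSG is not given here, so this is illustrative only.

import numpy as np

def yin_to_yang(lat_deg, lon_deg):
    """Map a point given in Yin (lat, lon) to the Yang panel's (lat, lon).

    Assumes the Cartesian relation (x', y', z') = (-x, z, y) between the panels;
    angles are in degrees."""
    la, lo = np.radians(lat_deg), np.radians(lon_deg)
    x = np.cos(la) * np.cos(lo)
    y = np.cos(la) * np.sin(lo)
    z = np.sin(la)
    xp, yp, zp = -x, z, y                       # rotation linking the two panels
    return np.degrees(np.arcsin(zp)), np.degrees(np.arctan2(yp, xp))

# Applying the mapping twice returns the original point (the relation is its own inverse).
print(yin_to_yang(*yin_to_yang(30.0, 120.0)))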
In both the atmospheric and ocean components, the Arakawa C grid is used. The atmospheric component utilizes the terrain-following vertical coordinate with a Lorenz-type variable distribution as in Gal-Chen (1975). The ocean component uses the z-coordinate system in the vertical direction. For the time discretization, the 2nd-, 3rd- and 4th-order Runge-Kutta schemes and the leapfrog scheme with a Robert-Asselin time filter are available. The 3rd-order Runge-Kutta scheme presented in Wicker (2002) is adopted for the atmospheric component, and the leapfrog scheme with the Robert-Asselin time filter is used for the ocean component. For momentum and tracer advection computations, several discretization schemes presented in Peng (2004) are available. In this study, the 5th-order upwind scheme is used for the atmosphere and central differences are used in the ocean component. Vertically propagating sound waves in the atmosphere impose the dominant time-step restriction, because the vertical discretization tends to be finer than the horizontal discretization. For this reason, the horizontally explicit, vertically implicit (HEVI) scheme of Durran (1991) is adopted in the atmospheric component. Since the speed of sound in the ocean is three times that in the atmosphere, an implicit method is introduced, in which a Poisson equation is solved. The Poisson equation is written as \(\nabla\cdot\nabla P' = B\), which is solved under a Neumann boundary condition on \(\mathbf{n}\cdot\nabla P'\).
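For reference, the leapfrog scheme with a Robert-Asselin time filter used for the ocean component can be sketched in its generic textbook form; the filter coefficient below is an assumed value, and the code is not taken from the MSSG source.

def leapfrog_ra(f, x0, dt, nsteps, nu=0.05):
    """Integrate dx/dt = f(x) with leapfrog and a Robert-Asselin filter (coefficient nu)."""
    xm = x0                    # filtered value at time level n-1
    xn = x0 + dt * f(x0)       # forward Euler start for level n
    for _ in range(nsteps):
        xp = xm + 2.0 * dt * f(xn)                    # leapfrog step to level n+1
        xn_filtered = xn + nu * (xm - 2.0 * xn + xp)  # Robert-Asselin filter at level n
        xm, xn = xn_filtered, xp                      # shift the time levels
    return xn

# Linear oscillator dx/dt = i*x: the filter weakly damps the computational mode.
print(abs(leapfrog_ra(lambda x: 1j * x, 1.0 + 0j, 0.1, 200)))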
The Algebraic Multi-Grid (AMG) method of Stuben (1999) is used to solve the Poisson equation. AMG is well known as an optimal solution method. We used an AMG library based on aggregation-type AMG in Davies (1976), which has been developed by Fuji Research Institute Corporation.
d. Coupling between the atmosphere and ocean components
The coupling interface between the atmosphere and ocean components, MSSG-A and MSSG-O respectively, should be designed to maintain a self-consistent representation in the coupled model. Generally, the time step of the ocean component is set longer than that of the atmospheric component. The heat, moisture and momentum fluxes are computed and averaged over the larger time step. The averaged fluxes are used as the upper boundary condition of MSSG-O. Precipitation computed in MSSG-A is transferred to the ocean as a fresh-water source term. The sea surface temperature is defined in the uppermost layer of the ocean component and is transferred to the atmospheric component as one of the heat sources. The SST is kept fixed in the atmosphere during all time steps within any large step.
e. MSSG as a Multi-Scale Coupled Model with nesting schemes
MSSG is available for the hierarchy of the broad range of space and time scales of weather/climate phenomena, as follows:
Global non-hydrostatic atmospheric circulation model: Global MSSG-A;
Regional non-hydrostatic atmospheric model: Regional MSSG-A;
Global non-hydrostatic/hydrostatic ocean model: Global MSSG-O;
Regional non-hydrostatic/hydrostatic ocean model: Regional MSSG-O;
Coupled Global MSSG-A / Global MSSG-O: MSSG;
Coupled Regional MSSG-A / Regional MSSG-O: Regional MSSG;
and MSSG with Regional MSSG using nesting schemes.
The regional versions of MSSG-A, MSSG-O and MSSG are used with one-way or two-way nesting schemes. Any region of the globe can be selected for the regional versions, because both the Coriolis and metric terms are retained in the regional formulation.
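The coupling cycle described above, with a longer ocean step, fluxes accumulated and averaged over the intervening atmospheric steps, and SST held fixed between ocean updates, can be sketched as the driver loop below. The component classes and coefficients are hypothetical placeholders, not the MSSG interfaces.

import numpy as np

class ToyAtmosphere:
    """Placeholder atmosphere: one step returns a surface flux given the current SST."""
    def step(self, sst):
        return 0.1 * (sst - 288.0) + np.random.normal(0.0, 0.5)

class ToyOcean:
    """Placeholder ocean: its SST responds to the averaged surface flux."""
    def __init__(self, sst=300.0):
        self.sst = sst
    def surface_temperature(self):
        return self.sst
    def step(self, mean_flux):
        self.sst -= 0.01 * mean_flux

def run_coupled(atm, ocn, n_ocean_steps, substeps=10):
    # Fluxes are accumulated over the small atmospheric steps, averaged, and only then
    # passed to the ocean; the SST seen by the atmosphere is fixed between ocean updates.
    sst = ocn.surface_temperature()
    for _ in range(n_ocean_steps):
        total_flux = 0.0
        for _ in range(substeps):
            total_flux += atm.step(sst)
        ocn.step(total_flux / substeps)
        sst = ocn.surface_temperature()
    return sst

print(run_coupled(ToyAtmosphere(), ToyOcean(), n_ocean_steps=100))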
3. Computational Performance of MSSG on the Earth Simulator
a. Distribution architecture and communications
The architecture and data structures are based on domain decomposition methods. In the Yin-Yang grid system, a communication-cost imbalance might occur if a one-dimensional decomposition is adopted. The case of a decomposition over 16 processes is considered in Fig. 2. Each color corresponds to one process, and the number of arrows linking differently colored areas corresponds to the amount of communication between processes. For example, in Fig. 2(a), for the one-dimensional domain decomposition, the black colored process labelled A has to communicate with 8 differently colored processes. In Fig. 2(b), for the two-dimensional decomposition, the black colored process labelled A communicates with two processes. In Fig. 2(a) the communication data size is small and, in addition, the number of communications is increased. When the same number of processes is used in both (a) and (b), it is clear that less communication occurs in Fig. 2(b). The two-dimensional decomposition was adopted for both the atmosphere-land and ocean components for these reasons.
Fig. 2. Schematic features of domain decomposition on the Yin-Yang grid system: (a) one-dimensional domain decomposition; (b) two-dimensional domain decomposition.
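The communication argument can be made quantitative with a small counting exercise. The sketch below estimates the per-process halo-exchange traffic for strip (one-dimensional) and block (two-dimensional) decompositions of a single panel; it deliberately ignores the extra inter-panel exchanges in the Yin-Yang overlap that Fig. 2 counts, so it only illustrates the generic advantage of squarer subdomains.

def halo_traffic(nx, ny, px, py, halo=1):
    """Neighbour count and points exchanged per step for an interior process of a
    px x py process grid covering an nx x ny panel (corner exchanges ignored)."""
    sub_x, sub_y = nx // px, ny // py
    neighbours = (2 if px > 1 else 0) + (2 if py > 1 else 0)
    points = halo * ((2 * sub_y if px > 1 else 0) + (2 * sub_x if py > 1 else 0))
    return neighbours, points

nx = ny = 2048
print("1-D strips (16 x 1):", halo_traffic(nx, ny, 16, 1))   # few partners, long edges
print("2-D blocks  (4 x 4):", halo_traffic(nx, ny, 4, 4))    # shorter total boundary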
b. Inter-/intra-node parallel architectures and vector processing
Since effective parallelization and vectorization contribute to achieving high performance, three levels of parallelism should be utilized in order to pursue high computational performance: inter-node parallel processing for the distributed memory architecture, intra-node parallel processing for the shared memory architecture, and vector processing on a single processor. Two-dimensional domain decomposition was adopted to achieve high-performance communication. MPI-based parallelism is used for inter-node communication among the decomposed domains.
Fig. 3. Performance of the double and single DO loop structures with different horizontal resolutions. (a) and (b) show Mflops and elapsed time with increasing resolution.
Microtasking of intra-node operations for the shared memory architecture contributes significantly to high performance when a long vector length is selected for the parallelism at the DO loop level. In order to share the computational load equally among the 8 threads of microtasking, a simple way is to map each microtask onto a processor and to parallelize over vertical layers and latitudinal grid points with the microtasking architecture. To obtain high computational performance it is therefore necessary that the numbers of vertical layers and latitudinal grid points be multiples of 8. In the cases in this paper, 32 and 40 vertical layers have been selected for the atmosphere and the ocean components, respectively. When the two-dimensional domain decomposition is used for inter-node parallelization, the vector length and the length of the DO loops should be considered so that the vector units are fully utilized. In this paper, two approaches to the DO loops are considered in order to keep the length of the DO loops. The first approach is to use both the latitude and longitude directions as axes of the DO loops; when this approach is chosen, a double looping structure is adopted. The second approach is to use a single DO loop structure obtained by combining the longitude and latitude looping axes. Fig. 3 shows preliminary results from computations of the dynamical core with the double and single looping structures. When the single looping structure is adopted, the array structures have to be organized so that grid points in the overlapped regions of the Yin-Yang grid system can be accessed. In Fig. 3, 1loop-list, 1loop-nolist and 1loop-metric denote implementations with a list structure,
without a list structure, and with a list structure except for the metric terms, respectively. 2loops shows the cost performance of the double DO looping structure. The single DO looping structure without list structures for accessing grid points shows the best performance, as shown in Fig. 3. However, as the length of the DO loop increases, the discrepancy between the double and single DO looping structures becomes small. Loop lengths over 400, corresponding to global resolutions higher than 25 km, are required in our simulations. Therefore, we adopted the double DO looping structure, because essentially the same level of performance as the single DO looping structure is expected, and a simpler coding style can be used with the double DO looping structure.
c. Performance and scalability
Since the developed coupled ocean-atmosphere model can be run under various conditions, several cases were selected in order to measure computational performance, as follows.
CASE1: Regional MSSG with 1.5 km horizontal resolution and 72 vertical layers for the Japan region.
CASE2: Global MSSG-A with 2.26 km horizontal resolution and 32 vertical layers.
CASE3: Regional MSSG-O with 1.4 km horizontal resolution and 40 vertical layers for the North Pacific basin and the region between the equator and 30°S.
In CASEs 1 and 2, cloud microphysics is used for handling both non-convective and convective clouds and precipitation in the atmospheric component, including in the coupled model; the atmospheric component of the developed coupled model performs as a cloud-resolving model. In CASE1, the horizontal resolution was decided on the grounds that non-hydrostatic physical phenomena in the atmospheric component, such as rain bands, are well represented with a horizontal resolution of less than 5 km. In CASE2, the horizontal resolution and the number of vertical layers were decided by the restriction of the total memory size of the Earth Simulator available for a global simulation with the stand-alone atmospheric model. The Japan region used in CASE1 is defined with reference to the hazard region used by the Japan Meteorological Agency for 72-hour typhoon-track forecasting.
Table 1. Computational performance on the Earth Simulator.
CASE identifies the name of the above cases; TPN is the total number of nodes; TAP is the total number of arithmetic processors; grid pts is the number of grid points in each CASE; Mflops/AP is the corresponding megaflops per arithmetic processor; Vector length is the averaged length of vector processing; V.OP ratio is the vector operation ratio; Tflops is the total teraflops sustained over the duration, exclusive of I/O; Peak ratio is the percentage of the total teraflops relative to the theoretical peak performance; Parallelization ratio, Parallel efficiency and Speedup are measured from the degradation in elapsed time relative to a single arithmetic processor.
CASE  TPN  TAP   grid pts        Mflops/AP  Vector length  V.OP ratio  Tflops  Peak ratio
1     512  4096  -               4166.7     229            99.3%       17.07   52.1%
1     384  3072  -               4273.8     229            99.3%       -       -
1     256  2048  -               4401.9     229            99.3%       9.02    55.0%
2     512  4096  3,662,807,040   -          228            99.5%       18.74   57.2%
2     384  3072  3,662,807,040   -          228            99.5%       -       -
2     256  2048  3,662,807,040   -          228            99.5%       9.61    58.7%
3     498  -     2,713,190,400   4606.1     -              96.7%       -       -
In CASE 3, the region used for the stand-alone oceanic component simulations is defined as an area large enough not to be influenced by the boundary conditions, with a view to a regional coupled ocean-atmosphere-land simulation such as CASE1. The horizontal resolution in CASE3 was chosen as the maximum resolution for representing hydrostatic phenomena in the ocean while matching the horizontal resolution of CASE1. Earth Simulator users can use the performance analysis tool FTRACE (Flow TRACE), which is a built-in counter of the Earth Simulator. Using FTRACE, we can obtain data such as the number of floating-point operations and vector instructions, clock counts, the averaged vector loop length and the delay time due to out-of-cache operations. We used this tool to measure the computational performance of each CASE. In particular, the flops values of all CASEs are determined on the basis of the performance information output for each MPI process. Each flops value is derived as the total number of floating-point operations over all the processors divided by the maximum elapsed time. The computational performance for all CASEs is shown in Table 1.
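The flops derivation described above can be written down directly: sum the per-process floating-point operation counts and divide by the maximum elapsed time, then compare against the machine peak. The numbers below are hypothetical FTRACE-style values chosen only for illustration; the 8 Gflops per arithmetic processor is the assumed Earth Simulator peak.

import random

def sustained_performance(flops_per_rank, elapsed_per_rank, n_ap, peak_per_ap_gflops=8.0):
    """Sustained Tflops and peak ratio: total flops over all MPI ranks divided by the
    maximum elapsed time, compared with n_ap x peak_per_ap_gflops."""
    sustained_tflops = sum(flops_per_rank) / max(elapsed_per_rank) / 1.0e12
    peak_tflops = n_ap * peak_per_ap_gflops / 1.0e3
    return sustained_tflops, 100.0 * sustained_tflops / peak_tflops

# Hypothetical per-rank counters for a 512-node (4096-AP) run.
random.seed(0)
flops = [4.0e15 * random.uniform(0.98, 1.02) for _ in range(4096)]
times = [960.0 * random.uniform(0.995, 1.005) for _ in range(4096)]
print(sustained_performance(flops, times, n_ap=4096))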
Table 1 shows, for each CASE, various processor configurations ranging from 256 nodes with 8 intra-node processors to 512 nodes with 8 processors. The coupled non-hydrostatic atmosphere-ocean-land model has achieved an excellent overall sustained performance of 17.07 Tflops, which is 52.1% of the theoretical peak performance on 512 nodes of the Earth Simulator. Both the stand-alone non-hydrostatic atmosphere and ocean components have also attained good performance on the Earth Simulator. In particular, the global simulation with the stand-alone non-hydrostatic atmosphere component shows a sustained performance of 18.74 Tflops, which is 57.2% of the theoretical peak performance on 512 nodes. This shows that good sustained performance is obtained not only for the coupled model but also for both stand-alone atmosphere and ocean components over a wide range of system configurations of the Earth Simulator.
4. Simulation Results with MSSG
a. Simulation results with MSSG-A
A global simulation has been performed to validate the physical performance under the conditions of 5.5 km horizontal resolution and 32 vertical layers. A 72-hour integration was executed with the atmospheric component. The initial data were interpolated at 00UTC08Aug2003 from Grid Point Value (GPV) data provided by the Japan Meteorological Business Support Center. Sea surface data were also derived from GPV data at 00UTC08Aug2003 and kept fixed during the simulation. The global precipitation distribution is presented in Fig. 4, which shows the precipitation averaged every nine hours before 00UTC11Aug2003. The unit is mm per hour. The precipitation distribution is produced by the cloud microphysics and is comparable to observational data. A regional validation has been performed with one-way nesting from the 5.5 km global simulation. The horizontal resolution and vertical layers were set to the same conditions as in CASE 3. The initial data were interpolated from GPV data at 00UTC08Aug2003 provided by the Japan Meteorological Business Support Center, and the boundary condition was obtained by interpolation from the above simulation with 5.5 km horizontal resolution. The sea surface temperature was also fixed to the data at 00UTC08Aug2003 during the simulation. A 72-hour integration has been performed. Fig. 5 shows the result after 72 hours of integration. Meso-γ scale disturbances such as rain bands may be captured in the simulations.
Fig. 4. Global precipitation distribution (mm/h) every six hours obtained in the validation experiments: (A)-(H).
Fig. 5. Regional validation results with the atmospheric component. The colored distribution shows the precipitation (mm/hour).
b. Simulation results with regional MSSG-O
In order to validate the stand-alone oceanic component, MSSG-O, a 15-year integration with 11 km horizontal resolution and 40 vertical layers has been executed for the North Pacific basin and the region between the equator and 30°S in CASE4. Surface heat fluxes and boundary data are computed from climatological data provided by the World Ocean Atlas (WOA). Momentum fluxes are obtained by interpolation from climatological data from NCAR. Fig. 6 shows the
Fig. 6. Snapshot results from regional simulations with the ocean component after 15 years of integration. (a): sea surface temperature (°C) at 15 m depth from the surface. (b): distribution of the absolute value of horizontal velocity (m/sec) at 105 m depth. The color contour in (b) follows the color bar at http://ecco.jpl.nasa.gov/cube-shere
snapshot in April after 15 years of integration. Fig. 6(a) shows the temperature distribution at 15 m depth from the surface, which corresponds to the second layer from the surface. Fig. 6(b) shows the distribution of the absolute value of the horizontal velocity at 105 m depth. Eddy-resolving distributions are recognized in both Fig. 6(a) and (b).
c. Simulation validation of regional MSSG
MSSG was tested with 120-hour forecasting experiments for the track of typhoon ETAU during the period from 15UTC06Aug2003 to 15UTC11Aug2003. The 120-hour forecast was performed using boundary conditions from the forecast with global MSSG-A. In the region bounded by 19.582°N - 49.443°N and 123.082°E - 152.943°E, MSSG-A and MSSG-O are coupled at 2.78 km horizontal resolution. For the vertical, 32 layers and 44 layers were used for the atmosphere and ocean components, respectively. The initial atmospheric data were again given by GPV data interpolated at 15UTC06Aug2003. For the ocean component, a further 24-hour integration was performed. After the global atmospheric simulations with 5.5 km horizontal resolution and 32 vertical layers, the regional boundary data for the Japan region were obtained from the GPV data. In the oceanic component, the initial data at the forecast start date of 15UTC06Aug2003 were obtained from a 10-day spin-up integration from 27th July 2003 based on the July climatology of the previous 15-year integration. During the 10-day spin-up integration, the surface boundary data were given by 6-hourly data from NCAR. Outside of the focused Japan region, a global atmospheric simulation with 5.5 km horizontal resolution was performed and its results were used as the lateral boundary condition of the atmospheric component in the Japan region. The lateral condition of the ocean component in the Japan region was given by the results of the climatological data from the previous 15-year integration. The coupling was done without any flux correction. Fig. 7 (A)-(F) shows the time series of 1-hour averaged forecasts with the coupled model in the Japan region. The blue gradation shows the precipitation distribution. Fine structure like a rain band is represented in (A). The distribution structure showed drastic changes as the typhoon hit Japan and passed over it. In the ocean, the SST response to the typhoon is simulated in (B)-(F). Oscillations due to the disturbance by the typhoon are recognized not only in the SST but also in the vertical velocity and the Kuroshio. Detailed analysis of the eye-core structure and the ocean responses is still going on.
Fig. 7. Precipitation distribution (mm/h), wind velocity (black arrows) and SST distribution while typhoon ETAU attacked the Japan region. The left-hand color bar shows the precipitation amount and the right-hand color bar presents the SST.
Fig. 8. Results from the track forecast of typhoon ETAU: the best track announced by the Japan Meteorological Agency (black line) and the result of the 120-hour forecasting simulation (red line). Only part of the simulation results is shown, in the limited region above.
The non-hydrostatic atmosphere-land component was coupled to the ocean component with 2.78 km horizontal resolution and 76 vertical layers in order to produce a track forecast of typhoon ETAU. These experiments were performed for 120 hours (5 days) without flux correction. The results are presented in Fig. 8. The observational track shown in Fig. 8 is released to the public as the 'best track' by the Japan Meteorological Agency. The results of the track forecasts of ETAU are comparable with other forecasting results.
5. Conclusion and Near Future Work
The development of MSSG, a coupled non-hydrostatic atmosphere-ocean general circulation model, has been successfully completed, with a high sustained performance of 17.07 Tflops, corresponding to 52.1% of the peak performance, with full utilization of 512 nodes of the Earth Simulator. Each component of MSSG showed high sustained performance as well as MSSG itself. These results encourage us to start various simulations with the coupled non-hydrostatic atmosphere-ocean-land general circulation model. As challenging issues, more varied forecasting experiments are going to be performed and longer integrations will be executed in order to estimate the accuracy of the forecasts. In addition, preliminary results of high-resolution simulations have shown some impact. In particular, it was clear that the precipitation process is sensitive to high resolution, as previous studies have pointed out. In the coupled model simulations, fine structure and detailed processes were represented in both the atmosphere and the ocean; further detailed analysis is required. Various forecast simulations, such as heavy rain and other typhoon cases, and longer integrations with the coupled model will be performed in the near future. These challenges may be one of the ways to understand the mechanisms of weather and of longer multi-scale meteorological phenomena.
Acknowledgments This study has been performed as a part of Collaboration Project 2006 in Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology (JAMSTEC). This work is also partially supported by CREST project of Japan Science and Technology Agency (JST).
References
Blackadar, A. K., 1979: High resolution models of the planetary boundary layer. Advances in Environmental Science and Engineering, 1, Pfafflin and Ziegler, Eds., Gordon and Breach Publ. Group, Newark, 50-85.
Davies, H. C., 1976: A lateral boundary formulation for multi-level prediction models. Quart. J. R. Met. Soc., 102, 405-418.
Durran, D., 1991: Numerical Methods for Wave Equations in Geophysical Fluid Dynamics, Springer.
Fritsch, J. M., and Chappell, C. F., 1980: Numerical prediction of convectively driven mesoscale pressure systems, Part I: Convective parameterization. J. Atmos. Sci., 37, 1722-1733.
Gal-Chen, T. and Somerville, R. C. J., 1975: On the use of a coordinate transformation for the solution of the Navier-Stokes equations. Journal of Computational Physics, 17, 209-228.
Gill, A., 1982: Atmosphere-Ocean Dynamics, Academic Press Inc.
Kageyama, A. and Sato, T., 2004: The Yin-Yang Grid: An overset grid in spherical geometry. Geochem. Geophys. Geosyst., 5, Q09005, doi:10.1029/2004GC000734.
Kain, J. S. and Fritsch, J. M., 1993: Convective parameterization for mesoscale models: The Kain-Fritsch scheme. The Representation of Cumulus Convection in Numerical Models of the Atmosphere, Meteor. Monogr., 46, Amer. Meteor. Soc., 165-170.
Komine, K., Takahashi, K., et al., 2004: Development of a global non-hydrostatic simulation code using the Yin-Yang grid system. Proc. The 2004 Workshop on the Solution of Partial Differential Equations on the Sphere, 67-69, http://www.jamstec.go.jp/frcgc/eng/workshop/pde2004/pde2004-2/Poster/54 poster-ohdaira.zip.
Lilly, D. K., 1962: On the numerical simulation of buoyant convection. Tellus, 14, 148-172.
Marshall, J., Hill, C., Perelman, L. and Adcroft, A., 1997a: Hydrostatic, quasi-hydrostatic, and nonhydrostatic ocean modeling. Journal of Geophysical Research, 102, 5733-5752.
Marshall, J., Adcroft, A., Hill, C., Perelman, L. and Heisey, C., 1997b: A finite-volume, incompressible Navier-Stokes model for studies of the ocean on parallel computers. Journal of Geophysical Research, 102, 5753-5766.
Mellor, G. L. and Yamada, T., 1974: A hierarchy of turbulence closure models for planetary boundary layers. Journal of Atmospheric Sciences, 31, 1791-1806.
Ohdaira, M., Takahashi, K. et al., 2004: Validation for the solution of shallow water equations in spherical geometry with an overset grid system. Proc. The 2004 Workshop on the Solution of Partial Differential Equations on the Sphere, http://www.jamstec.go.jp/frcgc/eng/workshop/pde2004/pde2004-2/Poster/56 poster-ohdaira.zip.
Peng, X., Xiao, F., Takahashi, K. and Yabe, T., 2004: CIP transport in meteorological models. JSME International Journal (Series B), 47(4), 725-734.
Peng, X., Takahashi, K., Xiao, F., 2006: Conservative constraint for a quasi-uniform overset grid on the sphere. Quarterly Journal.
Reisner, J., Rasmussen, R. J., and Bruintjes, R. T., 1998: Explicit forecasting of supercooled liquid water in winter storms using the MM5 mesoscale model. Quart. J. Roy. Meteor. Soc.
Satomura, T. and Akiba, S., 2003: Development of a high-precision non-hydrostatic atmospheric model (1): Governing equations. Annuals of Disas. Prev. Res. Inst., Kyoto Univ., 46B, 331-336.
Smagorinsky, J., Manabe, S. and Holloway, J. L. Jr., 1965: Numerical results from a nine level general circulation model of the atmosphere. Monthly Weather Review, 93, 727-768.
Stuben, K., 1999: A Review of Algebraic Multigrid, GMD Report 96.
Takahashi, K. et al., 2004a: Proc. 7th International Conference on High Performance Computing and Grid in Asia Pacific Region, 487-495.
Takahashi, K., et al., 2004b: "Non-hydrostatic atmospheric GCM development and its computational performance", http://www.ecmwf.int/newsevents/meetings/workshops/2004/high-performance-computing-11th/presentations.html
Takahashi, K., et al., 2005: "Non-hydrostatic atmospheric GCM development and its computational performance", Use of High Performance Computing in Meteorology, Walter Zwieflhofer and George Mozdzynski, Eds., World Scientific, 50-62.
HPC ACTIVITIES IN THE EARTH SYSTEM RESEARCH LABORATORY
M. GOVETT, J. MIDDLECOFF, D. SCHAFFER and J. SMITH
National Oceanic and Atmospheric Administration, Earth System Research Laboratory, Global Systems Division, Boulder, Colorado 80305, U.S.A.
ESRL's Advanced Computing Section mission is to enable new advancements in atmospheric and oceanic sciences by making modern high performance computers easier for researchers to use. Active areas of research include (1) the development of software to simplify programming, portability and performance of atmospheric and oceanic models that run on distributed or shared memory environments, (2) the development of software that explores and utilizes grid computing, and (3) collaboration with researchers in the continued development of next generation weather forecast models for use in scientific studies or operational environments. This paper will describe two activities our group is engaged in: the integration of parallel debugging capabilities into the Weather Research and Forecasting Model (WRF), and the development of a modeling portal called WRF Portal.
1 Introduction
As NOAA's technology transfer laboratory, ESRL and its predecessor, the Forecast Systems Laboratory (FSL), have long recognized the importance of High Performance Computer (HPC) technologies. The laboratory purchased and used a 208-node Intel Paragon in 1992 for producing weather forecasts in real time using the 60-km version of the Rapid Update Cycle (RUC) model [3]. This was the first demonstration of operational forecasts produced in real time using a distributed memory HPC. Since then, ESRL has continued to purchase and utilize HPC resources to support its modeling activities at NOAA. ESRL recently purchased a 1424-processor machine (64-bit Woodcrest chip), which complements two other HPC systems at the facility: a 1500-processor system (32-bit Xeon chip) and a 600-processor system (64-bit Xeon chip). The ESRL computing facility is available to research groups and projects across NOAA; it supports over 200 projects within ESRL and NOAA and their collaborators (http://hpcs.fsl.noaa.gov/cgi-bin/applications/). A diverse mix of applications is run at the ESRL facility, requiring from 1 to over 500 processors and resulting in over 10,000 batch jobs being processed per day.
E-mail address: [email protected]
Cooperative Institute for Research in the Atmosphere, Colorado State University, Fort Collins, CO 80523, USA
The Advanced Computing Section (ACS) was formed to support users of the computing facility. The group initially developed software, called the Scalable Modeling System (SMS), to help parallelize shared memory Fortran (vector) codes for use on the laboratory's distributed memory HPC. The ACS continues to provide traditional HPC services including code parallelization and model development, but also does exploratory development in other HPC-related areas such as grid computing [2] and web services. This paper will focus on two areas of development within the ACS. Section 2 will discuss the integration of debugging capabilities from ESRL's Scalable Modeling System (SMS) into the Weather Research and Forecast (WRF) Model. Section 3 will describe the development of WRF Portal, a Java-based portal designed to support model development activities at ESRL.
2 Run-time Parallel Debugging Capabilities for WRF
The ACS has been working to link the powerful debugging capabilities provided by SMS into WRF. This effort will make it easier to find parallel bugs and will help modelers maintain and develop the WRF code.
2.1 Background
Finding run-time bugs during the initial code parallelization or the ensuing code maintenance phase can be the most difficult and time-consuming task in running codes on an HPC system. Modelers with a scientific background typically maintain and develop the code used in the parallel models. They have experience and knowledge in understanding the scientific basis of their work but are not typically trained in parallel programming, nor do they want to spend their time trying to find parallel bugs in their code. Typically, to find parallel bugs, users must first determine where the code is failing, and then begin writing output statements in their code to trace the problem back to its source. Even the most adept parallel programmer can spend days or weeks finding a single parallel bug. Providing a way to easily test changes for parallel bugs will ensure the long-term viability and stability of development and of future code releases. Commercial parallel debugging tools are available but are not often used on distributed memory HPCs because (1) they are not sufficiently mature and bug free, (2) they require the user to learn and understand complex software necessary to display and diagnose run-time bugs, and (3) they are unwieldy in situations where lots of CPUs are being used and the problem may be hard to trace using a step-by-step process. Interactive debuggers can also be time consuming to use and may fail in large codes where high memory usage is required. As a result, debuggers are mostly used by computer specialists, not the
users or developers of the code. Thus modelers usually rely on print statements, and the tedious, time-consuming tracing of bugs that is done by repeatedly running the code and analyzing the results.
2.2 WRF and SMS
WRF is a next-generation mesoscale numerical weather prediction system designed to serve both operational forecasting and atmospheric research needs. It features multiple dynamical cores, a 3-dimensional variational (3DVAR) data assimilation system, and a software architecture that allows for computational parallelism and system extensibility. WRF is suitable for a broad spectrum of applications across scales ranging from meters to thousands of kilometers. Developing WRF has been a collaborative effort, principally among the National Center for Atmospheric Research (NCAR), NOAA (the National Centers for Environmental Prediction (NCEP) and the Earth System Research Laboratory (ESRL)), the Air Force Weather Agency (AFWA), the Naval Research Laboratory, Oklahoma University, and the Federal Aviation Administration (FAA). Two versions of WRF are being used operationally at NCEP (ARW and NMM), but there are many other variants in use or under development, including atmospheric chemistry applications (WRF-CHEM), hurricane WRF, coupled ocean-atmosphere WRF, and WRF as part of the NCEP Ensemble Modeling System. In order to support these development efforts and changing operational and research requirements, WRF is constantly being modified. SMS, developed by ESRL, is a modeling framework that has been used to parallelize many ocean and atmospheric models over the last ten years, including the Eta and RUC models running operationally at the National Centers for Environmental Prediction (NCEP), and the HYCOM and ROMS ocean models [1]. SMS is composed of both a parallelizing compiler and a run-time library that supports operations including data decomposition, communication, I/O operations, global reductions, and halo exchanges.
To parallelize a code using SMS, the programmer inserts directives, which appear as comments, directly into the code. The Parallel PreProcessor (PPP), a component of SMS, translates the directives and serial code into a parallel version. Since the programmer adds only comments to the code, there is no impact to the original version; it can run without modification. Figure 1 illustrates this process.
Figure 1: An illustration of the process used to parallelize Fortran codes. SMS directives are inserted into the code appearing as Fortran comments. These directives are then translated into parallel code by the SMS code translator and then compiled on the target parallel machine.
2.3 SMS Debugging Directives
SMS supports parallelization through the use of 15 directives; two of these directives are used to support parallel debugging [1]. As illustrated in Figure 2, the COMPARE-VAR directive is used to verify that interior region data points are correct, and CHECK-HALO is used similarly for the halo points. Interior points contain data that is local to each processor; halo regions are used to store interior points that are "owned" by the neighboring processor. Inter-process communication, or message passing, updates these "halo" points before the local process needs them to perform calculations. Figure 3 illustrates how the CHECK-HALO directive works: halo region values from each user-specified array are compared with their corresponding interior points on the neighboring process. When data values differ, SMS outputs an error message containing the array name and the location where the problem occurred, and then terminates execution.
Figure 2: An illustration of two debugging directives that are available to verify correctness of decomposed array values. Scalars and non-decomposed arrays can also be compared. These directives have greatly simplified debugging and parallel code development.
Figure 3: This SMS directive is used to verify that each processor’s halo region is up to date. In this example, process P2 compares data one step into its left halo (global index 3) with the corresponding interior points on process P1. Similarly, the right halo region points (global index 7) are compared to the interior points on P3. Similar comparisons are made on processors P1 and P3 where appropriate.
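The idea behind the halo check can be sketched without SMS: after a halo exchange, each ghost column must equal the corresponding interior column of the neighbouring subdomain, and a comparison reports the first place where this fails. The following is a conceptual illustration on two subdomains of a small array, not the SMS implementation.

import numpy as np

def split_with_halo(field, halo=1):
    """Split a 2-D field into left/right subdomains, each padded with ghost columns."""
    n = field.shape[1] // 2
    left = np.hstack([field[:, :n], field[:, n:n + halo]])        # interior + right ghost
    right = np.hstack([field[:, n - halo:n], field[:, n:]])       # left ghost + interior
    return left, right

def check_halo(left, right, halo=1):
    """Verify that each ghost column equals the neighbour's interior column and report
    the first mismatch, which is conceptually what the halo-check directive does."""
    bad = np.argwhere(left[:, -halo:] != right[:, halo:2 * halo])
    if bad.size:
        return False, ("left ghost", tuple(int(k) for k in bad[0]))
    bad = np.argwhere(right[:, :halo] != left[:, -2 * halo:-halo])
    if bad.size:
        return False, ("right ghost", tuple(int(k) for k in bad[0]))
    return True, None

field = np.arange(24.0).reshape(4, 6)
left, right = split_with_halo(field)
print(check_halo(left, right))     # halos are consistent after the "exchange"
left[2, -1] = -999.0               # simulate a missed halo update
print(check_halo(left, right))     # the stale left ghost value is detected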
The COMPARE-VAR directive provides the ability to compare array or scalar values between a correctly working code and another run that uses a different number of processors. For example, the programmer can specify a comparison
of the array "x", for a single processor run and for a multiple processor run by inserting the directive:
csms$compare-var ( x )
in the code and then entering appropriate command line arguments to request concurrent execution of the code. Wherever COMPARE-VAR directives appear in the code, user-specified arrays will be compared as shown in Figure 4. If differences are detected, SMS will display the name of the variable, the array location (e.g., the i, j, k index) and values from each run, the location in the code, and then terminate execution. Conversely, if no differences are found, SMS will continue executing the code.
Figure 4: An illustration of how COMPARE-VAR is used. In this example, two executables are launched concurrently from the command line. When a COMPARE-VAR directive is encountered, the executables synchronize, and then compare the specified arrays. If any elements of the arrays differ, SMS will print the location and values of the data point and then terminate the execution of the runs.
The ability to compare intermediate model values anywhere in the code has proven to be a powerful debugging tool during code parallelization. For example, the time required to debug a recent code parallelization was reduced from an estimated eight weeks down to two simply because the programmer did not have to spend inordinate amounts of time determining where parallelization mistakes were made. SMS debugging directives have also proven to be a useful way to ensure that model upgrades continue to produce the correct results. For example, a scientist can verify source code changes by simply comparing the output files of the
serial “control” run and subsequent parallel runs. In the event results differ, they can turn on SMS debugging (a run-time option) to compare the intermediate results of the arrays specified by COMPARE-VAR. In the event differences are found, the user can quickly locate the problem and determine the best solution. In this way, SMS users have found the debugging directives to be very useful because they allow the code author to control the maintenance and upgrades of their parallel codes rather than requiring the help of a computer specialist.
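The same idea extends to comparing a trusted single-process run against a decomposed run: reassemble the decomposed pieces into a global array, compare it element by element against the reference, and report the first differing index. A minimal, SMS-free sketch with a hypothetical decomposition layout:

import numpy as np

def compare_var(reference, pieces, offsets, name="x", atol=0.0):
    """Reassemble a decomposed run's pieces (hypothetical column-block layout) and
    compare them with the reference array, reporting the first differing index."""
    assembled = np.empty_like(reference)
    for piece, off in zip(pieces, offsets):
        assembled[:, off:off + piece.shape[1]] = piece
    bad = np.argwhere(np.abs(assembled - reference) > atol)
    if bad.size:
        i, j = (int(k) for k in bad[0])
        raise RuntimeError(f"{name}({i},{j}): serial={reference[i, j]} parallel={assembled[i, j]}")
    return True

ref = np.arange(20.0).reshape(4, 5)
pieces = [ref[:, :2].copy(), ref[:, 2:].copy()]
print(compare_var(ref, pieces, offsets=[0, 2]))   # True: the two runs agree
pieces[1][1, 0] += 1.0                            # inject a parallelization bug
compare_var(ref, pieces, offsets=[0, 2])          # raises and names the offending index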
3 Development of WRF Portal
3.1 Background
NOAA depends on its environmental models to understand and predict the earth system. To improve forecast accuracy, modelers increase model resolution, input more diverse high-density data sets, and increase the complexity of the model. Model development requires testing to ensure improved accuracy; testing the models has always been an integral part of model development, but as the models become more complex, the need to test them more systematically increases. Systematic testing and careful analysis of results should be a regular part of model development but is not normally done because it is difficult to do; it requires coordinated access to data and computing resources, and many staff hours spent editing configuration scripts, monitoring runs, checking results, and handling contingencies.
To obtain statistically reliable results, many model runs need to be made, requiring huge amounts of data and access to significant HPC resources. Due to its inherent complexity, testing requires significant human effort. A single model evaluation can require hundreds to thousands of tests run over four seasons and multiple model domains in order to obtain accurate quantitative results. Few tools exist to support systematic test and evaluation of model performance, which inhibits the development and use of next generation forecast models. To simplify model test and evaluation, ESRL has developed an application, called WRF Portal (www.wrfportal.org), that is designed to compose workflows, make model runs, monitor the workflows while they are running, and inform the user when they have completed or errors have occurred. WRF Portal was developed to support the mission of the Developmental Testbed Center (DTC), a joint NOAA and NCAR facility, to provide an environment in which new modeling research and development can be conducted, and promising developments can be moved into NWS operations (www.dtcenter.org). Typical testing scenarios at the DTC involve several model variants containing different run-time settings; each configuration requires hundreds of tests, including four 30-
day seasonal tests. Sensitivity tests might evaluate model performance over multiple model domains, at varying resolutions, and with differing dynamics or physics settings. Given the types of tests and the volume of runs that must be managed, WRF Portal has proven instrumental in helping modelers manage configurations, make model runs, monitor their execution, and view results.
3.2 WRF Portal
WRF Portal contains three main functions: configuration, run-time management, and analysis of results. Configuration is used to define the set of tasks and associated executables. Two windows, as illustrated in Figure 5, are used to prepare a workflow for running. The Model Configuration window (left window) is used to define the tasks that will be run, set run-time environment variables, and edit the run scripts, namelists, and other configuration files. The Run Configuration window (right window) defines the workflow used, sets the dates over which the workflow will be run, identifies the system where it will be run and defines run-time characteristics including number of processors, maximum time permitted to run, and the number of times the task should be retried. Once the user launches the workflow(s), directories are created, configuration files are copied and a server-side workflow manager is invoked to manage the run. Secure SHell (SSH) is used to authenticate to the HPC system so workflows can be run on the remote system. Since process execution must be reliable, significant work has been done on the workflow manager that manages execution on the remote machine. Once the workflow is running, a monitor window will appear so users can track its run-time execution of each task in the workflow. Figure 6 illustrates the monitoring capabilities of WRF Portal for the workflow named phlr26 that is running on ijet.fsl.noaa.gov. The upper part of the screen lists different workflows that have been or are being run and their current status. The lower screen shows a detailed listing of the selected run (the selected configuration is called runjh11-26). In the detailed screen, each task in the workflow is listed along with its current status, run-time and other system specific information; the information is updated every minute. In the event an error occurs, users can view the error and log files on the remote system from the monitor window. After processing is complete, users can perform analysis of results using graphical display capabilities available from the Portal, or compare differences between model configurations to understand differences in model results.
Figure 5: WRF Portal Configuration windows are used to define, configure, and prepare a workflow for running. In this example, the user is requesting that the model configuration "phlr26" be run on ijet for a single date "2006-04-01 12 UTC", where 100 processors will be used for the WRF model run.
Figure 6: WRF Portal monitor window is used to track the progress of workflows, obtain detailed task status information, and navigate to directories and view files on the remote system.
3.3 Future Work
WRF Portal is now being extended so it can be used to run other models and on other HPC systems, including all three of NOAA's HPC systems. This will potentially link the NOAA HPC facilities under a Java application that can run on any computer and will permit modelers to access the HPC resources they need in order to run their models and conduct scientific research. Additional work is planned to link user-selectable verification packages into the portal for further model analysis.
References
[1] Govett, M., L. Hart, T. Henderson, J. Middlecoff, and D. Schaffer, The Scalable Modeling System: Directive-Based Code Parallelization for Distributed and Shared Memory Computers, Journal of Parallel Computing, 29(8), 995-1020, August 2003.
[2] Govett, M., M. Doney, and P. Hyder, The Grid: An IT Infrastructure for NOAA in the 21st Century, Proceedings of the Eleventh ECMWF Workshop on the Use of Parallel Processors in Meteorology, Reading, UK, 25-29 October 2004.
[3] Henderson, T., C. Baillie, S. Benjamin, T. Black, R. Bleck, G. Carr, L. Hart, M. Govett, A. Marroquin, J. Middlecoff, and B. Rodriguez, Progress Toward Demonstrating Operational Capability of Massively Parallel Processors at the Forecast Systems Laboratory, Proceedings of the Sixth ECMWF Workshop on the Use of Parallel Processors in Meteorology, ECMWF, Reading, England, November 1994.
COMPUTATIONAL COST OF CPTEC AGCM
J. PANETTA*, S. R. M. BARROS†, J. P. BONATTI*, S. S. TOMITA* and P. Y. KUBOTA*
* INPE/CPTEC
† IME/USP
Abstract
This work describes the development history, current characteristics and computational cost of the CPTEC atmospheric global circulation model. It derives and validates a computational cost model that predicts flop counts under semi-Lagrangian or Eulerian formulations on quadratic full or reduced grids, as well as semi-Lagrangian linear full or reduced grids. Costs of high-resolution runs under all formulations are presented, compared and justified.
1 Introduction
Atmospheric global circulation models (AGCM) are central tools for numerical weather forecasting, climate research and global change studies. Production weather centers worldwide continuously improve the quality and detail of daily AGCM numerical predictions by including new physical parameterizations, enhancing model resolution and using advanced data assimilation systems. AGCMs are computationally expensive tools: their computational cost is related to the fourth power of the horizontal resolution for fixed forecasting time and vertical resolution. A long-lasting requirement for increasing AGCM resolution has been a driving force for the acquisition of powerful computers by national weather centers and for the production of improved machinery by the computer industry. Algorithmic improvements in the last decade such as semi-Lagrangian dynamics and reduced grids, although maintaining the fourth power dependency on resolution, have reduced the required number of floating point computations (flop), allowing production AGCM resolution to evolve at a faster pace than before. Execution time reductions by factors of 50 and 72 due to the combined use of such improvements have been reported [Temperton 1999].
Even with such algorithmic reductions, there has been considerable debate on the limits to increasing resolution imposed by the cost of the spectral transform (see [Temperton 1999] and cited references). At 10 km horizontal resolution with a classical AGCM formulation, the Earth Simulator reports that 60% to 70% of the execution time is spent on the spectral transform method [Shingu 2002], even at record breaking processing speeds. This work quantifies AGCM cost as a function of resolution and algorithmic improvement by developing, validating and applying a computational cost model that uses flop count, and not execution time, as a cost measure. This machine independent approach allows mapping to any machine, given estimates of execution speeds for the AGCM major components. It allows cost comparison among Eulerian and semi-Lagrangian formulations as well as quadratic, linear, full or reduced grids. Section 2 describes CPTEC AGCM development and production histories. Section 3 describes the current contents of the CPTEC AGCM. Section 4 derives and validates the cost model, which is used to forecast high resolution costs in section 5. Conclusions are drawn in section 6.
2 Development and Production Histories
CPTEC AGCM descends from the 1985 National Centers for Environmental Prediction (NCEP) production model. NCEP granted code access to the Center for Ocean, Land and Atmosphere Studies (COLA), which included additional features, generating independent development tracks. CPTEC started operations in 1994 with a single processor NEC SX-3. The early production AGCM, named CPTEC-COLA, was COLA version 1.7 with local modifications on spectral truncation (from rhomboidal to triangular) and on vectorization. Continuous dynamics and physics modifications made by CPTEC generated new versions of CPTEC-COLA, departing from COLA continuous work. Around 1998, CPTEC-COLA was updated with the inclusion of COLA version 1.12 modifications. The acquisition of a NEC SX-4 shared memory parallel processor in 1998 required model parallelization, implemented by NEC parallel directives. Model description and computational characteristics of the CPTEC-COLA production versions on the SX-4 are available in [Cavalcanti 2002] and [Tamaoki 1999]. A long term model modernization project started in 2000. The project was centered on a review of the model dynamics formulation to include semi-Lagrangian dynamics as an option to the original Eulerian dynamics. The project also required full code rewriting to accommodate the formulation review, to allow the choice of linear or quadratic, full or reduced grids, to simplify the inclusion of new physical process parameterizations and to
ease the introduction of OpenMP parallelism. Physical processes were also updated, with the insertion of the Souza shallow cumulus parameterization [Souza 1999]. This modernization plan marks a full departure from the original CPTEC-COLA and its versions. In late 2004, after 20 man-years of work, the resulting code (referred to as CPTEC-OMP) faced pre-production runs. The acquisition of a multi-node NEC SX-6 in 2004 required the insertion of distributed memory parallelism without destroying shared memory parallelism. Meanwhile, successful research introduced a new set of physical process parameterizations such as Grell convection [Grell 2002], soil humidity initialization [Gevaerd 2006], and CLIRAD and UKMO short wave radiation [Chou 1996, Tarasova 2006, Edwards 1996]. The resulting model will be referred to as CPTEC-MPI-OMP. Model development currently continues in at least three distinct directions. Generation of a massively parallel version targeting 1000 processors is about to start. CLIRAD and UKMO long wave radiation are being inserted and their accuracy tested. Coupling with the MOM-4 ocean model and with a chemistry mechanism generated by the SPACK preprocessor is also under way. CPTEC production history started in 1994 with a T062L28 CPTEC-COLA Eulerian, quadratic and full grid formulation. Production model resolution and formulation were maintained until April 2000, when resolution was increased from T062L28 to T126L28. During the 1996-2000 period the production version was continuously enhanced by adjustments in physics parameterizations and execution time optimizations. In April 2005, production moved to an Eulerian, quadratic and reduced grid formulation of CPTEC-OMP at T213L42 resolution, which is the current production model. A T299L64 Eulerian, quadratic and reduced grid formulation of CPTEC-OMP is currently under pre-production tests, about to be promoted to production. Meanwhile, a T511L64 resolution CPTEC-MPI-OMP with semi-Lagrangian, linear and reduced grid formulation is being prepared for pre-production tests.
3 Model Description

The AGCM is a global spectral model. It allows runtime selection of six formulations: Eulerian or semi-Lagrangian dynamics on quadratic full or reduced grids, as well as semi-Lagrangian linear full or reduced grids. These six model formulations will be represented by a three letter string where the first letter denotes dynamics (E for Eulerian or S for semi-Lagrangian), the second letter denotes quadratic (Q) or linear (L) grid and the third letter full (F) or reduced (R) grid. As an example, SLF stands
for semi-Lagrangian, linear and full grid. The AGCM is a hydrostatic model. The dynamics formulation uses a three time level scheme and a vertical sigma coordinate to solve the usual primitive equations. An implicit horizontal diffusion scheme improves stability, allowing high resolution runs. Legendre transforms are formulated in matrix form, allowing the use of fast libraries. Dry physics is composed of the SSiB land surface module, Mellor-Yamada level 2 turbulence, gravity wave drag parameterization, a choice of CLIRAD, UKMO or Lacis and Hansen short wave radiation, and Harshvardhan long wave radiation. Wet physics contains a choice of Kuo, relaxed Arakawa-Schubert or Grell deep convection, a choice of Souza or Tiedtke shallow convection, and large scale convection. The model code comprises about 65000 lines. It is written in modular Fortran 90 with fully dynamic memory allocation. There are no COMMON constructs - physics routines are argument driven, dynamics refer to global fields by use only, and the transform data structure is encapsulated. All variables are declared and all procedure arguments carry the desired intent. Domain decomposition parallelism designed for a dozen nodes uses MPI 1.1 standard library calls. Within each domain partition, shared memory parallelism uses OpenMP 2.0 directives. Portability is achieved for 32 or 64 bit Linux systems, over Itanium or Xeon processors, with Intel or PGI compilers and MPICH or LAM MPI. Binary reproducibility is an achieved design goal: it is realised on each of these machines with any number of processors and any parallelism scheme. The code design allows insertion of new column-based physics parameterizations without affecting parallelism, provided that the inserted code is thread safe. Physical processes were recoded to adhere to a coding standard where inner loops sweep atmospheric columns while outer loops deal with vertical levels. Physics is prepared to process any non-empty set of atmospheric columns, accommodating both cache based and flat memory machines by specifying the number of atmospheric columns to be processed at each invocation.
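As an illustration of that coding standard, here is a minimal Python sketch (the model itself is written in Fortran 90; the routine name and the tendency formula are invented for the example): the outer loop runs over vertical levels and the inner loop over whatever block of atmospheric columns the driver hands in.

```python
import numpy as np

def toy_column_physics(t, q):
    """Hypothetical column-based physics kernel following the coding standard
    described in the text: outer loop over vertical levels, inner loop over
    the block of atmospheric columns passed in by the driver."""
    n_columns, n_levels = t.shape
    tend = np.zeros_like(q)
    for k in range(n_levels):            # outer loop: vertical levels
        for i in range(n_columns):       # inner loop: atmospheric columns
            # placeholder computation; a real parameterization would update
            # tendencies from the column state at (i, k)
            tend[i, k] = 1.0e-3 * q[i, k] / (1.0 + 1.0e-2 * t[i, k])
    return tend

# The driver may pass any non-empty block of columns, so the block size can be
# tuned for cache-based or flat-memory machines.
block, levels = 32, 28
t = 250.0 + 50.0 * np.random.rand(block, levels)
q = 1.0e-2 * np.random.rand(block, levels)
print(toy_column_physics(t, q).shape)
```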
4 Cost Model Derivation
This section derives an analytic model for the computational cost of CPTEC AGCM, measured by the number of floating point operations (flop) as a function of problem size (horizontal and vertical resolution) and AGCM formulation.
4.1 Notation

A computational cost model is a linear combination of cost parcels weighted by their execution frequency, where each cost parcel accounts for one AGCM component. Since all components are executed every timestep, except for short and long wave radiation, which are executed at fixed forecasting times, the cost model adds the cost of short and long wave radiation, weighted by their own number of timesteps, to the cost of the remaining components, weighted by the full number of timesteps.

Table 1: Cost model input variables

  n_t   timestep executions
  n_s   short wave radiation executions
  n_l   long wave radiation executions
  v     vertical levels
  f     Fourier waves (model truncation plus one)
  s     spectral coefficients
  p_z   grid longitudes (zonal points)
  p_m   grid latitudes (meridional points)
  g     grid surface points
  g_l   grid surface points over land
  g_o   grid surface points over ocean or ice
  S_f   full fields for spectral to grid transforms
  S_s   surface fields for spectral to grid transforms
  G_f   full fields for grid to spectral transforms
  G_s   surface fields for grid to spectral transforms
  c     transform spectral contributions
  F     FFT cost component

Table 1 contains the cost model input variables and their meaning (in most cases, a number of items). Input variables are computed over a single vertical level (except for v). The last six table entries are designed for the Legendre transform cost analysis and deserve further explanation. The number of transformed fields changes with model formulation, since the Eulerian formulation demands more fields to be transformed than the semi-Lagrangian formulation. The last two variables are detailed in the transform cost analysis of section 4.3. Cost analysis splits the AGCM into three major components: dynamics, physics and transforms. A detailed cost for each component follows.
4.2 Dynamics
Dynamics is split into spectral dynamics and grid dynamics. Spectral dynamics mainly consists of doubly nested loops that sweep all spectral coefficients of all vertical levels of some fields, so its cost can be modeled as $k s v$. But there are exceptions. The time-split semi-implicit computation dominates the spectral dynamics cost; since it contains one more loop over the verticals, its cost is $k s v^2$. We neglect other exceptions, such as the dissipative filter, whose cost is proportional to the number of dissipated associated Legendre functions. These neglected items are absorbed by the constants, which vary with model formulation. The spectral dynamics cost model is

\[ k_1 s v^2 + k_2 s v \]

where $k_1$ and $k_2$ are constants to be determined.

Grid dynamics has a similar form. It mainly consists of doubly nested loops that sweep all grid points of all fields, except for the computation of the geopotential gradient, which contains a third loop over the verticals. As in spectral dynamics, we neglect the impact of model formulation, accepting one constant value per formulation. The grid dynamics cost model is

\[ k_3 g v^2 + k_4 g v \]

where $k_3$ and $k_4$ are constants to be determined.

4.3 Transforms
First, consider the spectral to grid transform. Split it into two components: the spectral to Fourier transform and the Fourier to grid transform.

The spectral to Fourier transform consists of two parts: the generation of even and odd Fourier components from spectral coefficients, and the composition of Fourier coefficients from their even and odd components. Both parts are computed for each vertical of every transformed field. For a single field vertical, even and odd Fourier components are generated by inner products of spectral coefficients with associated Legendre functions. Inner product length decreases linearly with Fourier wave number, and there is one even (odd) inner product for each latitude and Fourier wave number. On full grids this cost could be modeled by $k\, p_m f(f+3)$, where $f(f+3)$ arises from adding inner products of decreasing lengths over all Fourier waves. But on reduced grids, inner product length and the number of Fourier waves vary with latitude, as specified by the Courtier-Naughton criteria. Instead of approximating this cost, we take the sum of inner product lengths over all latitudes as a cost model input parameter (denoted by $c$). Consequently, the cost of computing even and odd Fourier components is $kc$ for each field vertical. The composition of Fourier coefficients from even and odd components is proportional to the number of Fourier coefficients. For a single field vertical this cost occurs for every latitude, and is modeled by $k\, p_m f$. Including the number of verticals to be transformed and bringing the two parts together, the spectral to Fourier transform cost is modeled by

\[ k_5 (S_f v + S_s)(c + p_m f) \]

where $S_f v + S_s$ accounts for all transformed verticals (varying with model formulation), $c$ accounts for the inner products and $p_m f$ accounts for obtaining Fourier coefficients from even/odd components.

The Fourier to grid transform consists of FFTs of length equal to the number of Fourier waves. One FFT is computed for each latitude and transformed field vertical. On full grid formulations FFT lengths are latitude independent, leading to a $k\, p_m f \log_2(f)$ cost for each field vertical, since an FFT of length $f$ has cost $f \log_2(f)$. But on reduced grids FFT length varies with latitude, and this approximation introduces an unacceptably large error. After extensive experimentation, we adopted an FFT cost of $kF$ per transformed vertical, where $F$ is the sum over latitudes of $f \log_2(f)$, with $f$ varying with latitude. The value of $F$ is computed during AGCM initialization. With that, the Fourier to grid transform is modeled by

\[ k_6 (S_f v + S_s) F \]

Even with this very simple model, accuracy is not fully acceptable. Measurements show that $f \log_2(f)$ is not a good estimate of the FFT computational cost over the range of FFT sizes used during experimentation. No better alternative was found.

We now consider the grid to spectral transform, whose cost is similar to that of the spectral to grid transform, except for the number of transformed fields. The grid to Fourier transform cost is similar to the Fourier to grid cost and is modeled by

\[ k_7 (G_f v + G_s) F \]

while the Fourier to spectral transform has cost similar to the spectral to Fourier transform and is modeled by

\[ k_8 (G_f v + G_s)(c + p_m f) \]
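A minimal Python sketch (with an invented reduced-grid layout) of how the two transform-related inputs c and F of Table 1 might be accumulated during initialization; the exact per-wave inner-product lengths depend on the grid and truncation, so the lengths used here are purely illustrative:

```python
import math

def transform_cost_inputs(fourier_waves_per_latitude):
    """Accumulate the transform-related cost-model inputs:
    c: sum over latitudes of the inner-product lengths (one inner product per
       Fourier wave, lengths decreasing with wave number), and
    F: sum over latitudes of f * log2(f), with f varying with latitude on
       reduced grids (constant on full grids)."""
    c = 0
    F = 0.0
    for f in fourier_waves_per_latitude:
        c += f * (f + 1) // 2          # illustrative: lengths f, f-1, ..., 1
        F += f * math.log2(f)
    return c, F

# Toy reduced grid: latitudes near the poles keep fewer Fourier waves.
print(transform_cost_inputs([8, 16, 32, 64, 64, 32, 16, 8]))
```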
4.4 Physics
Split physics into wet physics and dry physics. Furthermore, split dry physics into short wave radiation, long wave radiation and the remaining dry physics.
Wet physics has a cost proportional to the number of grid points and verticals, and is modeled by

\[ k_9 g v \]

Short wave radiation contains doubly nested loops that sweep all grid points at all vertical levels. Consequently, short wave radiation is modeled by

\[ k_{10} g v \]

Long wave radiation contains doubly and triply nested loops. Doubly nested loops sweep all grid points at all vertical levels. Triply nested loops include one more loop over the verticals, which is a triangular loop. A detailed analysis leads to

\[ k_{11} g (v+2)(v+1) + k_{12} g v \]

where the first term covers the triply nested triangular loops and the second term the doubly nested loops.

The remaining dry physics is split into five components: turbulence, gravity wave drag, land surface, sea and ice surface, and all remaining dry physics computations. These components were selected because of their non-negligible costs on low resolution models and their variable cost factors. Turbulence, gravity wave drag and the remaining dry physics consist of doubly nested loops over all horizontal grid points and a variable number of verticals. Turbulence is modeled by leaving out one vertical level, while additional levels were incorporated into the gravity wave drag and remaining dry physics cost models, accounting for intermediate grid point computations. These three components are modeled by

\[ k_{13} g (v-1) + k_{14} g (v+1) + k_{15} g (v+6) \]

where the first term represents turbulence, the second gravity wave drag and the last the remaining dry physics. Land surface computation is proportional to the number of land surface points, while the sea and ice computation is proportional to the number of sea and ice points. Both are modeled by

\[ k_{16} g_l + k_{17} g_o \]

4.5 Including timesteps
Computational cost for a fixed forecasting time is the sum of the previously established cost parcels multiplied by their execution frequency. As previously stated, short and long wave radiation have a fixed execution frequency, while the remaining cost components have an execution frequency that depends on resolution. Total computational cost is then modeled by the frequency-weighted sum of the parcels.
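Written out (a reconstruction, assembling the parcels derived above with the frequencies just described):

\begin{align*}
C =\;& n_t\,\big[\,k_1 s v^2 + k_2 s v + k_3 g v^2 + k_4 g v + k_5 (S_f v + S_s)(c + p_m f) + k_6 (S_f v + S_s)F \\
    &\quad\; + k_7 (G_f v + G_s)F + k_8 (G_f v + G_s)(c + p_m f) + k_9 g v \\
    &\quad\; + k_{13}\, g (v-1) + k_{14}\, g (v+1) + k_{15}\, g (v+6) + k_{16}\, g_l + k_{17}\, g_o \,\big] \\
    &+ n_s\, k_{10}\, g v \;+\; n_l\,\big[\, k_{11}\, g (v+2)(v+1) + k_{12}\, g v \,\big].
\end{align*}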
4.6 Computational Complexity
The relative impact of each cost component on the total cost requires knowledge of the values of the constants $k_1$ to $k_{17}$. But the dominating cost factors as resolution increases follow from complexity analysis. Complexity can be written as a function of spectral truncation ($t$) and vertical levels ($v$); it suffices to write the cost model input variables of Table 1 as functions of these two variables. Fourier waves ($f$) and the numbers of longitudes and latitudes ($p_z$ and $p_m$) are $O(t)$. The FFT cost component ($F$) is $O(t^2 \log_2(t))$. Spectral coefficients ($s$) and grid surface points ($g$, $g_l$, $g_o$) are $O(t^2)$, while the transform spectral contributions ($c$) are $O(t^3)$. It should be noted that the number of timesteps ($n_t$) is $O(t)$ due to the CFL stability criterion, while the short and long wave radiation frequencies ($n_s$ and $n_l$) are kept constant. Direct substitution and $O(\cdot)$ analysis leads to $O(t^2 v^2)$ complexity for dynamics, $O(t^3 v)$ for transforms, $O(t^2 v)$ for short wave radiation, $O(t^2 v^2)$ for long wave radiation and $O(t^2 v)$ for the remaining physics. Consequently, each component's complexity is either $O(t^3 v)$, if $t$ grows faster than $v$, or $O(t^2 v^2)$ otherwise. Inserting the timestep dependency on $t$ and assuming that $t$ grows faster than $v$, the computational complexity of the AGCM is dominated by the transform's $O(t^4 v)$ complexity.
4.7 Experimental Setting
The values of $k_1$ to $k_{17}$ were obtained experimentally. The AGCM was instrumented to report the flop count measured by NEC SX-6 hardware counters at the seventeen selected code sections. Instrumentation also reported the flop count for the entire integration, for cost model validation purposes. For each of the six formulations, the AGCM executed a 24 hour forecast at nine grid resolutions: 210km28lev, 105km28lev, 78km28lev, 78km42lev, 63km42lev, 52km42lev, 52km64lev, 42km64lev and 35km64lev. A tentative value for each of the seventeen constants resulted from dividing the measured flop count by the corresponding cost factor computed at the experimental resolution. This procedure generated nine tentative values for each constant for each formulation. A least squares procedure over the nine tentative values produced the final constant value for each formulation. Table 2 contains the least squares fitted constant values for each model formulation.
Table 2: Measured constants

Cost model derivation assumes that constant values vary with formulation, but some fluctuations require explanation. The variation of the linear spectral dynamics constant ($k_2$) is attributed to the unmodeled dissipative filter. The large variation of the linear grid dynamics constant ($k_4$) is attributed to the semi-Lagrangian transport, absent in the Eulerian formulation. The variation of the FFT constant values ($k_6$ and $k_7$) is due to an unsatisfactory $f \log_2(f)$ cost approximation over the problem range tested. The variation of the gravity wave drag constant ($k_{14}$) has unknown reasons. An indication of model accuracy is the spread of the tentative constant values that are input to each least squares procedure. It is natural to expect that the set of nine tentative values for each constant is spread around the computed least squares value. Spread was measured by the
standard deviation of each set of nine tentative values. The maximum standard deviation over all formulations (denoted by $\sigma$) is reported in the last column of Table 2, showing an exceptionally tight data spread that indicates an adequate representation of the selected AGCM cost components. Table 2 indicates that some constant values seem to be grouped in clusters on full and reduced grids. The value of $k_6$ is 4.98 for all full grids and close to 5.88 for all reduced grids. A similar effect occurs for $k_7$ and $k_{14}$, and for the semi-Lagrangian components of $k_4$. But $k_2$ has another clustering form: its value does not change between reduced and full grids, but changes from Eulerian to semi-Lagrangian and from quadratic to linear grid. Further research is required to fully understand the cost model behavior.
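A sketch of the fitting procedure described in section 4.7, for a single constant and cost parcel (the cost-factor and flop-count numbers below are placeholders, not measured data): each run yields a tentative value, and a least-squares fit over the nine tentative values, which reduces to their mean, gives the final constant together with the spread reported as the standard deviation in Table 2.

```python
import numpy as np

def fit_constant(measured_flops, cost_factors):
    """One tentative value per run, k_i = measured_i / factor_i; the
    least-squares fit of a single constant to the tentative values is their
    mean, and the spread is their standard deviation."""
    tentative = measured_flops / cost_factors
    return tentative.mean(), tentative.std()

# Placeholder cost factors for nine hypothetical resolutions and synthetic
# "measurements" generated around k = 5.9 with 1% noise.
factors = np.array([1.0e9, 2.3e9, 4.1e9, 5.5e9, 8.0e9, 1.1e10, 1.6e10, 2.4e10, 3.3e10])
measured = 5.9 * factors * (1.0 + 0.01 * np.random.randn(factors.size))
print(fit_constant(measured, factors))
```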
4.8 Cost model validation
The cost model is validated by comparing predicted and measured costs over two problem size ranges: inside and outside the range used to obtain the seventeen constants. Validation within the problem range used to compute the constants does not require further AGCM executions, since each execution reports the total flop count for the entire integration; it suffices to compare cost model predictions with the reported flop counts. Figure 1 reports the prediction error, computed as $(1 - p/m)$ where $p$ is the predicted cost and $m$ the measured cost. It shows that the cost model error is below 2% and that the error decreases as resolution increases. Model validation outside the used problem range requires further AGCM executions. The AGCM was executed with the SQR formulation at 20km96lev resolution with a semi-Lagrangian timestep equal to six Eulerian timesteps. The reported flop count of 173.842 TFlop compares favorably with the cost model prediction of 172.718 TFlop, producing a cost model prediction error of 0.65%.
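The quoted out-of-range check can be reproduced directly from the error definition (a trivial sketch):

```python
def prediction_error(predicted_tflop, measured_tflop):
    """Cost-model prediction error as defined in the text: 1 - p/m."""
    return 1.0 - predicted_tflop / measured_tflop

# 20 km, 96-level SQR run quoted above.
print(f"{prediction_error(172.718, 173.842):.2%}")   # about 0.65%
```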
5 Predicted Costs
This section uses the cost model to estimate flop counts at high grid resolutions and spectral truncations. In the high grid resolution case, spectral truncation changes from the quadratic to the linear grid formulation to accommodate the fixed grid resolution. The high spectral truncation case takes the inverse direction, fixing spectral truncation and changing grid resolution accordingly.
Figure 1: Modeling error
5.1 Fixed grid resolution
Table 3 contains the flop count (in TFlop) predicted by the cost model for a single forecast day at selected high resolutions. It also contains the cost of the major AGCM components in absolute and relative terms. Data was generated with a fixed radiation invocation frequency and a semi-Lagrangian timestep that is triple the Eulerian timestep. Summarizing, the Eulerian quadratic formulation has the highest flop count of all formulations, followed by semi-Lagrangian linear, followed by semi-Lagrangian quadratic. As expected, the reduced grid is cheaper to compute than the full grid. A detailed analysis follows.
5.1.1 Eulerian to Semi-Lagrangian Quadratic
Eulerian quadratic flop count is 1.9 to 2.6 times higher than the corresponding semi-Lagrangian quadratic count, which is lower than expected since the number of timesteps was reduced by a factor of three. Dynamics cost is slightly higher in the semi-Lagrangian than in the Eulerian formulation (about 8% on full grids and about 2% on reduced grids). Semi-Lagrangian dynamics carries the expensive transport cost, which is absent from the Eulerian formulation, but the cost of the remaining semi-Lagrangian dynamics components is reduced (with respect to the similar Eulerian components) by the larger semi-Lagrangian timestep. Consequently, the cost increase due to transport is almost balanced by the timestep reduction. Transform cost decreases from Eulerian to semi-Lagrangian by a factor larger than the timestep increase (about 3.5), due to the reduction in the number of transformed fields. Physics cost decreases by a factor of 1.3 to 1.7, which is lower than expected; that is due to the fixed (timestep independent) radiation cost. Summarizing, the large gain on transforms is reduced by the expensive radiation.

Table 3: Predicted TFlop per forecasting day for fixed grid resolutions

Form  Comp    25km96lev      20km96lev      15km96lev      10km96lev
EQF   Trans    249 (60%)      591 (66%)     1846 (73%)     7576 (81%)
      Dyna      53 (13%)      104 (12%)      252 (10%)      734 (8%)
      Phys     116 (28%)      200 (22%)      422 (17%)     1055 (11%)
      Total    418            895           2520           9364
EQR   Trans    194 (62%)      461 (69%)     1432 (76%)     5879 (83%)
      Dyna      39 (13%)       76 (11%)      182 (10%)      534 (8%)
      Phys      78 (25%)      134 (20%)      278 (15%)      696 (10%)
      Total    311            671           1892           7109
SQF   Trans     71 (33%)      169 (40%)      530 (49%)     2186 (61%)
      Dyna      58 (27%)      112 (27%)      273 (25%)      795 (22%)
      Phys      84 (40%)      138 (33%)      270 (25%)      613 (17%)
      Total    213            419           1073           3594
SQR   Trans     55 (36%)      132 (44%)      411 (53%)     1697 (64%)
      Dyna      40 (26%)       78 (26%)      187 (24%)      546 (21%)
      Phys      57 (37%)       92 (31%)      177 (23%)      403 (15%)
      Total    152            302            776           2646
SLF   Trans    150 (51%)      357 (58%)     1132 (67%)     4734 (76%)
      Dyna      63 (21%)      121 (20%)      294 (17%)      856 (14%)
      Phys      84 (28%)      138 (22%)      270 (16%)      613 (10%)
      Total    297            615           1696           6203
SLR   Trans    117 (53%)      279 (61%)      881 (70%)     3675 (78%)
      Dyna      45 (21%)       86 (19%)      206 (16%)      604 (13%)
      Phys      57 (26%)       92 (20%)      177 (14%)      403 (9%)
      Total    219            456           1264           4682
5.1.2 Semi-Lagrangian Quadratic to Linear
On a fixed grid resolution, the semi-Lagrangian linear formulation is more expensive than the semi-Lagrangian quadratic formulation due to the increase (about 50%) in spectral truncation. Dynamics cost barely changes, since the dominant cost in semi-Lagrangian formulations - the transport - is computed on the fixed grid. Transform cost is increased by a factor of 2.1 from quadratic to linear, due to the nonlinear (with respect to spectral truncation) cost. Physics cost does not change, since grid size, timestep and radiation frequency are identical in both formulations. When the fixed physics cost is added to the roughly fixed dynamics cost and the increased transform cost, the semi-Lagrangian linear formulation costs from 1.4 to 1.7 times the grid equivalent semi-Lagrangian quadratic cost.
5.1.3 Full to Reduced Grids
A reduced grid has about 33% fewer points than the corresponding full grid, but the flop count reduction is about 26%. Dynamics cost does not scale linearly with the grid point reduction since the constant spectral coefficient count propagates to a constant spectral dynamics cost, reducing the gain in dynamics to about 28%. In the transforms, FFT lengths are decreased at the same ratio as grid points. But the reduction does not scale linearly to the transform cost (reduced by 22%), since both FFT and Legendre transform costs are nonlinear in the number of Fourier waves. Physics cost reduction is linear (33%) since all physics cost terms are linear in the number of grid points. Summarizing, the fixed spectral dynamics and nonlinear transform costs reduce the linear gain on physics.

5.1.4 Increasing Semi-Lagrangian Timestep
The cost model predicts the impact of increasing the semi-Lagrangian timestep from the triple Eulerian timestep baseline to four, five and six Eulerian timesteps. Figure 2 reports the cost of the increased timesteps, relative to the baseline, for the SQR formulation; the relative cost of a linear gain over the baseline is shown as a reference. Cost does not scale linearly with the timestep increase because of the fixed radiation cost. For a fixed grid resolution, timestep enhancements have decreasing returns, due to the fixed radiation cost. For a fixed timestep enhancement, increasing resolution has increasing returns, due to the increasing weight of the transforms (higher complexity), which lowers the impact of radiation on total cost.
Figure 2: Semi-Lagrangian Quadratic Relative Cost
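A minimal sketch of why the returns diminish, assuming (purely for illustration) that the total cost splits into a radiation part that is independent of the timestep and a remainder that scales inversely with it; the radiation fraction used below is a placeholder, not a value from the paper:

```python
def relative_cost(timestep_factor, radiation_fraction):
    """Cost relative to the triple-timestep baseline when the semi-Lagrangian
    timestep is enlarged by `timestep_factor`. The radiation part is invoked
    at fixed forecast times and therefore does not shrink with the timestep;
    everything else is assumed to scale with the number of timesteps."""
    return radiation_fraction + (1.0 - radiation_fraction) / timestep_factor

for enlargement in (4, 5, 6):                 # in units of the Eulerian timestep
    factor = enlargement / 3.0                # baseline is the triple timestep
    print(enlargement, round(relative_cost(factor, 0.15), 2))
```

With a larger radiation share the curve flattens further, which is the decreasing-returns behaviour described in the text.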
5.2 Fixed spectral truncation
Table 4 compares formulation costs for fixed spectral truncation and variable grid resolution. It is identical to Table 3 except for the linear grid; the remaining cases are repeated for the benefit of the reader. As in Table 3, costs are reported in TFlop for a single forecast day, and data was generated with the same fixed radiation invocation frequency and with a semi-Lagrangian timestep that is triple the Eulerian timestep. For fixed spectral truncations, the semi-Lagrangian linear formulation has the lowest flop count, followed by semi-Lagrangian quadratic and Eulerian quadratic. The full grid requires more flops than the reduced grid. Trading the most expensive formulation (EQF) for the cheapest formulation (SLR) reduces the flop count by an impressive factor of six. For a detailed analysis, it suffices to study the semi-Lagrangian quadratic to linear transition.
5.2.1 Semi-Lagrangian quadratic to semi-Lagrangian linear
The cost of the semi-Lagrangian linear formulation is about 53% to 59% of the semi-Lagrangian quadratic cost. The profit comes from a 44% reduction in grid point count, since both latitude and longitude counts on linear grids are 2/3 of the corresponding quadratic grid figures. The dynamics reduction factor of 50% is a linear combination of no gain in spectral dynamics, due to the fixed spectral truncation, with a 44% gain in grid dynamics, due to the grid point count reduction. Transform cost is reduced to 66%, due to the 2/3 reduction factor in the number of latitudes. Physics cost reduction is about 44%, since physics cost is linear in the number of grid points. Summarizing, physics gains due to the linear grid are attenuated by the fixed spectral dynamics cost and the lower gain at transforms.

Table 4: Predicted TFlop per forecasting day for fixed spectral truncations

Form  Comp    T533L96        T666L96        T888L96        T1279L96
EQF   Trans    249 (60%)      591 (66%)     1846 (73%)     7576 (81%)
      Dyna      53 (13%)      104 (12%)      252 (10%)      734 (8%)
      Phys     116 (28%)      200 (22%)      422 (17%)     1055 (11%)
      Total    418            895           2520           9364
EQR   Trans    194 (62%)      461 (69%)     1432 (76%)     5879 (83%)
      Dyna      39 (13%)       76 (11%)      182 (10%)      534 (8%)
      Phys      78 (25%)      134 (20%)      278 (15%)      696 (10%)
      Total    311            671           1892           7109
SQF   Trans     71 (33%)      169 (40%)      530 (49%)     2186 (61%)
      Dyna      58 (27%)      112 (27%)      273 (25%)      795 (22%)
      Phys      84 (40%)      138 (33%)      270 (25%)      613 (17%)
      Total    213            419           1073           3594
SQR   Trans     55 (36%)      132 (44%)      411 (53%)     1697 (64%)
      Dyna      40 (26%)       78 (26%)      187 (24%)      546 (21%)
      Phys      57 (37%)       92 (31%)      177 (23%)      403 (15%)
      Total    152            302            776           2646
SLF   Trans     46 (41%)      117 (48%)      342 (58%)     1422 (69%)
      Dyna      28 (25%)       58 (24%)      130 (22%)      380 (18%)
      Phys      38 (34%)       67 (28%)      120 (20%)      272 (13%)
      Total    112            242            592           2075
SLR   Trans     36 (44%)       92 (52%)      267 (61%)     1107 (71%)
      Dyna      20 (25%)       41 (23%)       92 (21%)      270 (17%)
      Phys      26 (31%)       45 (25%)       79 (18%)      180 (12%)
      Total     82            178            438           1557
5.2.2 Increasing semi-Lagrangian timestep
Increasing the timestep of the semi-Lagrangian linear reduced formulation at fixed spectral truncation generates gains similar to those achieved at fixed grid resolution (section 5.1.4). Figure 3 contains the corresponding data, generated and reported as previously.

Figure 3: SLR relative cost as timestep increases

Conclusions are similar: cost reduction is attenuated by the fixed radiation cost. Although the cost figures are similar, gains in the fixed truncation case (SLR) are higher than in the fixed grid case (SQR), due to the lighter relative weight of radiation in the linear grid cost composition compared to the quadratic grid cost composition.
6 Conclusions
This work quantifies the computational cost of CPTEC AGCM. It derives, validates and applies a cost model that reports AGCM flop count, given input resolution and formulation. The cost model is machine independent but also AGCM dependent, since computational cost depends upon specific implementations.
The cost model shows that the Eulerian, quadratic and full AGCM formulation (the classical formulation) requires 9.3 PFlop for a single forecast day at 10 km, 96 level resolution. The use of the reduced grid and semi-Lagrangian dynamics with a triple timestep reduces the flop count to 2.6 PFlop. Moving to a linear grid may reduce the flop count to 1.5 PFlop, if spectral truncation is kept constant, or increase it to 4.6 PFlop if grid resolution is kept constant. Consequently, moving from the Eulerian quadratic full formulation to a semi-Lagrangian linear reduced formulation with a triple timestep reduces the flop count by a factor of 6.2. Larger gains can be achieved by increasing the semi-Lagrangian timestep, if forecast quality is not compromised. Assuming that forecast quality is acceptable with a six-fold timestep, the cost of the semi-Lagrangian quadratic formulation is reduced to 55% of the triple timestep cost, reaching 1.4 PFlop, while the cost of the semi-Lagrangian linear formulation is reduced to 54%, demanding 0.8 PFlop - a reduction factor of about 12 from the classical formulation. These cost reduction factors are explained by a detailed analysis of dynamics, transform and physics costs. Given the execution time restrictions of production runs, is it possible to enhance the production spectral truncation up to T1279L96 in the near future? Elementary arithmetic over the cost model data shows that a 2.32 TFlops effective execution speed is required to execute a 15 day forecast in 1.5 hours with the SLR formulation (15 km grid resolution), and a 4.03 TFlops effective execution speed is required for the SQR formulation (10 km grid resolution). Such speeds are well below the target speeds of the next generation of machines. These conclusions should be taken with caution. Variations in timestep length and radiation invocation frequency cause large changes in the forecasted cost. The quality of the numerical results is unknown, and its dependency on the semi-Lagrangian timestep is also unknown. Finally, the cost model dependency on AGCM details should always be stressed.
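As a quick consistency check on that arithmetic (a sketch; the per-day cost below is back-computed from the quoted figures, roughly the 1.4 PFlop per day cited above for the six-fold-timestep SQR case):

```python
def required_speed_tflops(tflop_per_forecast_day, forecast_days, wallclock_hours):
    """Sustained speed needed to complete the forecast within the wall-clock budget."""
    total_flop = tflop_per_forecast_day * 1.0e12 * forecast_days
    return total_flop / (wallclock_hours * 3600.0) / 1.0e12

# About 1.4-1.45 PFlop per forecast day for SQR at 10 km, 96 levels with a
# six-fold timestep; a 15 day forecast in 1.5 hours then needs roughly 4 TFlops
# sustained, consistent with the 4.03 TFlops figure quoted in the text.
print(round(required_speed_tflops(1450.0, 15, 1.5), 2))
```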
Acknowledgment The authors would like to thank George Mozdzynski for helpful suggestions that substantially increased the quality of this paper.
References

[Cavalcanti 2002] Cavalcanti, I. F. A. et al.: Global Climatological Features in a Simulation Using the CPTEC-COLA AGCM, Journal of Climate, 15, No. 21, 2002.

[Chou 1996] Chou, M. D. and Suarez, M. J.: A Solar Radiation Parameterization (CLIRAD-SW), NASA Tech. Mem. 104606, 1996.

[Edwards 1996] Edwards, J. M. and Slingo, A.: Studies with a flexible new radiation code, I: Choosing a configuration for a large-scale model, Q. J. Royal Meteorol. Soc., 122, 1996.

[Gevaerd 2006] Gevaerd, R. and Freitas, S. R.: Estimativa operacional da umidade do solo para iniciação de modelos de previsão numérica da atmosfera, Parte I: Descrição da metodologia e validação, Revista Brasileira de Meteorologia, 21, 2006.

[Grell 2002] Grell, G. and Devenyi, D.: A generalized approach to parameterizing convection combining ensemble and data assimilation techniques, Geophys. Res. Lett., 29, 2002.

[Shingu 2002] Shingu, S. et al.: A 26.58 Tflops Global Atmospheric Simulation with the Spectral Transform Method on the Earth Simulator, Proceedings of SC2002, 2002.

[Souza 1999] Souza, E. P.: Estudo teórico e numérico da relação entre convecção e superfícies heterogêneas na Região Amazônica, PhD Dissertation, University of São Paulo, 1999.

[Tamaoki 1999] Tamaoki, J. N., Bonatti, J. P., Panetta, J. and Tomita, S. S.: Parallelizing CPTEC's General Circulation Model, Proceedings of the 11th Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), 1999.

[Tarasova 2006] Tarasova, T. A., Barbosa, H. M. J. and Figueroa, S. N.: Incorporation of a new solar radiation scheme into the CPTEC AGCM, INPE-14052-NTE/371, 2006.

[Temperton 1999] Temperton, C.: An overview of recent developments in numerical methods for atmospheric modeling, Recent Developments in Numerical Methods for Atmospheric Modelling, ECMWF Seminar Proceedings, 1999.
LARGE-SCALE COMPUTATIONAL SCIENTIFIC AND ENGINEERING CODE DEVELOPMENT AND PRODUCTION WORKFLOWS

D. E. POST¹ and R. P. KENDALL²
¹DoD High Performance Computing Modernization Program
²Carnegie Mellon University Software Engineering Institute
Overview

Computational science and engineering (CSE) is becoming an important tool for scientific research and development and for engineering design. It is being used to make new scientific discoveries and predictions, to design experiments and analyze the results, to predict operational conditions, and to develop, analyze and assess engineering designs. Each application generally requires a different type of application program, but there are important common elements. As computer power continues to grow exponentially, the potential for CSE to address many of the most crucial problems of society increases as well. The peak power of the next generation of computers will be in the range of 10^15 floating point operations per second, achieved with hundreds of thousands of processors. It is becoming possible to run applications that include accurate treatments of all of the scientific effects that are known to be important for a given application. However, as the complexity of computers and application programs increases, the CSE community is finding it difficult to develop the highly complex applications that can exploit the advances in computing power. We are facing the possibility that we will have the computers but we may not be able to quickly and easily develop large-scale applications that can exploit the power of those computers. To study this issue, we have conducted case studies of many large scale CSE projects and identified the key steps involved in developing and using CSE tools1. This information is helping the computational science and engineering community understand the processes involved in developing and using large-scale CSE projects, and identify the associated bottlenecks and challenges. This will facilitate efforts to develop and implement productivity improvements in computer architectures and in the software support infrastructure. This information can also be used as a blueprint for new projects. While CSE workflows share many features with traditional Information Technology (IT) software project workflows, there are important differences. IT projects generally begin with the specification of a detailed set of requirements2. The requirements are used to plan the project. In contrast, it is generally impossible to define a precise set of requirements and develop a detailed software design and workplan for
the development and application of large-scale CSE projects. This is not because CSE projects have no requirements. Indeed, the requirements for CSE projects, the laws of nature, are very definite and are not flexible. The challenge computational scientists and engineers face is to develop and apply new computational tools that are instantiations of these laws. CSE applications generally address new phenomena. Because they address new issues, they often exhibit new and unexpected behavior. Successful projects identify the properties of nature that are most important for the phenomena being studied and develop and implement computational methods that accurately simulate those properties. The initial set of candidate algorithms and effects usually turns out to be inadequate and new ones have to be developed and implemented. Successful code development is thus a “requirements discovery” process. For these reasons, the development and use of CSE projects is a complex and highly iterative process. While it is definitely not the waterfall model, it does share some of the features of more modern software engineering workflows such as the “spiral” development model’. A typical CSE project has the following steps (Figure 1):
Figure 1. Seven development stages for a computational science project.
1. Formulate Questions and Issues
Define the high level requirements and goals (including the phenomenon to be simulated or analyzed), the stakeholders (the application users and customers, the sponsors, the developers, the validation community, and the computer support), the general approach, the important physical effects necessary for the simulation of a particular phenomenon, and the criteria for success.
2. Develop Computational and Project Approach
Define the detailed goals and requirements; seek input from customers; select numerical algorithms and programming model; design the project including the code architecture; identify the modules and specify interfaces for the individual modules; recruit the team; get the resources; and identify the expected computing environment.
3. Develop the Program
Write and debug the program, including the individual modules, input and output packages, and code controllers.
4. Perform Verification & Validation
Define verification tests and methodology; set up regression test suites and run them; define unit tests and execute them; define useful validation experiments; design and conduct validation experiments; and compare the validation data with code results.
5. Make production runs
Set up problems, schedule runs, execute runs, and store the results.
6. Analyze computational results
Begin analysis during the production run to optimize it; store, analyze and assess the results at the conclusion of the production run; and document the results, analysis and conclusions. Then develop hypotheses and test them with further runs.
7. Make decisions
Make decisions based on the analysis of the results; document and justify the decisions; develop plans to reduce uncertainties and resolve open questions; and identify further questions and issues.

These large tasks strongly overlap each other. There is usually a lot of iteration among the steps and within each step. Quite commonly, it turns out that some of the candidate algorithms are not sufficiently accurate, robust, stable or efficient, and new candidate algorithms need to be identified, implemented and tested. Similarly, comparison with experimental data (validation) usually shows that the initial set of physical phenomena does not include all of the effects necessary to accurately simulate the phenomenon of interest. The project then needs to identify the effects that were not included in the model but are necessary for accurate simulations, incorporate them in the application, and assess whether the new candidate effects are adequate for simulating the target phenomenon. Often this series of steps will be iterated many times. Another key aspect of CSE project workflows is the project life cycle (Figure 2). Large-scale CSE projects can have life cycles of 30 to 40 years or more, far longer than most Information Technology
projects. The NASTRAN engineering analysis code was originally developed in the 1960s and is still heavily used today3. In contrast, the time between generations of computers is much shorter, often no more than two to four years. A typical major CSE project has an initial design and development phase (including verification and initial validation) that often lasts five or more years (Fig. 2). That is followed by a second phase in which the initial release is further validated, improved and further developed based on the experience of users running real problems. A production phase follows, during which the code is used to solve real problems. If the project is successful, the production phase is often the most active development phase. Once the code enters heavy use, many deficiencies and defects become apparent and need to be fixed, and the users generate new requirements for expanded capability. The new requirements may be due to new demands by the sponsor or user community, to the desire to incorporate new algorithmic improvements, or to the need to port to different computer platforms. Even if no major changes are made during the production phase, substantial code maintenance is usually required for porting the code to different platforms, responding to changes in the computational infrastructure and fixing problems due to non-optimal initial design choices. The rule of thumb among many major CSE projects is that about 1 full-time employee (FTE) of software maintenance support is needed for each 4 FTEs of users.
Figure 2. Typical Large-scale Computational Science and Engineering Project Life Cycle
Historically, many, if not most, CSE codes have included only a limited number of effects and were developed by teams of 1 to 5 or so professionals. The few CSE codes that were multi-effect generally developed one module for a new effect and added it to the existing code
(Figure 3). Once the new module had been successfully integrated into the major application, the developers then started development of the next module. This approach had many advantages. It allowed the developers and users to extensively use and test the basic capability of the code while there was time to make changes in the choices of solution algorithms, data structures, mesh and grid topologies and structures, user interfaces, etc. The users were able to verify and validate the basic capability of the code. Then they were able to test each new capability as it was added. The developers got rapid feedback on every new feature and capability. The developers of new modules had a good understanding of the existing code because many of them had written it. It was therefore possible to make optimum trade-offs in the development of good interfaces between the existing code and new modules. On the other hand, serial development takes a long time. If a code has four major modules that take 5 years to develop, the full code won’t be ready for 20 years. Unfortunately, by then the whole code may be obsolete. Certainly the code will have been ported to new platforms many times.
Figure 3. Historic CSE Code Development Workflow for serial development (prior generations of simulation codes serve as prototypes for the new generation).
To overcome these limitations, multi-effect codes are now generally developed in parallel (Figure 4). If a code is designed to include four effects, and the modules for each effect take five years to develop, then the development team will consist of twenty members plus those needed to support the code infrastructure. If all goes well, the complete code with treatments of all four effects will be ready five or six years after the start of the project instead of twenty years. Because the development teams are much larger, and the individual team members often don’t have working experience with the modules
and codes being developed by the other module sub-teams, the software engineering challenges are much greater. Parallel development also increases the relative risks. If the development of a module fails, a new effort has to be started. If one out of four initial module development efforts fails, the total development time is increased by 100%, compared with only a twenty-five percent increase for serial development.
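A quick worked check of that arithmetic, assuming four modules of five years each and one failed module redone from scratch:

\[
\text{Serial: } 4 \times 5 + 5 = 25 \text{ years} = 20 \text{ years} \times 1.25 \;(+25\%), \qquad
\text{Parallel: } 5 + 5 = 10 \text{ years} = 5 \text{ years} \times 2 \;(+100\%).
\]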
Figure 4. Parallel project development workflow (the original figure notes that requirements from sponsors and users are modified as program needs dictate, and shows a case in which package C failed to be delivered in working form).
Task Categories

Code development and production runs involve many different types of activities, and in general each requires different tools and methods. We were able to define four broad categories of tasks that require different development and production tools. The potential suppliers for these tools and methods include platform vendors, commercial third party vendors, academic institutions, and open source developers.

I. Code Development computing environment: This includes the computer operating system (e.g. Linux, AIX™, Tru64™, etc.), text editors, interactive development environments (e.g. Eclipse), languages and compilers (Fortran, C, C++, Java, etc.) including language enhancements for parallel computers (Co-array Fortran, UPC, HPF, OpenMP, Pthreads, etc.), parallel communication libraries (e.g. MPI), symbolic mathematics and engineering packages with a high level of abstraction (Mathematica™, Maple™, Matlab™, etc.), interpreted and compiled scripting languages (Perl, Python, etc.), debuggers (e.g. TotalView™), syntax checkers, static and dynamic analysis tools, parallel file systems, linkers and build tools (e.g. MAKE), job schedulers (e.g. LSF™), job monitoring tools, performance analysis tools (e.g. Vampir™, Tau, Open SpeedShop, etc.), and so on. This software can either be supplied by the platform vendor or by third parties. For instance, AIX™ is supplied by IBM™; TotalView Technologies™ markets the debugger TotalView™.
II. Production Run computing environment: This includes running the code and collecting and analyzing the results. Many of the tools of the code development environment are required (operating system, job scheduler, etc.). In addition there are specific tasks that involve problem setup (e.g. generating a mesh, decomposing the problem domain for parallel runs, etc.), checkpoint restart capability, recovery from faults and component failures (fault tolerance), monitoring the progress of a run, storing the results of the run, and analyzing the results (visualization, data analysis, etc.). Some of this software is supplied by the platform vendor and some by third parties. CEI™, for instance, markets EnSight™, a massively parallel 3D visualization tool. Research Systems™ markets IDL™, a data analysis tool. A key task is verification and validation, which requires tools for comparing code results with test problem results, experimental data and results from other codes.

III. Software engineering and software project management: These tasks involve organizing, managing and monitoring the code development process. Tools that would help with this task include configuration management tools (e.g. CVS, Perforce™, Razor™, etc.), code design and code architecture tools (e.g. UML™), although there are few examples of code design tools being used for HPC applications, documentation tools (word processors, web page design and development tools, etc.), software quality assurance tools, project design tools, and project management tools (Microsoft Project™, Primavera™, etc.). Most of these are supplied by commercial third party vendors. Development of code collaboration tools for multi-institutional code development teams will also be important in the future (probably a third party task).
IV. Computational algorithms and libraries: These tasks involve the development and support of computational algorithms and libraries that are incorporated into working code. These include computational mathematics libraries (e.g. PETSc, NAG™, HYPRE, and Trilinos), physical data libraries, etc. These are supplied by computer platform vendors, commercial vendors, academic and national laboratory institutions, and the open source community. For tasks that call for selection of an approach or method, the expectation is that the vendor will provide options and some guidance (documentation and consultation) on which approach or method is most appropriate for a set of specific requirements. In general a formal tool for making the selection is not required. For each step, the set of software that forms the development and run environment software infrastructure is listed under each major stage.
Development and Production Workflows

The development and production workflow for a typical CSE project is highly iterative and exploratory. Each stage of software development involves many steps that are closely linked. If the steps can be completed successfully, the work proceeds to the next step (Figure 5). For most realistic cases, multiple issues arise at each step and resolution of the issues often requires iteration with prior steps. The detailed architecture of the code evolves as the code is developed and issues are discovered and resolved. The degree to which each step becomes a formal process depends on the scale of the project. A small project involving only one or two people need not devote a lot of time to each process. Nonetheless, even small projects will go through almost all of the steps defined below. It is thus worthwhile for almost all projects to go through the checklist to ensure that they don't miss a step which would be simple to address early in the project, but difficult much later in the project. Throughout the document we define stakeholders as everyone who has a stake in the project, including the sponsors, the users and customers, the project team, the project and institutional management, the groups who provide the computer and software infrastructure, and sub-contractors. Sub-contractors include everyone who develops and supplies crucial modules and software components for the project, but who are not part of the project team and not under the direct control of the project management.
I. Formulate Questions, Issues and General Approach

The time scale for this phase is generally 3 months to a year. The first step involves assessing the state of the science and engineering, its potential for solving the problem of interest, and the development of a roadmap and high-level plan for the project. A key element is the assessment of prior and existing methods for solving this problem and their strengths and weaknesses. Prior and existing computational tools provide highly useful prototypes for the proposed project since they provide examples of the utility of successful computational tools in the subject area, the improvements needed in the domain science and engineering, the methods and algorithms that have been successful, and the strengths and weaknesses of those methods. These help identify potential sponsors, users, stakeholders and domain experts. For the science community, this phase would result in a proposal for submission to a funding agency (e.g. NSF, DOE SC, etc.). This phase also would provide a document that will be essential for developing a customer base, getting additional support, and communicating the project goals, purpose, and plan to the stakeholders, including prospective project team members.
1. Identify the key issue to be addressed
-Such as: model the climate, predict the weather, simulate nuclear weapons, design an airplane, analyze signal data, simulate a battlefield, etc.; identify why it is important to address the issue; identify the benefits of successfully addressing the issue using the project that is being proposed; explain why the proposed project is an important, if not crucial, advance over existing methods; and identify the expertise needed to address the issue.
2. Identify the potential sponsors, customers and stakeholders.
-Collectively they form the community that will support, use and develop the code.
3. Gather initial requirements.
-Survey the potential sponsors, customers and stakeholders for their input for the proposed requirements. These are high level requirements, but they are needed to start planning. This not only is essential for developing a solid foundation for the project, but provides a good start for getting buy-in from all the stakeholders. A major "lesson learned" from case studies is that if a project doesn't meet the requirements of the stakeholders, it will fail. Requirements gathering is a continuous process and demands constant interaction with the stakeholders.
4. Identify the high level requirements and goals.
-State them in sufficient detail that the overall architecture of the code can be developed from them.
5. Identify the deliverables.
-Identify the calculational capability that the code will provide and the problems it will solve, e.g. a code that trained aeronautical engineers can use to design and assess the flight control systems for a military 1.5 Mach jet fighter.
6. Identify the science, general computational approaches, and mathematical models for meeting the requirements, including the major technical challenges.
-List the candidate domain science effects that the code needs to include to meet the goals: identify the sources of knowledge about these effects and the maturity of that knowledge; identify the candidate computational approaches (finite volume, finite element, discrete particles, etc.), including the mesh and data structures; identify the candidate solution methods for solving the model; assess the risks that these approaches will prove inadequate and identify alternative approaches to mitigate those risks; and select the parallel programming paradigm. It is especially important to identify the multi-scale and nonlinearity issues and candidate strategies for handling them.
7. Develop preliminary estimates for the project schedule, the staffing level and skill mix, and the computer resources required to develop the code, to apply it and to analyze the results. (III)
-These are necessary for defining the scope and the scale of the project. Without them, it will be difficult to evaluate the feasibility of the project or get support for it. These include development of high level work breakdown structures and tasks, project roadmaps and schedules, and project costs. Simple word processing and spreadsheet tools are probably adequate at this point. More formal project management tools are not needed until detailed planning and project progress monitoring become important, and maybe not even then. The first jet plane and the pyramids were not built using project management software.
8. Get initial buy-in and support of the potential sponsors, customers, stakeholders and prospective team members.
-Without buy-in and support of these stakeholders, the proposed project will not attract sufficient support to be feasible, and even if it does attract sufficient support to get started, it will not be ultimately successful.
9. Assess the high-level project risks and develop a preliminary risk mitigation and avoidance strategy.
-Identifying the high-level risks and mitigating them is essential for project success. The historical record for both the IT
industry and CSE shows that between 1/3 and 2/3 of large-scale code projects fail to meet their initial requirements within the planned schedule and resources4. All of the stakeholders need to understand the risks and their role in minimizing them. This stage involves knowledge of all four task categories, but needs detailed knowledge of task categories III and IV, software engineering and software project management, and computational algorithms and libraries. However, a note of caution is appropriate. Extensive use of software tools for project management is premature, and can be a serious distraction. Similarly, extensive assessment of algorithms and methods is also premature. A high-level plan and general code architecture is needed before detailed work can begin.
II. Develop the Computational and Project Management and Team Approaches for the Code Project
General software infrastructure tool requirements: project management (III), particularly configuration management (III); documentation (III); computational mathematics (IV); ...
The time scale for this phase is 3 months to a year. This is the major planning phase. While some small-scale projects may not need much planning, many code development projects ultimately reach cost levels that exceed $100M over the life of the project. In every other type of technical work, sponsoring institutions require detailed plans for how the work will be accomplished, goals met, and progress monitored. They have found that plans and monitoring of progress are essential for minimizing project risks and maximizing project success. CSE is no exception. Developing plans for CSE projects is challenging. The plans must incorporate sufficiently detailed information on the project tasks, schedule and estimated costs for the project to be monitored and judged by the project sponsors and stakeholders. At the same time, the plans must preserve sufficient flexibility and agility that the project can successfully "discover, research, develop and invent" the domain science and solution algorithms needed by the project. This is also the time to do a lot of prototyping and testing of candidate modules and algorithms and to explore the issues of integrating modules for different effects, particularly modules for effects that have time and distance scales that differ by many orders of magnitude.
1. Define detailed goals and requirements. -Base these on the results of the initial proposal developed as part of the Formulate Questions, Issues, and General Approach stage; gather further requirements from the stakeholders. Make detailed studies of prior projects in the relevant science and engineering domain to assess what's needed and what works, and what's not needed and what doesn't work.
2. Get long-term commitments from the sponsor for the resources necessary for the project to proceed. -CSE projects usually have a long life cycle. It is important to get commitments that allow sufficient time for the project to achieve enough initial capability to prove its worth. A realistic schedule estimate is crucial. Ed Yourdon, a well-known IT industry software engineer, wrote that overly optimistic schedules have killed more projects than any other cause5. The support needs to be continuous, because major disruptions in funding support can be fatal. While it takes years to build up a good team, it takes only minutes to destroy one.
3. Seek input from customers, sponsors and stakeholders as part of defining the detailed goals and requirements.
4. Develop candidate high-level software engineering strategy and practices.
a. Specify the software engineering and software project management requirements (e.g. configuration management, task monitoring and reporting, etc.).
b. Identify candidate software engineering practices (e.g. code development method, code review, code testing, languages, data structures, algorithms, code style guidelines, methods for configuration management, etc.). (III)
c. Develop the initial software project management strategy in depth6, including how tasks will be monitored and progress will be measured and reported.
5. Design the project. (I, II, III, IV) -This includes developing a work breakdown structure (task list), a draft schedule and a resource estimate for the major project elements.
a. Develop a verification and validation strategy. -Identify the validation needed and how and where the validation experiments will be done. Develop plans for making quantitative comparisons of code results with verification problem results, and with benchmarks from other codes.
b. Select candidate numerical algorithms and programming model(s), especially parallel programming models, for the main code and for the modules.
c. Develop a strategy to provide the capability to link packages, modules and libraries written in many different languages (Fortran, C, C++, Java, Perl, Python, etc.).
d. Specify the mesh and data structures, including how the meshes and grids will be generated.
e. Design the code and code architecture. -The design must emphasize: sufficient flexibility and agility to accommodate the need for many changes; transparency; potential for evolutionary and continual growth; modularity; ease of maintenance; ease of use; portability; and code performance, both for single-processor and parallel systems.
f. Specify the necessary components, modules, and libraries. -Specify the dependencies of each component, module and library.
g. Specify the dataflow paths.
h. Develop initial interface requirements for components and modules.
i. Identify the expected computing environment. -This should be done to the greatest extent possible, even when the expected computing environment hasn't been designed or built. A target environment needs to be specified.
j. Develop the initial approach to performance optimization. -Performance optimization should be a goal of the code architecture and design. It's much easier to design for performance than to retrofit a code to improve the performance, especially parallel computer performance.
k. Specify the approach for analysis and assessment of the computational results, including data storage and data handling requirements.
6. Begin prototype module, algorithm and software infrastructure testing. (I, II, III, IV) -Identify candidate prototype codes, modules, frameworks, physical datasets, algorithms, etc., and begin testing them to determine their capability and suitability for the project. Use the team as they are recruited to carry out these tasks. If a module or method looks promising, do some prototype exploration and testing to identify its strengths and weaknesses. Existing codes that handle similar problems can provide a lot of useful knowledge and perspective.
7. Make a detailed assessment of the project risks and a detailed risk avoidance and mitigation plan. -Any project worth doing has risks. Use the detailed project plan to assess the risks and then develop a strategy for minimizing and mitigating them. Re-evaluate the project plan in light of the risk assessment and make the changes necessary to further minimize the risks. Identify the key role of each individual stakeholder in minimizing each risk. Handling risks is essential for success.
8. Recruit the rest of the team. -Try to keep a few of the planning team on the project to provide continuity and to reduce the need for the final team to re-plan the project. Select a project manager. The essential requirements for the project manager are: extensive domain knowledge and experience; computer and software engineering knowledge and experience; leadership and people skills; and project management skills and experience. All are essential. Build up the team carefully, and emphasize the need for the team to work as a cohesive unit. The skill sets that are needed include: domain knowledge and experience; software engineering; computational mathematics and algorithms; scientific programming; and documentation and writing. Teams develop software, not processes or plans or management.
III. Develop the code
General software infrastructure tool requirements and best practices include: ongoing documentation of the scientific model, equations, design, code and components (III); configuration management (III); project management (III); component design (III); compilers (I); scripts (I); code driver (I); linkers/loaders (I); syntax and static analyzers (I); debuggers (I); Verification & Validation (V&V) tools (II); ...
Figure 6 Main code development strategy (the schematic shows the main code together with its control package and visualization strategy).
This phase includes the development of the main code (Figure 6), the runtime controller, the individual modules (Figure 7), the integration of the individual modules (Figure 7), physical databases, problem setup capability, and data analysis and assessment capability. It generally takes 5 to 10 years for the development of the initial capability of such a
project. The steps are summarized below. As the project evolves, the software project management plan will need to be kept current. Risk management is a key issue: if an approach does not work, then alternatives need to be developed and deployed. All of the development should be under strong configuration management. The development of each module should follow a clearly documented plan that describes the domain science, equations, and computational approach. The final module should be thoroughly documented; this is essential for future maintenance and improvements.
1. Develop the code driver and control package. (II) -Develop a strategy for controlling the execution of the code and implement it. This will also provide a framework for testing and integrating individual modules.
2. Develop the problem set-up capability. (II) -Problem set-up, specifically generating computational meshes for problems of interest, is usually a major challenge7. It sometimes takes six months to a year to generate a mesh for very complex problems.
3. Develop detailed definitions for the interfaces between the individual code modules and between the modules and the main code.
4. Develop the main code (Figure 6).
a. Write and edit the main code. (I)
b. Write and edit dummy modules for the main code.
c. Conduct static analysis of links, syntax, data connections, etc. (I)
d. Compile the main code. (I)
e. Link, load and build. (I)
f. Link packages with libraries.
g. Debug the main code, including issues associated with parallel message passing, race conditions, etc. (I)
h. Integrate modules with the main code as they are developed.
5. Develop individual components and modules (Figures 7 & 8). -The development and testing of individual components and modules is a sub-set of the development and testing of the whole code.
a. Develop a detailed design for the component.
b. Identify candidate computational algorithms for the module. (IV)
c. Design the interface between the component and other parts of the code.
d. Design and develop a stand-alone component driver. (I)
e. Develop a V&V strategy for the component, including defining test problems. (II)
f. Make quantitative comparisons of code results with experimental data, test problem results and other codes. (II)
g. Write and edit the modules.
h. Conduct static analyses of links, syntax, data connections, etc. (I)
i. Develop and define the code review process. (III)
j. Compile the module. (I)
k. Link, load and build the module. (I)
l. Link component packages with libraries.
m. Debug the module or component. (I)
n. Integrate the module into the main code.
o. Ensure compatibility with the main code and other components (interface, V&V, integration strategy, checkpoint restart, output, etc.). (I, II, III, IV)
6. Optimize code performance (first through design, then through analysis of real performance: maximize cache performance, minimize message passing conflicts and delays, level loads, etc.). Assess and test the ability of the computer hardware and operating system to provide performance information, and the tools to accumulate, extract, and analyze it. (I, IV)
7. Ensure hardware reliability and arithmetic accuracy. (I) -Build checks and tests into the code to the greatest extent possible to verify that the computer hardware and operating system are working properly (i.e. the arithmetic, message passing, etc., are correct).
8. Ensure checkpoint restart capability. (I) -Develop and implement a checkpoint restart capability. Store the least amount of information needed to restart the problem and minimize the fraction of the production run time and disk space devoted to storing restart files, while retaining the ability to restart from several successively older files in case the last one or two restart files were corrupted. (A minimal sketch of such a restart-file rotation follows at the end of this list.)
9. Develop and implement the capability to interface with candidate job scheduling algorithms. (I)
10. Analyze and assess the results (including visualization and compute-intensive analysis of results). (II)
a. Develop and implement data storage algorithms and data formats to store results.
b. Identify data analysis and assessment tools, and develop specialized tools as necessary.
-Utilize commercial and externally developed tools to the greatest extent possible. Development of these tools by the project can divert arbitrarily large resources from the main project, and good tools are often available from external sources, especially data analysis and visualization tools.
c. Analyze, assess and visualize the output data at three levels:
i. Office desktop. -The most effective place for development and production use. Identify tools and define needs, procure data analysis tools (e.g. IDL, ...) and high-resolution visualization hardware and software, and install and test them.
ii. Small conference room for two or more team members. -Less useful than the desktop, but important for team building, code review, problem review, setup and debugging. Identify tools and define needs, procure visualization hardware and software, install and test them, and develop ways for the team to work together to review results.
iii. Large theater for presentation of results to large audiences (15 or more audience members). -Mainly useful for presentation of results and conclusions to external bodies and management; identification of tools and definition of needs, procurement of visualization hardware and software, installation and testing of that capability, and development of ways for the team to work together to review results.
d. Identify required desktop and cluster data analysis tools, procure and test them, and iterate with code development to provide the most useful data. (II)
11. Organize and monitor task progress and completion. (III)
12. For multi-institutional collaborations, develop real-time conferencing in offices with audio and video, for 2 to 10 people simultaneously. (III)
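As a concrete illustration of the restart-file guidance in step 8 above, the following is a minimal, hypothetical sketch in Python; the file naming, the pickle format and the number of retained files are illustrative assumptions rather than a prescription from the case studies.

    import os
    import pickle

    def write_restart(state, step, directory="restarts", keep=3):
        """Write a compact restart file and retain only the last few,
        so that a corrupted latest file can be skipped at restart time."""
        os.makedirs(directory, exist_ok=True)
        path = os.path.join(directory, "restart_%08d.pkl" % step)
        with open(path, "wb") as f:
            pickle.dump({"step": step, "state": state}, f)
        # Keep only the 'keep' most recent restart files.
        files = sorted(f for f in os.listdir(directory) if f.startswith("restart_"))
        for old in files[:-keep]:
            os.remove(os.path.join(directory, old))

A production code would store the minimal model state in its native binary or self-describing format, but the rotation logic is the same.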
Figure 7 Component Development Strategy
Figure 8 Component Integration Strategy
IV. Perform Verification & Validation (V&V)
General software infrastructure tool requirements and best practices: data analysis and visualization tools (II), tools for quantitative comparison of code results with test problem results, other code results and experimental data (II).
Verification provides assurance that the models and equations in the code and the solution algorithms are mathematically correct, i.e. that the computed answers are the correct solutions of the model equations. Validation provides assurance that the models in the code are consistent with the laws of nature, and are adequate to simulate the properties of interest8. V&V is an ongoing process that lasts the life of the code. However, it is particularly intense during
the development of the code and its early adoption by the user community. Verification is accomplished with tests that show that the code can reproduce known answers and demonstrate the preservation of known symmetries and other predictable behavior. Validation is accomplished by comparing the code results for an experiment or observation with real data taken from the experiment or observation. A code must first be verified, then validated. Without prior verification, agreement between experimental validation data and the code results can only be viewed as fortuitous. Without a successful, documented verification and validation program, there is no reason for any of the project stakeholders to be confident that the code results are accurate.
1. Verification
a. Define a set of verification tests and the verification methodology. (II)
b. Set up regression test suites and run them. (II) (A minimal sketch of a verification test of this kind follows at the end of this section.)
c. Define unit tests and execute them (including "design by contract" approaches). (I)
d. Identify conserved variables and monitor them. (II)
e. Define a set of benchmark calculations that can be run with similar codes, and compare them with the project code results for a set of identical problems. (II)
2. Validation
a. Define useful validation experiments and identify places where historical data can be found.
b. Design validation experiments and the process for gathering historical data.
c. Design a series of validation projects that involve coordinated code predictions, validation experiment design based on the predictions, validation experiments, and analysis of the validation experimental results using the code. Include strategies for using validation experiments and observations to quantify the uncertainties in the code predictions and the sources of those uncertainties.
d. Work with the experimentalists to carry out the validation experiments and gather the data, and/or gather historical data for validation.
e. Make quantitative comparisons between the code results and the experimental validation results for validation experiments and experimental observations. (II)
f. Document the results and conclusions of the validation projects so that the project sponsors and all the stakeholders will have confidence in the code predictions and analysis, and an understanding of the range of validity of the code predictions.
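As a minimal sketch of the kind of verification test called for in item 1b above, the following Python fragment checks a toy integrator against a known analytic answer and confirms that the error shrinks as the resolution is refined; the decay problem, the tolerances and the function names are illustrative assumptions, not part of any particular project.

    import numpy as np

    def integrate_decay(y0, rate, t_final, steps):
        """Forward-Euler integration of dy/dt = -rate * y (the 'code under test')."""
        dt = t_final / steps
        y = y0
        for _ in range(steps):
            y = y + dt * (-rate * y)
        return y

    def test_decay_against_analytic_solution():
        """The numerical answer must approach y0 * exp(-rate * t) as dt shrinks."""
        y0, rate, t_final = 1.0, 0.5, 2.0
        exact = y0 * np.exp(-rate * t_final)
        errors = [abs(integrate_decay(y0, rate, t_final, n) - exact) for n in (100, 200, 400)]
        assert errors[2] < errors[1] < errors[0]   # error decreases with resolution
        assert errors[2] < 1e-2                    # and is acceptably small

    if __name__ == "__main__":
        test_decay_against_analytic_solution()
        print("verification test passed")

In a real project the same pattern is applied to analytic, manufactured and benchmark solutions, and the suite is run automatically after every change.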
V. Execute production runs
General software infrastructure tool requirements and best practices: data analysis tools and visualization tools (II), documentation (III), job scheduling (I).
Running a large-scale simulation on a large supercomputer represents a significant investment on the part of the sponsor. Large codes can cost $5M to $10M per year to develop, maintain and run, and more if the costs of validation experiments are included. Large computers are expensive resources. Computer time now costs approximately $1/cpu-hour (2006). A large project will need 5M cpu-hours or more. The total cost is thus in the range of $10M to $20M per year or more, and hundreds of millions over the life of the project. It is analogous to getting run time on a large-scale experiment (e.g. getting beam time at an accelerator, conducting experiments, collecting data, analyzing data). Sponsoring institutions and large-scale experimental facilities and teams have learned that research and design activities of this scale require organization and planning to be successful. The steps involved in a production run are listed below (Figure 9).
1. Production Run Preparation:
a. Define the goals of the series of computer production runs, the computer resources necessary for success, and the validation data necessary for obtaining confidence in the accuracy of the code for the problem of interest.
b. Analyze the risks associated with the production run and develop a risk minimization and mitigation strategy.
c. Identify all of the resources needed for the production run, including the computer time, the number of processors and memory, the data storage requirements, the preferred computer platform, the auxiliary software and software infrastructure, and the support from the development team.
d. Develop a checkpoint restart strategy for the run and test it.
e. Plan what data will be produced, what data needs to be analyzed during the run and then discarded, what data needs to be archived for analysis after the run, and how the archival data will be stored and analyzed.
f. Secure a commitment for the computer time, number of processors and memory, and data storage necessary to successfully complete the production runs.
g. Plan the run schedule to make best use of available run time. To the greatest extent possible, develop strategies for a series of runs where the goal of each run depends on the results of the prior run.
h. Set up the problem (generate meshes, develop input data and verify its correctness, document input files and code version, ...). (II, III) -This is often a long-lead-time item. It may take months to set up a complicated run, and the run schedule should reflect this, with allowance for potential schedule delays if setup takes longer than scheduled.
i. Ensure that the physical data libraries are valid. (I)
j. Schedule runs. -This is especially challenging for jobs that use a significant fraction of the platform (>20%). (I)
k. Execute smaller-scale runs on smaller computers to identify the runs that require a large computer. Use the small-scale runs to test the problem setup and identify potential problems with the larger-scale run. Follow these with short runs on the large platform to identify further difficulties.
Figure 9 Schematic for a production run
2. Execute the Production Run
a. Capture and archive all of the conditions for the run and the history of the run with the results of the run. (I, II) -Include all the information about the computer and operating system, the version of the code, the input files, etc. Without this information it will rapidly become impossible to judge the validity of the code results, because the results depend on all these factors. (A minimal sketch of such a run-conditions record follows at the end of this section.)
b. Execute and monitor the runs. (I)
c. Monitor and optimize performance during the run. (I, II)
d. Ensure hardware reliability and arithmetical accuracy. (I)
e. Store the output data (input, output, code version and conditions) for future analysis. (II)
3. Conduct preliminary analysis of output data during the run to monitor the progress of the run and to ensure that the run is getting the required results. (I, II)
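The run-conditions record mentioned in item 2a above can be as simple as a small metadata file written alongside the output. The sketch below is a hypothetical Python example; the JSON layout, the use of git for the code version and the field names are assumptions for illustration only.

    import hashlib
    import json
    import platform
    import subprocess
    import time

    def capture_run_conditions(input_path, output_path="run_conditions.json"):
        """Record the conditions of a production run (code version, input
        checksum, machine and time) so that archived results stay interpretable."""
        with open(input_path, "rb") as f:
            input_sha256 = hashlib.sha256(f.read()).hexdigest()
        try:
            code_version = subprocess.check_output(
                ["git", "rev-parse", "HEAD"], text=True).strip()
        except Exception:
            code_version = "unknown"
        record = {
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
            "hostname": platform.node(),
            "platform": platform.platform(),
            "code_version": code_version,
            "input_file": input_path,
            "input_sha256": input_sha256,
        }
        with open(output_path, "w") as f:
            json.dump(record, f, indent=2)
        return record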
VI. Analyze computational results from production runs
Typical production runs from a large-scale computational project can produce terabytes or more of data. Analysis of the computational results is not only essential but can also be challenging.
1. Begin analysis during the run to optimize the run, and store and visualize/analyze results. (II)
2. Analyze and assess the results using visualization and data analysis tools. (II)
3. Document the results of the analysis. (II, III)
4. Develop hypotheses based on the analysis.
5. Test hypotheses with further runs. (II)
6. Ensure that the code is validated for the problem regime of interest. (II)
7. Decide what data needs to be archived and how long it needs to be archived, and archive it. Include all of the relevant conditions of the run with the archived data, possibly including a binary copy and listing of the code and the input file used for the run.
VII. Make Decisions
The whole purpose of the computational system of computers, codes and results analysis is to provide information as the basis of decisions for scientific discovery, engineering design, prediction of operational conditions, etc. Often this is an iterative process. Analysis of the initial results will suggest several conclusions. Further production runs, and possibly further code development, will be needed to confirm those conclusions and suggest modifications. Finally, the decisions and the basis of the decisions need to be documented.
1. Make draft decisions based on the initial results and the analysis of the initial results. Identify further runs and work needed to confirm or modify those decisions.
2. Develop a plan to reduce uncertainties and resolve open questions.
3. Ensure that the code is validated in the regimes that are important for the decisions. (II, III)
4. Document and present the supporting analysis for the final decisions. (III)
5. Identify further questions and issues that should be pursued with further work in the future.
Summary
On the basis of many case studies of successful and unsuccessful Computational Science and Engineering projects we have identified the steps that such projects follow from initial concept to final conclusions and decisions based on the results of the project. While none of the projects explicitly followed all these steps in an orderly fashion, the successful ones followed these steps either explicitly or implicitly. Key points emerged from the case studies.
1. The level of resources involved in computational science and engineering is becoming large, and the potential impact of the decisions is large. A higher degree of organization and formality is therefore becoming important, both for the technical success of the projects and for ensuring that the decisions reached as a result of the projects are correct and accepted by all the stakeholders, especially the sponsors.
2. Development of large-scale projects is a highly iterative enterprise and is, in many ways, a research and development activity. While the project must be organized and run like a project, highly prescriptive formal designs and processes in the usual Information Technology sense and rigid software management processes are not practical.
3. Development and application of large-scale computational science and engineering projects is challenging. Existing development and application support tools are relatively immature compared to the scale of the challenge. Thus there are many opportunities for software and hardware vendors to develop support tools that can reduce the challenge and facilitate code development and production.
Acknowledgments
The authors are grateful for suggestions and support from Jeffrey Carver, Susan Halverstein, Andrew Mark, Dolores Shaffer, and Susan Squires; Jeremy Kepner and Robert Lucas; the members of the various code projects who allowed us to learn from their experiences; Frederick Johnson, Department of Energy Office of Science; Cray Henry, Department of Defense High Performance Computing Modernization Program; and Robert Graybill, the leader of the DARPA HPCS program.
References
1. D. E. Post and R. P. Kendall, International Journal of High Performance Computing Applications 18, 399 (2004); D. E. Post, R. P. Kendall, and E. M. Whitney, Report No. LA-UR-05-1593, 2005.
2. M. J. Christensen and R. H. Thayer, The Project Manager's Guide to Software Engineering's Best Practices (IEEE Computer Society, Los Alamitos, CA, 2001).
3. NASA, http://www.mscsoftware.com/products/nastran.cfm.
4. D. E. Post, R. P. Kendall, and R. F. Lucas, in Advances in Computers, edited by M. V. Zelkowitz (Elsevier, Amsterdam, 2006), Vol. 66, p. 239; Kweku Ewusi-Mensah, Software Development Failures: Anatomy of Abandoned Projects (MIT Press, Cambridge, Massachusetts, 2003).
5. Ed Yourdon, Death March (Prentice Hall PTR, Upper Saddle River, NJ, 1997).
6. Tom DeMarco, The Deadline (Dorset House Publishing, New York, New York, 1997); Rob Thomsett, Radical Project Management (Prentice Hall, Upper Saddle River, NJ, 2002).
7. Joe F. Thompson, Bharat K. Soni, and Nigel P. Weatherill, Handbook of Grid Generation (CRC Press, Boca Raton, 1998).
8. William Oberkampf and Timothy Trucano, Progress in Aerospace Studies 38, 209 (2002); Patrick J. Roache, Verification and Validation in Computational Science and Engineering (Hermosa Publishers, Albuquerque, 1998).
9. D. E. Post and L. G. Votta, Physics Today 58 (1), 35 (2005).
PROGRESS WITH THE GEMS PROJECT
ANTHONY HOLLINGSWORTH
European Centre for Medium-Range Weather Forecasts, Shinfield Park, Reading, Berkshire, RG2 9AX, U.K. Email: [email protected]
The GEMS Project
The GEMS project (Global Earth-system Modelling using Space and in-situ data) is an Integrated Project funded under the EU's initiative for Global Monitoring for Environment. The aim of the project is to extend the modelling, forecasting and data assimilation capabilities used in numerical prediction to problems of atmospheric composition. This will deliver improved services and products in near-real time (e.g. global air quality forecasts to provide boundary conditions for more detailed regional air-quality forecasts). In addition, the operational analyses and retrospective reanalyses will support treaty assessments (e.g. the Kyoto protocol on greenhouse gases and the Montreal protocol on the ozone layer), and the joint use of satellite and in-situ data will enable sources, sinks and transports of atmospheric constituents to be estimated. The project involves about thirty institutes in fourteen European countries and has an EU contribution of 12.5 million Euro. It will run for four years from spring 2005 to spring 2009, with coordination carried out by ECMWF. The objectives of GEMS fall into two categories.
- The global elements of GEMS are to produce by 2009 a validated, comprehensive, and operational global data assimilation/forecast system for atmospheric composition and dynamics.
- The regional elements in GEMS are to assess the value of information on long-range trans-boundary air pollution for operational air-quality forecasts in Europe.
The core operational products of GEMS will be gridded data assimilation and forecast fields, with high spatial and temporal resolution, of key atmospheric trace constituents. These will include greenhouse gases (initially including CO2, and progressively adding CH4, N2O, plus SF6 and radon to check advection accuracy), reactive gases (initially including O3, NO2, SO2, CO and HCHO, and gradually widening the suite of species) and aerosols (initially a 15-parameter representation, later about 30). The GEMS Annual Assembly convened at ECMWF on 6-10 February 2006 to review progress since the start of the project, and to make plans for the coming 18 months. The Assembly was organised by Horst Böttger (ECMWF), Olivier Boucher (Met Office), Guy Brasseur (Max-Planck-Institut für Meteorologie), Henk Eskes (KNMI), Anthony Hollingsworth (ECMWF), Vincent-Henri Peuch (Météo-France), Peter Rayner (LSCE/IPSL) and Adrian Simmons (ECMWF). Based on discussions at the Assembly, we will now describe some of the developments which have occurred in the first year of GEMS and outline future plans.
Progress in the first year of GEMS
Progress on data issues
Early in 2005 the Canadian Meteorological Service circulated for comment a draft proposal on extension of the BUFR format to encompass atmospheric chemistry observations. A revised proposal was discussed by the relevant WMO technical committee in December 2005. After further revision it is expected to
be adopted as the WMO format for real-time international exchange of air chemistry measurements. Within the GEMS project, considerable work has been done to reconcile the differing data format requirements of the operational partners, who prefer BUFR and GRIB formats, and the research partners, who prefer netCDF. A means to accommodate the needs of both communities is being developed. The GEMS project has had considerable help from major space agencies, including ESA, EUMETSAT, NASA and NESDIS, in the acquisition of the very large amounts of satellite observations needed by the project. As the data are acquired the observations are reformatted in BUFR and archived at ECMWF.
Progress on global modelling and data assimilation
Substantial efforts have been devoted to extending the modelling needs of the project. ECMWF's Integrated Forecast System (IFS) has introduced the generic capability to advect many (~100) trace species by the model's dynamics, and to transport them in the parametrisations, such as the convection parametrisation. In-line parametrisations have been implemented for greenhouse gases and aerosols, with surface fluxes specified climatologically (CO2) or dynamically (aerosols). Year-long test runs with specified meteorology and free-running chemistry have provided valuable checks on the models (see Figure 1). For reactive gases it is essential that the assimilating model has the benefit of an advanced chemistry scheme. Since it is believed premature to introduce a full-blown chemistry representation into the IFS, the IFS model has been coupled to the three participating Chemistry Transport Models (CTMs). At the time of writing the coupling has been achieved technically for two of the three CTMs,
so attention is moving from technical issues of the coupling to assessing the scientific issues raised by the possible mis-matches or dislocations introduced by the coupling.
[Figure 1 consists of two panels showing the seasonal cycle of surface CO2 for the Northern Hemisphere (top) and the Southern Hemisphere (bottom), with monthly values from January to December and a curve labelled "ECMWF Forecast Model".]
Figure 1 Comparisons between NOAA/CMDL surface flask measurements of CO2 and a year-long run of the ECMWF model where the meteorology is corrected every 12 hours and the CO2 is free-running, with specified climatological surface fluxes. The figure shows good qualitative agreement for the seasonal cycle. (Courtesy R. Engelen and S. Serrar.)
A key requirement of the GEMS modelling and assimilation capability is an accurate representation of the stratospheric Brewer-Dobson circulation, which is involved in the control of the stratospheric distribution of many constituents, and in key aspects of tropospheric-stratospheric exchange. There is evidence that there have been important improvements in this regard since the completion of the ERA-40 reanalyses in 2002. Consequently the meteorological components of the preliminary GEMS system have been used to reanalyse 2003-2004. Preliminary results are encouraging. The IFS's 4D-Var system has been adapted to provide three separate data assimilation systems for greenhouse gases, reactive gases and aerosols. Depending on which of the domains is addressed, the assimilation systems will use radiances via fast forward models and their adjoints (greenhouse gases initially, aerosols later), retrieved profiles (aerosols, reactive gases) or total column amounts. The specification of natural and anthropogenic emissions is a key issue for both the global and regional elements of the GEMS project. Agreement has been reached on the use by GEMS of the global anthropogenic emissions calculated by the RETRO project of the Fifth Framework Programme. Emissions by wildfires and biomass burning are a key issue for the GEMS project. A proposed approach to the issue was developed recently through discussions between the HALO, GEMS, GEOLAND and ACCENT projects. Efforts will be made to include the issue in the Work Programme of the Seventh Framework Programme.
Progress on regional modelling and assimilation
The GEMS regional models will consider a common European domain (35°N-70°N; 15°W-35°E), or a larger area, for ensemble activities and inter-
comparisons. Vertical and horizontal resolutions depend upon the model: many will start with 20-50 km resolution with a target resolution of 5-20 km. Nested domains at higher resolution will also be developed. The main goal of the regional activity is to enhance and improve Regional Air Quality (RAQ) forecasts, hindcasts and analyses across Europe through the use of information on long-range trans-boundary air pollution. All ten GEMS regional models
(Figure 2) have demonstrated good progress in building up their GEMS-RAQ configuration. An area of common concern is surface emissions. The EMEP (Steering Body to the Cooperative Programme for Monitoring and Evaluation of the Long-range Transmission of Air Pollutants in Europe) inventory of anthropogenic emissions is the up-to-date reference in Europe, and is generally of good quality. For GEMS, the main limitation of the EMEP dataset is its resolution of about 50 km, which does not meet all RAQ requirements in terms of a good temporal resolution (hourly, weekly, monthly and annual) over the GEMS European domain and at a 5 km resolution. The creation of a dataset of European emissions, shared and used by a large number of groups involved in regional air-quality forecasting, would represent an important step. The GEMS Management Board will arrange the preparation of such a dataset through a sub-contract. An important goal for GEMS is to provide coordinated access to air quality verification data across Europe for near-real time operations and to exploit the hindcasts. Consequently, efforts will be made to agree a Memorandum of Understanding on data and forecast exchange, for purely scientific and technical objectives, with air-quality agencies. There is also a need for preparation of similar agreements for the post-2009 phase involving institutions such as regional and national agencies and the European Environment Agency (EEA).
[Figure 2 is a schematic of the data flows between the GEMS central site (ECMWF) and partner sites across Europe. The regional modelling partners shown are METO-UK (United Kingdom), CNRS and INERIS (France), FMI (Finland), DMI (Denmark), NKUA (Greece), ISAC (Italy), met.no (Norway) and FRIUUK (Germany); some connections are marked as starting after month 12 and month 18 of the project.]
Figure 2 Illustration of the data flows between the central site and the GEMS regional modelling partners.
In preparation for pre-operational near-real-time daily forecasts, work is progressing on the definition of methodologies for meaningful evaluation and comparison of partner hindcasts and forecasts over the GEMS domain. These will include metrics for assessing forecasts of basic chemical species and metrics specific to user communities (e.g. air quality indices for human health and crop damage, and metrics for city-level forecasts). Plans are also in preparation for software development based on the 'VerifyMetPy' system developed at ECMWF, which will allow central verification and user-tailored metrics. One of the goals of GEMS is to assess the value of the GEMS data in epidemiological studies of the public health effects of long-range aerosols and reactive gases. Preliminary studies are being planned to identify the types of health effects that can be meaningfully studied using GEMS-RAQ data.
Next steps in the development of GEMS
Plans for research and development
Table 1 illustrates the main phases of the work of the production team, based on the plans of the global modelling and assimilation partners, and of the regional partners. After further validation in the course of 2006, three separate global reanalyses of the study period 2003-2004 will begin later in 2006, with separate assimilation systems for greenhouse gases, reactive gases and aerosols. With completion expected in mid-2007, the reanalyses will be subjected to elaborate validation and check-out before being exploited in a number of ways. The validation of the first reanalyses will lead to the preparation of a second, integrated reanalysis of the same period, scheduled to begin in late 2007. At the same time the integrated system will be the basis for the development of a pre-operational system, which will be designed to be ready for operational implementation in the first half of 2009.
Institutional arrangements needed for a transition to operations in 2009
Institutional arrangements are not yet in place for a transition of GEMS to operational status in 2009. Discussions with the EU are expected to begin in 2006 in the context of the preparation of an atmospheric service for implementation in 2009 as part of Global Monitoring for Environment and Security (GMES). The EU has already begun work on three GMES services for implementation in 2008, and it is expected that the atmospheric service preparation will follow a similar template. Issues to be considered include governance and definition of service level agreements for core services; for down-stream services consideration needs to be given to issues such as data policy and data access.
Table 1 Main phases of the work of the GEMS production team, based on the plans of the global modelling and assimilation partners, and of the regional partners.
Year 1 (May 2005 - Aug 2006): Build and validate three separate assimilation systems for greenhouse gases, reactive gases and aerosols. Acquire data; build web-site.
Year 2 (Aug 2006 - Aug 2007): Produce three different reanalyses for greenhouse gases, reactive gases and aerosols.
Year 2-2.5 (Aug 2007 - Jan 2008): Make reanalyses available for validation by all partners. Provide feedback to data providers. Merge the three assimilation systems into a unified system. Upgrade the models and algorithms based on experience.
Year 2.5-3.5 (Jan 2008 - Nov 2008): Build operational system and interfaces to partners. Produce unified reanalyses for greenhouse gases, reactive gases and aerosols.
Year 3.5-4 (Nov 2008 - May 2009): Carry out final pre-operational trials. Prepare documentation and scientific papers.
Satellite data provision in 2009-2019
The availability of adequate satellite data provision is a key issue in planning the first decade of operational GEMS activity. In terms of security and adequacy of satellite provision, the greenhouse gas project probably has the most secure provision, with operational advanced sounders (IASI in 2006, plus GOME-2 on METOP and CrIS on NPP in 2009) for upper-tropospheric measurements and the
research OCO and GOSAT missions from 2009 onwards. The least secure provision is probably that for air quality (lower-tropospheric chemistry), as no missions are planned beyond the demise of ENVISAT and AQUA. The satellite provision for aerosols and UTLS (upper troposphere-lower stratosphere) chemistry is comparable, with aerosols relying mainly on the VIIRS instrument on NPP and NPOESS and the UTLS chemistry relying on GOME-2 on METOP and OMPS on NPOESS (from 2012).
Final thoughts
The GEMS Assembly showed that significant progress has been made with the project since spring 2005. This is due to the high level of expertise and commitment amongst the partners, coupled with effective international collaboration between the various research groups and project teams. There is every reason to be confident that by May 2009 the GEMS project will deliver a new European operational system which can monitor the composition, dynamics and thermodynamics of the atmosphere and produce forecasts of greenhouse gases, reactive gases and aerosols. More information about the GEMS project can be found at: www.ecmwf.int/research/EU-projects/GEMS
VARIATIONAL KALMAN FILTERING ON PARALLEL COMPUTERS
H. AUVINEN, H. HAARIO AND T. KAURANNE
Lappeenranta University of Technology, Department of Mathematics, P.O. Box 20, FI-53851 Lappeenranta, Finland
E-mail: [email protected], [email protected], [email protected]
We analyze the analysis quality and computational complexity of a novel approximation to the Extended Kalman Filter (EKF), called the Variational Kalman Filter (VKF), on both serial and parallel computers. VKF replaces the most time-consuming part of the standard EKF algorithm, the transformation of the prediction error covariance matrix, with a dynamic low-rank approximation to it that is much cheaper to transform. In VKF, this low-rank approximation is generated from the search directions and gradients of successive 3D-Var assimilation cycles. Each search direction stays on for only as long as the innovation vector has a significant component in that direction. Search directions and gradients are kept in vector form and evolved with the tangent linear model, as dictated by the Kalman formulas. It turns out that the serial complexity of VKF is a small multiple of that of 4D-Var, and that its parallel complexity may be even lower than that of 4D-Var on realistic parallel supercomputers. Numerical tests with VKF applied to the nonlinear Lorenz'95 benchmark demonstrate that VKF performs as well as or better than full EKF. The Kalman smoother version of VKF, called the Variational Kalman Smoother (VKS), provides analysis quality even superior to that of EKF in retrospective analysis.
1. Benefits of Kalman Filtering
The general data assimilation (state estimation) problem for a nonlinear dynamical system with discrete observation steps t = 1, 2, 3, ..., n contains an evolution or prediction equation and an observation equation:

x_t = M_t(x_{t-1}) + E_t,    (1)
y_t = K_t(x_t) + e_t,    (2)

where M_t is a model of the evolution operator, x_t is the state of the system at time t and E_t is a vector-valued stochastic process representing model error. The second equation connects the state estimate x(t) to the measurements y(t) by way of an observation operator K_t, with an associated stochastic process of observation errors e_t. The optimal linear estimator for the data assimilation problem is the Extended Kalman Filter (EKF). Because EKF expects the model to be inaccurate, as shown by the error term in equation (1), it will compensate for model bias of similar magnitude, if there are observations available that do not have any bias in the same direction as the model bias. This sets EKF apart from strong-constraint four-dimensional variational data assimilation (4D-Var). EKF analyzes are discontinuous, but they can be smoothed out afterwards by a sweep of the Fixed-Lag Kalman Smoother (FLKS). The analysis error covariance matrix, an approximation to which is generated by all Kalman filtering methods, has many potential benefits. It can be used for assessing predictability. Its singular vectors can be used as initial perturbations for Ensemble Forecasting, if the time interval over which they have been computed is appropriate.
2. Feasible Approximations to Kalman Filtering and Parallel Computers
EKF requires that, in addition to evolving the model state with the nonlinear forecast model, all the columns of the analysis error covariance matrix be evolved back and forth in time, with the adjoint model and the tangent linear model in succession. This is prohibitively expensive for high-dimensional systems, such as weather models, and we must find feasible approximations to EKF. Standard strong-constraint 4D-Var13,19 is the special case of EKF where the model is perfect and the analysis error covariance matrix is static. In Reduced Rank Kalman Filtering (RRKF) the covariance matrix is constantly projected onto a fixed low-dimensional subspace and evolved only there. In Ensemble Kalman Filtering (EnKF)7, a set of random perturbations is evolved to form a statistical sample of the span of the analysis error covariance matrix.
All proposed computationally feasible approximations to Kalman filtering have displayed some drawbacks. Because 4D-Var assumes the model to be perfect, it suffers from model bias and does not produce any new estimate of the analysis error covariance matrix. Furthermore, it is difficult to parallelize beyond parallelizing the forecast model. Ensemble Kalman Filtering (EnKF) suffers from the slow convergence of random sampling, and often from in-breeding of the error covariance estimate. It is fully parallelizable, though, because all the randomly chosen initial conditions are independent. Reduced Rank Kalman Filtering (RRKF) appears not to yield much improvement to forecast skill in experiments12. This may be explained by the observation that the slow manifold of the atmosphere does not fix any linear subspace of the space of atmospheric states. Instead, the longer the assimilation window, the more potential minima the corresponding assimilation cost function develops, presenting a "fractal" landscape of possible future states of the atmosphere. Weak-constraint 4D-Var over a long assimilation window allows the analysis to deviate from the perfect-model assumption, efficiently rendering the analysis independent of the initial condition it was started from. Yet even weak-constraint 4D-Var appears to retain a dependency on either the model being close to perfect, or on the past assimilation trajectory so far. It is also an inherently serial algorithm.
3. Design Principles of the Variational Kalman Filter (VKF)
The Variational Kalman Filter was first introduced in an earlier paper by the authors. VKF - like EKF - behaves like a continuous data assimilation method. It is robust against model error and it produces a good estimate of the analysis error covariance matrix. VKF is also highly parallel, as will be demonstrated further on. In VKF, the prediction error covariance is evolved on a low-dimensional linear subspace that is allowed to evolve discontinuously, just like the state does in EKF and VKF alike. The rank of the subspace can vary, but it is kept below a given fixed bound. The inverse relationship between 3D-Var and 4D-Var Hessians and the corresponding analysis error covariance matrices18,10 is used to maintain a temporally local Krylov space approximation to the analysis error covariance matrix. Dominant singular vectors of the covariance matrix are retrieved from the minimization search directions and corresponding gradients of a Krylov space based method.
In a linear model, Krylov subspace methods, such as Conjugate Gradient (CG) or Lanczos methods, quickly produce good estimates of the leading singular vectors. In a nonlinear, but locally linearizable, system such as primitive equation models, quasi-Newton methods are designed to build up a Krylov space approximation to the Hessian. This is particularly true of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, which can be seen as a recursive application of the Conjugate Gradient method on a Krylov space17. In practice, Limited Memory BFGS methods (L-BFGS) have proven more successful than full BFGS in 4D-Var data assimilation. The nonlinear Conjugate Gradient method is an L-BFGS method with a memory of two past search directions. L-BFGS methods converge in a number of iterations that is independent of the resolution of the model. Typically only ten to twenty past search directions are retained. This may be an indication that analysis error is dominated by a small number of local structures - both temporally and in phase space - akin to spatio-temporal wavelets, that locally span the "slow manifold" of atmospheric motions. VKF has been constructed based on this assumption.
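To make the limited-memory idea concrete, here is a minimal sketch of the standard L-BFGS two-loop recursion, which applies an implicit inverse-Hessian approximation built from a handful of stored search-direction and gradient-difference pairs (s_i, y_i). This is a generic textbook version written in Python/NumPy, not code taken from the VKF implementation, and the scaling factor gamma is an illustrative choice.

    import numpy as np

    def lbfgs_apply_inverse_hessian(grad, s_list, y_list, gamma=1.0):
        """Two-loop recursion: return an approximation to H^{-1} @ grad built
        from the stored pairs (s_i, y_i); gamma scales the initial H0 = gamma*I."""
        q = grad.copy()
        rhos = [1.0 / float(y @ s) for s, y in zip(s_list, y_list)]
        alphas = []
        # First loop: newest pair to oldest.
        for s, y, rho in reversed(list(zip(s_list, y_list, rhos))):
            alpha = rho * float(s @ q)
            alphas.append(alpha)
            q = q - alpha * y
        r = gamma * q
        # Second loop: oldest pair to newest.
        for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
            beta = rho * float(y @ r)
            r = r + s * (alpha - beta)
        return r

Only ten to twenty pairs need to be stored, so the cost of one application is a handful of vector operations, independent of how the pairs were generated.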
4. The Structure of VKF Computations
VKF follows the overall structure of EKF, which will be briefly described below. It turns the matrix inversions in EKF into equivalent 3D-Var and 4D-Var minimization problems, and it solves these minimization problems with an L-BFGS solver. VKF then uses the BFGS Hessian update formula to update its approximation to the inverse of the analysis error covariance matrix - i.e. the Hessian. The approximation to the Hessian is kept at a fixed rank by dropping outdated vectors from it. Only the vectors that have been retained are evolved to the next observation time step. A final 4D-Var sweep is sufficient to smooth the analysis, and also to improve its accuracy in the middle of the assimilation window.
4.1. The Kalman filter algorithm
Let x_est(t-1) be an estimate of the state x(t-1) and S_est(t-1) be the corresponding error covariance matrix of the estimate. At time t the evolution operator is used to produce an a priori estimate x_a(t) and its covariance S_a(t):

x_a(t) = M_t x_est(t-1),    (3)
S_a(t) = M_t S_est(t-1) M_t^T + S_{E_t},    (4)

where S_{E_t} is the covariance of the prediction error E_t. The next step is to combine x_a(t) with the observations y(t) made at time t to construct an updated estimate of the state and its covariance:

G_t = S_a(t) K_t^T (K_t S_a(t) K_t^T + S_{e_t})^{-1},    (5)
x_est(t) = x_a(t) + G_t (y(t) - K_t x_a(t)),    (6)
S_est(t) = S_a(t) - G_t K_t S_a(t),    (7)
where G_t is the Kalman gain matrix, which is functionally identical to the maximum a posteriori estimator. In a more general case, when the evolution model and/or the observation model is non-linear, the Extended Kalman Filter (EKF) is required.
4.2. Extended Kalman filter algorithm
The filter uses the full non-linear evolution model equation (1) to produce an a priori estimate: x_a(t) = M_t(x_est(t-1)). In order to obtain the corresponding covariance S_a(t) of the a priori information, the prediction model is linearized about x_est(t-1):

M_t = dM_t(x_est(t-1))/dx,    (8)
S_a(t) = M_t S_est(t-1) M_t^T + S_{E_t}.    (9)

The linearization M_t of the model M_t in equation (8) is computed as the tangent linear model, from which the adjoint model M_t^T is obtained as its transpose. The observation operator is linearized at the time of the observation about the a priori estimate x_a(t) in order to obtain K_t, which is then used for calculating the gain matrix:

G_t = S_a(t) K_t^T (K_t S_a(t) K_t^T + S_{e_t})^{-1}.    (11)

The evolution of the full covariance matrix, expressed by the term M_t S_est(t-1) M_t^T in equation (9), is a computationally very expensive operation for large models. After this, the full non-linear observation operator is used to update x_a(t), and this is then used to produce a current state estimate and the corresponding error estimate:

x_est(t) = x_a(t) + G_t (y(t) - K_t(x_a(t))),    (12)
S_est(t) = S_a(t) - G_t K_t S_a(t).    (13)

If the linearization of the observation operator at x_a(t) is not good enough to construct x_est(t), it will be necessary to carry out some iterations of the last four equations.
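For reference, equations (5)-(7) and their EKF counterparts (11)-(13) with linearized operators amount to a few lines of dense linear algebra. The NumPy sketch below is illustrative only, since for weather-scale models these matrices are never formed explicitly; the function and argument names are not part of the VKF code.

    import numpy as np

    def kalman_analysis_step(x_a, S_a, y, K, S_e):
        """One analysis step: gain (eq. 5/11), state update (eq. 6/12) and
        covariance update (eq. 7/13). K is the (linearized) observation
        operator and S_e the observation error covariance."""
        G = S_a @ K.T @ np.linalg.inv(K @ S_a @ K.T + S_e)
        x_est = x_a + G @ (y - K @ x_a)
        S_est = S_a - G @ K @ S_a
        return x_est, S_est, G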
4.3. The Variational Kalman Filter Algorithm
The VKF method uses the full non-linear prediction equation (1) to construct an a priori estimate from the previous state estimate:

x_a(t) = M_t(x_est(t-1)).    (14)

The corresponding approximated covariance S_a(t) of the a priori information is available from the previous time step of the VKF method. In order to avoid the computation of the Kalman gain we perform a 3D-Var optimization with a Kalman-equivalent cost function. As the result of the optimization, we get the state estimate x_est(t) for the present time t.
The error estimate S_est(t) of the state estimate is obtained from the following formula, which states the relationship between the Hessian of the minimization and the prediction error covariance matrix:

S_est(t) = 2 (Hess(t))^{-1},    (15)

where the matrix Hess(t) can be approximated by using the search directions of the optimization process. There is always also a static prior to which the dynamic low-rank approximation is added. In practice, we keep the Hessian rather than its inverse, and in vector form to boot. The required multiplications with the Hessian are carried out with the Sherman-Morrison-Woodbury formula. During the optimization another task is done: the tangent linear evolution model equation is used to transfer search directions and their corresponding gradients to the next time step t+1. These evolved search directions and their gradients are then used to build an approximation to S_a(t+1).
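The Sherman-Morrison-Woodbury step mentioned above can be sketched for a Hessian kept as a static (here diagonal) prior plus a low-rank term U U^T built from stored vectors. The exact form of the low-rank update used in VKF follows the BFGS formula, so this fragment is only a generic illustration, under those stated assumptions, of how the inverse is applied without ever forming the full matrix.

    import numpy as np

    def apply_inverse_prior_plus_lowrank(b_diag, U, v):
        """Apply (B + U @ U.T)^{-1} to v via the Sherman-Morrison-Woodbury
        formula, where B = diag(b_diag) is the static prior term and the
        columns of U are the stored low-rank vectors."""
        Binv_v = v / b_diag
        Binv_U = U / b_diag[:, None]
        k = U.shape[1]
        small = np.eye(k) + U.T @ Binv_U          # small k-by-k system
        correction = Binv_U @ np.linalg.solve(small, U.T @ Binv_v)
        return Binv_v - correction

The dominant cost is a few matrix-vector products with U and a k-by-k solve, so the work scales linearly with the model dimension and with the (small) rank k.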
5. The VKF Algorithm
Taken step by step, the VKF algorithm looks as follows:
(1) Start from a previous analysis x_a(t).
(2) Solve the corresponding 3D-Var problem at time t, using the current estimate of the inverse of the error covariance matrix S_a(t) as the metric.
(3) Update (S_a(t))^{-1} with the search directions of the minimization, using the BFGS Hessian update formula, so as to come up with the estimate (S_est(t))^{-1}. The Hessian is kept in vector form and the BFGS formula is applied at every matrix-vector multiply.
(4) Evolve the state estimate x_est(t) to the next observation time t+1, storing the tangent linear and adjoint models M_{t+1} and M_{t+1}^T, respectively, on the way.
(5) Transform the vectors that form the approximate Hessian with the linear transformation M_{t+1} S_est(t) M_{t+1}^T to produce an estimate of the inverse of the error covariance matrix (S_a(t+1))^{-1}. Drop vectors that do not shrink significantly in size in this reverse-time transformation (i.e. do not grow in forward time) and add the prior and the noise term S_{E_{t+1}}.
(6) If the vectors spanning the approximate Hessian above have lost too much of their orthogonality in the course of their evolution, re-orthogonalize the basis using the Gram-Schmidt algorithm.
(7) Loop from step 2 until the end of the assimilation window.
(8) Optionally, as an outer loop, perform a 4D-Var assimilation over the entire assimilation window, using the sequence of analyzes x_est(t) as the observations.
(9) Iterate the whole sequence if needed.
The eighth step results in a continuous analysis trajectory that has been processed by the VKF analogue of the Fixed-Lag Kalman Smoother (FLKS). We have called this method the Variational Kalman Smoother (VKS). It does not change the analysis at the final time, but it does improve the accuracy of the analyzes during all intermediate steps, as has been verified in experiments. VKS is particularly attractive for reanalysis.
6. The Serial Complexity of VKF Computations
Let us look at the serial computational complexity of the VKF method, step by computational step:
(1) No computations yet.
(2) Standard 3D-Var at every observation time.
(3) Matrix update with a fixed number of vectors of the size of the model resolution. Complexity is linear in spatial resolution and independent of the time step.
(4) A single model run to the next time step, storing the coefficients of the tangent linear and adjoint models on the way.
(5) A number of adjoint and subsequent tangent linear model runs, once back and forth for every vector, with a fixed (a few, possibly a few dozen) number of independent initial states, plus a sparse matrix-vector product by the Hessian for each.
(6) Standard linear algebra with a fixed number of vectors of the size of the model resolution. Complexity is linear in spatial resolution and independent of the time step.
(7) The steps above are taken over the entire assimilation window, instead of just between observation times.
(8) Standard 4D-Var over the assimilation window.
(9) No iterations needed in the trials so far.
Looking at the complexities and summing over all the steps, we see that the dominant serial cost in VKF consists of the advection of the covariance vectors back and forth, of the 3D-Var steps, and of the optional 4D-Var minimization at the end. Altogether, every stored vector of the inverse of the error covariance matrix requires two model or adjoint model integrations (in fact, one with the nonlinear model for the state, and two for every retained 3D-Var search direction, old or current: one with the tangent linear model and one with the adjoint). If we were to keep the rank of the Hessian at just the number of most recent search directions, the overall complexity of the whole VKF algorithm would amount to at most two subsequent 4D-Var minimizations over the entire assimilation window. This is so because the 3D-Var minimizations can be seen to be part of a single model integration with the 4D-Var algorithm in any case, and because 4D-Var requires as many model and adjoint model integrations as it takes steps to converge. If it proves desirable to maintain a higher-rank approximation to the Hessian, the total serial complexity of VKF is multiplied by this rank, divided by the typical number of steps needed by 4D-Var to converge. An educated guess would put the total serial complexity of VKF at two to five times the complexity of 4D-Var, growing linearly with the resolution, just like the complexity of 4D-Var does.
7. The Parallel Complexity of VKF Computations
Let us now turn to the parallel computational complexity of VKF, step by step:
(1) No computations yet.
(2) Standard 3D-Var at every observation time. No gain from parallelism.
(3) Matrix update with a fixed number of vectors of the size of the model resolution. Complexity is linear in spatial resolution and independent of the time step. No easy gains from parallelism.
(4) A single model run to the next time step, with the standard tangent linear and adjoint coefficient stores on the way. No parallelism.
(5) A number of adjoint and subsequent tangent linear model runs, once back and forth for every vector, with a fixed number (a few, possibly a few dozen) of independent initial states, plus a sparse matrix-vector product for each. These can be fully parallelized, since we keep the approximate Hessian in vector form. All these runs are independent and can be carried out in parallel, should we have free processing nodes, or rather node clusters (see the sketch after this list). Operational parallel weather models are often run on only a subset of the clusters of processing nodes of a parallel supercomputer. If a sufficient fixed number of free clusters is available, the parallel complexity of VKF is just that of a single forward run of the weather model and a single backward run with the adjoint. The speedup is linear.
(6) Standard linear algebra with a fixed number of vectors of the size of the model resolution. Complexity is linear in spatial resolution and independent of the time step. No parallelism.
(7) The steps above are taken over the entire assimilation window, instead of just between subsequent observation times.
(8) Standard 4D-Var over the assimilation window.
(9) No iterations needed in the trials so far.
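To make the parallel structure of step (5) concrete, the sketch below distributes the independent covariance-vector propagations over groups of MPI tasks. It is a schematic illustration under invented names (load_vector, model_adjoint, model_tl and store_vector are placeholders, not routines of the VKF code or of any operational model):

! Schematic only: distribute nvec independent covariance vectors over
! ngroups groups of MPI tasks; each group propagates its own subset of
! vectors with the adjoint and tangent linear models, so all groups work
! concurrently.
program propagate_vectors
  use mpi
  implicit none
  integer :: ierr, world_rank, ngroups, color, group_comm
  integer :: iv, nvec, n
  real(8), allocatable :: v(:)

  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, world_rank, ierr)

  nvec    = 32      ! number of stored covariance vectors (illustrative)
  ngroups = 8       ! number of free node clusters (illustrative)
  n       = 1000    ! state dimension (illustrative)
  allocate(v(n))

  ! split the world communicator into ngroups independent sub-communicators
  color = mod(world_rank, ngroups)
  call MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, group_comm, ierr)

  ! round-robin assignment of vectors to groups
  do iv = color + 1, nvec, ngroups
    call load_vector(iv, v)            ! placeholder: fetch vector iv
    call model_adjoint(group_comm, v)  ! placeholder: backward (adjoint) run
    call model_tl(group_comm, v)       ! placeholder: forward (tangent linear) run
    call store_vector(iv, v)           ! placeholder: store propagated vector
  end do

  call MPI_Comm_free(group_comm, ierr)
  call MPI_Finalize(ierr)

contains
  subroutine load_vector(i, x)
    integer, intent(in)    :: i
    real(8), intent(inout) :: x(:)
    x = real(i, 8)                     ! dummy content
  end subroutine load_vector
  subroutine model_adjoint(comm, x)
    integer, intent(in)    :: comm
    real(8), intent(inout) :: x(:)
    x = 0.5d0 * x                      ! dummy stand-in for M^T
  end subroutine model_adjoint
  subroutine model_tl(comm, x)
    integer, intent(in)    :: comm
    real(8), intent(inout) :: x(:)
    x = 2.0d0 * x                      ! dummy stand-in for M
  end subroutine model_tl
  subroutine store_vector(i, x)
    integer, intent(in) :: i
    real(8), intent(in) :: x(:)
    ! nothing to do in this sketch
  end subroutine store_vector
end program propagate_vectors

In a real setting each group would itself run a distributed tangent linear and adjoint model over its sub-communicator; the essential point is that the groups work concurrently, which is the source of the linear speedup claimed in step (5).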
Many supercomputers in current operational weather forecasting centres, such as the IBM cluster at ECMWF, the 70 Teraflops Cray XT4 that is being installed at CSC in Finland in 2007, or the 100 Teraflops Cray XT4 ordered by NERSC, have a clustered structure. These parallel computers have dozens of Teraflops of computing power, and the operational weather models have been parallelized in a scalable fashion. However, parallel supercomputers based on commodity processors have a significant bottleneck in their inter-cluster communications. Because of Amdahl's and Hockney's laws 11, this limits the bisection bandwidth of the machines so badly that operational models are often run within a single cluster of processors only. (The Earth Simulator is a notable exception!)
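For reference, Hockney's characterisation of a communication link (reference 11) in its usual textbook form is the following; the symbols are generic and no figures for a specific machine are implied:

\[
  t(n) \;=\; t_0 + \frac{n}{r_\infty} \;=\; \frac{n_{1/2} + n}{r_\infty},
  \qquad
  r(n) \;=\; \frac{n}{t(n)} \;=\; \frac{r_\infty}{1 + n_{1/2}/n},
  \qquad
  n_{1/2} \;=\; r_\infty\, t_0 ,
\]

where $r_\infty$ is the asymptotic bandwidth and $n_{1/2}$ the message length at which half of it is reached. Short or contended messages therefore achieve only a fraction of $r_\infty$, which is why a limited bisection bandwidth hurts the tightly coupled communication phases of an operational model.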
VKF has an ideal structure to fit itself neatly onto such a parallel architecture, with potentially close to linear speedup. Looking at the parallel complexities of the individual steps, and summing over all the steps, we get a surprising result: the parallel complexity of VKF is equivalent to just three model runs, apart from the final 4D-Var smoothing step. All serial iterations are just 3D-Var ones and do not involve model integrations. Moreover, the last 4D-Var iteration does not change the analysis at the final time step. This means that if we are to launch the next forecast from it - and the experiments so far indicate this to be desirable - we can postpone the 4D-Var to be carried out afterwards, outside the operational cycle, for archival and reanalysis purposes. We arrive therefore at a rather striking conclusion: the parallel complexity of VKF in the operational cycle is just three model runs, one with each model: the nonlinear one, the tangent linear one and the adjoint one. It is also independent of the rank of the Hessian approximation, as long as this remains modest. If the Hessian is kept in vector form, all matrix-vector products with it are fully parallelizable. VKF is therefore faster than even the standard 4D-Var on a sufficiently powerful - yet realistic - parallel computer.
8. Simulated Assimilation Results in the Lorenz'95 Case
We are very grateful to Mike Fisher and Martin Leutbecher of ECMWF for providing us with their codes for the Lorenz'95 model and the weak formulation 4D-Var and EKF data assimilation algorithms for it 14. The assimilation results presented below are generated using the simple non-linear model introduced by Lorenz 15 in 1995. As the model is computationally light and it represents simplified mid-latitude atmospheric dynamics, it is commonly used for testing different assimilation schemes. The variables of the model can be taken to represent some quantity of the atmosphere on a latitude circle.
8.1. Lorenz'95 model and parameters
The model consists of a set of coupled ordinary differential equations
\[
  \frac{dc_i}{dt} \;=\; (c_{i+1} - c_{i-2})\,c_{i-1} - c_i + F,
\]
where $i = 1, 2, \ldots, n$ and $F$ is a constant. The number of grid points is controlled by the number $n$. The domain is set to be cyclic, so that $c_{-1} = c_{n-1}$, $c_0 = c_n$ and $c_{n+1} = c_1$.
The simulations presented in this section follow Lorenz and Emanuel 16, so we select F = 8 and take a unit time interval to represent 5 days. The number of grid points was set to n = 40. The time integration of the model was performed using a fourth order Runge-Kutta method with time-step $\Delta t = 0.025$. The ground truth was generated by taking 42920 time steps of the RK4 method, which corresponds to 5365 days. The initial state at the beginning of the data generation was $c_{20} = 8 + 0.008$ and $c_i = 8$ for all $i \neq 20$. The observed data is then computed using this true data. In particular, after a 365 day long spin-up period, the true data is observed at every other time step at the last 3 grid points in each set of 5; that is, the observation matrix is $m \times n$ with nonzero entries
\[
  [K]_{r,s} =
  \begin{cases}
    1 & \text{if } (r,s) \in \{(3j+i,\;5j+i+2) \mid i = 1,2,3,\ j = 0,1,\ldots,7\},\\
    0 & \text{otherwise.}
  \end{cases}
\]
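A compact, self-contained implementation of the model and time stepping just described (F = 8, n = 40, classical RK4 with $\Delta t = 0.025$) is sketched below for illustration; it is not the code provided by ECMWF, and the run length and output are only examples:

! Lorenz'95 model, dc_i/dt = (c_{i+1} - c_{i-2}) c_{i-1} - c_i + F, on a
! cyclic domain of n = 40 points, integrated with classical 4th-order
! Runge-Kutta and dt = 0.025 (0.125 days with the scaling used in the text).
program lorenz95
  implicit none
  integer, parameter :: n = 40
  real(8), parameter :: f = 8.0d0, dt = 0.025d0
  real(8) :: c(n), k1(n), k2(n), k3(n), k4(n)
  integer :: step

  c = f                      ! rest state ...
  c(20) = f + 0.008d0        ! ... with a small perturbation at grid point 20

  do step = 1, 42920         ! 42920 steps = 1073 time units = 5365 days
    k1 = rhs(c)
    k2 = rhs(c + 0.5d0*dt*k1)
    k3 = rhs(c + 0.5d0*dt*k2)
    k4 = rhs(c + dt*k3)
    c  = c + dt*(k1 + 2.0d0*k2 + 2.0d0*k3 + k4)/6.0d0
  end do
  print *, 'final state (first 5 points): ', c(1:5)

contains
  function rhs(x) result(dx)
    real(8), intent(in) :: x(n)
    real(8) :: dx(n)
    integer :: i, im1, im2, ip1
    do i = 1, n
      im1 = modulo(i-2, n) + 1     ! index i-1, cyclic
      im2 = modulo(i-3, n) + 1     ! index i-2, cyclic
      ip1 = modulo(i,   n) + 1     ! index i+1, cyclic
      dx(i) = (x(ip1) - x(im2)) * x(im1) - x(i) + f
    end do
  end function rhs
end program lorenz95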
8.2. Simulated assimilation results
In order to compare the quality of forecasts started from analyses produced with EKF to that of forecasts started from analyses produced with VKF and VKS, respectively, we compute the following forecast statistics at every 8th observation after a 240 time step long burn-in period. Take $j \in I := \{8i \mid i = 1, 2, \ldots, 100\}$ and define
\[
  [\text{forecast-error}_j]_i \;=\; \frac{1}{40}\,\bigl\| M_{4i}(x^{est}_j) - x^{true}_{j+4i} \bigr\|, \qquad i = 1, \ldots, 20, \qquad (17)
\]
where $M_\ell$ denotes a forward integration of the model by $\ell$ time steps with the RK4 method. This vector gives a measure of forecast accuracy for an analysis produced with EKF and VKF, respectively, up to 80 time steps ahead, or 10 days out. This allows us to define the forecast skill vector
\[
  [\text{forecast-skill}]_i \;=\; \frac{1}{100} \sum_{j \in I} [\text{forecast-error}_j]_i . \qquad (18)
\]
The lower the value, the better the skill (somewhat illogically). In Figure 1, we compare forecast skills obtained with EKF and VKF using a perfect evolution model. The skill obtained with either method is quite similar. A similar comparison between EKF and VKS with different lags is illustrated in Figure 2. The results show that VKS improves the quality of retrospective analyses quite significantly over that of EKF.
Figure 1. Forecast skill comparison of the EKF method (red line) and the VKF method (blue line) in the Lorenz'95 system; horizontal axis: forecast range (days).
Next, we introduce a bias into the evolution model by using the incorrect parameter value F = 9 in the Lorenz'95 equations during the assimilation. In Figure 3, we compare the forecast skill obtained with EKF and VKF, respectively, when using a biased model. A similar comparison between EKF and VKS with two different lags is illustrated in Figure 4. Conclusions in both cases are similar to the ones obtained with an unbiased model. The skill difference between EKF and retrospective VKS is smaller than in the case with an unbiased model, though.
9. Conclusions
We summarize the conclusions of the analysis and the benchmark results as follows:
(1) The Variational Kalman Filter (VKF) method provides analyses as good as those of EKF in the Lorenz'95 benchmark, but it has a much lower computational complexity.
Figure 2. Forecast skill comparison of the EKF method (red line) and the VKS method (blue lines; lag = 1, 2, 3) in the Lorenz'95 system; horizontal axis: forecast range (days).
(2) VKF is robust against model bias and model noise alike.
(3) The Variational Kalman Smoother (VKS) outperforms EKF, both in accuracy and in computational complexity, in retrospective analysis.
(4) VKS has a serial complexity comparable to that of 4D-Var.
(5) VKF has a parallel complexity comparable to that of three subsequent model runs with a parallelized model, and has the potential to outperform even the standard 4D-Var in wall-clock time on a large parallel supercomputer.
(6) In principle, linear speedup with such a parallel complexity looks attainable on current parallel supercomputers.
The Variational Kalman Filter will be presented and analyzed in more detail in a forthcoming article by Auvinen et al.3
Acknowledgements. Harri Auvinen gratefully acknowledges the support from the Vilho, Yrjö and Kalle Väisälä Foundation.
Figure 3. Variational Kalman Filter vs. Extended Kalman Filter: forecast skill comparison of the EKF method (red line) and the VKF method (blue line) in the biased model case; horizontal axis: forecast range (days).
Figure 4. Variational Kalman Smoother vs. Extended Kalman Filter: forecast skill comparison of the EKF method (red line) and the VKS method (blue lines) in the biased model case; horizontal axis: forecast range (days).
References
1. M. Fisher and E. Andersson: Developments in 4D-Var and Kalman filtering. ECMWF Technical Memorandum 347 (2001).
2. H. Auvinen, H. Haario and T. Kauranne: Optimal approximation of Kalman filtering with a temporally local 4D-Var in operational weather forecasting. Proceedings of the 11th ECMWF Workshop on Use of High-Performance Computing in Meteorology (W. Zwieflhofer, G. Mozdzynski (eds.)), World Scientific (2005).
3. H. Auvinen, J. Bardsley, H. Haario and T. Kauranne: Stable data assimilation with a biased model using a Variational Kalman Filter. To be submitted.
4. D. P. Dee: Simplification of the Kalman filter for meteorological data assimilation. Q. J. R. Meteorol. Soc. 117, 365-384 (1991).
5. J. Derber: A variational continuous assimilation technique. Mon. Weather Rev. 117, 2437-2446 (1989).
6. M. Ehrendorfer, A. Beck: Singular vector based multivariate normal sampling in ensemble prediction. ECMWF Technical Memorandum 416 (2003).
7. G. Evensen: Sequential data assimilation with a non-linear quasi-geostrophic model using Monte Carlo methods to forecast error statistics. J. Geophys. Res. 97, 17905-17924 (1994).
8. M. Fisher, M. Leutbecher, G. A. Kelly: On the equivalence between Kalman smoothing and weak constraint four-dimensional variational data assimilation. Q. J. R. Meteorol. Soc. 131, 3235-3246 (2005).
9. M. Fisher: Development of a simplified Kalman filter. ECMWF Technical Memorandum 260 (1998).
10. M. Fisher and P. Courtier: Estimating the covariance matrices of analysis and forecast error in variational data assimilation. ECMWF Technical Memorandum 220 (1995).
11. R. Hockney and I. Curington: f1/2: a parameter to characterize memory and communication bottlenecks. Parallel Computing 10, 277-286 (1989).
12. L. Isaksen, M. Fisher, E. Andersson: The structure and realism of sensitivity perturbations and their interpretation as 'Key Analysis Errors'. ECMWF Technical Memorandum 445 (2004).
13. F.-X. Le Dimet and O. Talagrand: Variational algorithms for analysis and assimilation of meteorological observations: theoretical aspects. Tellus 38A, 97-110 (1986).
14. M. Leutbecher: A data assimilation tutorial based on the Lorenz-95 system. European Centre for Medium-Range Weather Forecasts Web Tutorial, www.ecmwf.int/newsevents/training/lecture_notes/pdf-files/ASSIM/Tutorial.pdf
15. E. N. Lorenz: Predictability: A problem partly solved. Proc. Seminar on Predictability, Vol. 1, ECMWF, Reading, Berkshire, UK, 1-18 (1995).
16. E. N. Lorenz and K. A. Emanuel: Optimal sites for supplementary weather observations: Simulations with a small model. J. Atmos. Sci. 55, 399-414 (1998).
17. L. Nazareth: A relationship between the BFGS and conjugate gradient algorithms and its implications for new algorithms. SIAM J. Numer. Anal. 16 (1979) 5, 794-800.
18. F. Rabier, P. Courtier: Four-dimensional assimilation in presence of baroclinic instability. Q. J. R. Meteorol. Soc. 118, 649-672 (1992).
19. F. Rabier, H. Järvinen, E. Klinker, J.-F. Mahfouf and A. Simmons: The ECMWF operational implementation of four-dimensional variational assimilation. Part I: Experimental results with simplified physics. Q. J. R. Meteorol. Soc. 126, 1143-1170 (2000).
PREPARING THE COSMO-MODEL FOR NEXT GENERATION REGIONAL WEATHER FORECASTING AND COMPUTING
U. SCHÄTTLER
Deutscher Wetterdienst, Geschäftsbereich für Forschung und Entwicklung, Postfach 100465, 63004 Offenbach, Germany
E. KRENZIEN and H. WEBER
Deutscher Wetterdienst, Geschäftsbereich für Technische Infrastruktur, Postfach 100465, 63004 Offenbach, Germany
In the last decade, parallel computers have enabled meteorologists to build nonhydrostatic regional weather forecasting models that can run with a resolution below 10 km. The next step in this direction is the development of convection resolving models with a resolution of 1-3 km. Because of the uncertainty in deterministic forecasts, ensemble prediction systems based on these very high resolution models are now also being evaluated. This paper reports on the latest work done for the COSMO-Model to prepare it for future weather forecasting tasks. The computational resources are estimated and the HPC strategy for DWD until 2012 is described. Version 4.0 of the LM-RAPS benchmark, an important tool to evaluate HPC platforms, is introduced. An outlook is given on non-NWP activities with the COSMO-Model.
Keywords: Parallel computing; Numerical weather prediction; Ensemble prediction systems; Benchmarks
1. Introduction
In the late 90's, the Deutscher Wetterdienst (DWD) started the development of a new regional component of its numerical weather prediction (NWP) system. This new component was the nonhydrostatic model called Lokal Modell (LM). Soon after the operational implementation of the LM, it was also used by other European national weather services within the Consortium for Small-Scale Modelling (COSMO) and is now called the COSMO-Model. The COSMO-Model, which is described in Steppeler et al. [1], solves
the set of equations for a nonhydrostatic fully compressible atmosphere on a spherical coordinate system (λ, φ) with a rotated pole. A split-explicit method (after Klemp and Wilhelmson [2]) is used for time stepping, where the gravitational and sound waves are treated with a smaller time step than the advection processes. The model is implemented in Fortran 90 and parallelized for distributed memory parallel computers using the Message Passing Interface (MPI) and domain decomposition as the parallelization strategy. A more detailed explanation of the parallelization can be found in Schättler et al. [3]. The nonhydrostatic formulation of the equations, together with the availability of enhanced computing power from parallel computers, enabled scientists to go to resolutions finer than 10 km on a reasonably large domain. The computational requirements of the first operational implementation of the COSMO-Model at DWD are described in Schättler and Krenzien [4]. Since then, the COSMO-Model has been further developed to include more prognostic variables and more sophisticated algorithms in the dynamics, the physical parameterizations and the data assimilation. With this, the computational requirements have also increased. Section 2 gives some highlights of the work on the COSMO-Model and how it is related to the increase of computing power. In Section 3 we give an outlook on future developments and their computational requirements. An important role in evaluating new and existing High Performance Computing (HPC) architectures is played by the benchmark version of the COSMO-Model, called LM-RAPS, where RAPS stands for Real Applications on Parallel Systems. In Section 4 LM-RAPS-4.0 is described, together with the HPC strategy of DWD for the next years. Section 5 finally gives an outlook on other activities using the COSMO-Model.
2. Developments for the COSMO-Model
The first operational implementation of the COSMO-Model at DWD was on a CRAY T3E1200 with 456 processors, which was procured in 1997. To meet the computational demands of new model developments, several upgrades of this machine had already been planned at the time of purchase. First, more processors had been added, and in 2002 it was replaced by an IBM SP system with 80 Nighthawk II nodes, based on the Power3 processor. The IBM system had been enlarged to 120 nodes in 2003, and was replaced by a system based on Power5 processors in 2005. In the beginning of 2006, this system was doubled, to split the operational workload from research and development. In this section we want to highlight some developments for
the COSMO-Model that made the described hardware upgrades necessary.
2.1. Prognostic Precipitation
The first new developments after the operational introduction of the model have been in the microphysics scheme. The treatment of cloud ice has been added to the parameterization for grid scale precipitation in addition to the existing water categories, namely water vapour, cloud water, rain and snow. Like water vapour and cloud water, cloud ice has been formulated as a prognostic field. In a second step, prognostic fields have also been introduced for rain and snow, which were handled diagnostically before. Until then, the conservation equations for rain and snow were approximated as stationary and without advection. This column equilibrium approach meant that precipitation particles, arising from cloud microphysical processes, immediately fell down to the bottom of the column in the same time step. In reality, rain drops with a mean fall velocity of about 5 m/s, which develop for example at a height of 3 km, need a falling time of 10 minutes. If a horizontal wind speed of 10 m/s is assumed, the rain drops are drifted by 6 km. For snow, with a mean fall velocity of about 1 m/s (and usually generated higher up), the horizontal drifting is even more pronounced. For a grid length of 7 km and a time step of 40 s, which are the usual values for today's operational applications of the COSMO-Model, the column equilibrium approach is no longer valid. And it is even more strongly violated at higher resolutions. This was tested by case studies especially dedicated to the formation of precipitation in mountains; in many cases there is too much precipitation on the upwind side, and too little in the lee. The solution of this problem was particularly important for hydrologists, because precipitation could fall into the wrong catchment and was therefore not added to the correct river. With the treatment of rain and snow as prognostic fields, which we call prognostic precipitation, an advection scheme was introduced and the above shortcomings could be cured to a large extent, as can be seen in Figure 1, which shows the accumulated precipitation over the Black Forest in Southwest Germany. In the run without prognostic precipitation (left), the precipitation on the upwind side of the Black Forest is by far overestimated, while the run with prognostic precipitation (right) gives a much better representation of the precipitation field.
Fig. 1. Precipitation forecast for 20 February 2002, 00 UTC, +6-30 h, over Southwest Germany without (left) and with (right) prognostic precipitation. Observations are shown in the middle. The isolines indicate the model orography.
For the prognostic treatment of rain and snow it is necessary to use an advection scheme which remains stable up to vertical Courant numbers of about 3, because the model layers near the ground are so thin (about 60 m) that, with the currently used time step of 40 s, particles can fall through up to three layers within one time step. Therefore it was decided to use a three-dimensional Semi-Lagrangian (SL) scheme (e.g. Staniforth and Côté [5]) whose stability does not depend on the Courant number. But Semi-Lagrangian advection is a rather time-consuming process. Together with some other improvements of the model, this led to an increase of computation time by more than 20 percent.
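A back-of-the-envelope check of the numbers quoted above (rain fall speed of about 5 m/s, time step Δt = 40 s, near-surface layer thickness of about 60 m) confirms the vertical Courant number of about 3:

\[
  C_z \;=\; \frac{w_{\mathrm{fall}}\,\Delta t}{\Delta z}
      \;\approx\; \frac{5\ \mathrm{m\,s^{-1}} \times 40\ \mathrm{s}}{60\ \mathrm{m}}
      \;\approx\; 3.3 ,
\]

so a rain drop can indeed cross about three of the thin near-surface layers in a single time step, which is why an advection scheme that remains stable at such Courant numbers is needed.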
2.2. Development of a Kilometer-Scale NWP-System and Runge-Kutta Methods
For very detailed short range forecasts, a meso-γ version of the COSMO-Model has been developed in the last years. At DWD this system runs under the acronym LMK, where the "K" stands for the German word Kürzestfrist, which means nowcasting. It will utilize grid spacings of 2-3 km and therefore reach the convection resolving scale. With this new system it is intended to fill the gap between traditional nowcasting methods for severe weather events (up to 3-6 hours) and current short-range NWP with grid spacings of about 10 km and forecast ranges of up to 2-3 days. For the numerics part, several variants of the 3rd-order in time Runge-Kutta integration have been implemented. The first one is the normal 3rd-order Runge-Kutta scheme used by Wicker and Skamarock [6], whereas the second one is a total variation diminishing (TVD) variant of 3rd order after Liu, Osher and Chan [7]. This scheme can easily be combined with
the standard time-split forward-backward methods to integrate the fast compression waves, and furthermore allows for flexible use of high-order spatial advection operators. From the latter, we expect noticeable benefits for simulating processes such as deep convective cloud evolution, which is at or close to the grid scale. Using a 5th-order advection scheme, the new dynamical core allows for a time step almost twice as large as with the standard Leapfrog/2nd-order centered differencing scheme of the COSMO-Model. The main reason for applying the new time scheme, however, was not to save CPU time, but to achieve a more accurate and thus much better converged numerical solution at neutral computational cost. Nevertheless, the introduction of the new system again put additional workload on the computer. This is because of developments in the physical parameterizations. Turbulent transport becomes essentially 3-d at very high resolution; e.g. lateral exchange across cloud boundaries will be important for the evolution and organization of deep convection. A new 3-d turbulence scheme based on turbulent kinetic energy, using a non-isotropic closure for the fluxes, has been implemented (Herzog et al. [8]). A more comprehensive treatment of the ice phase is also important when simulating deep clouds directly. Therefore, the microphysics scheme has been enhanced to include graupel (and later on hail) as an additional precipitation category. The computational cost of the 3-d turbulence scheme raises the overall expense of the model by about 30 percent.
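For orientation, the third-order TVD Runge-Kutta scheme referred to above is commonly written in the following three-stage (Shu-Osher) form. The sketch applies it to a generic tendency operator and is not an excerpt from the COSMO-Model; the tendency routine is a placeholder:

! Sketch of a TVD third-order Runge-Kutta step (Shu-Osher form) for a
! generic prognostic array q with placeholder tendency operator l_op.
subroutine rk3_tvd_step(n, q, dt)
  implicit none
  integer, intent(in)    :: n
  real(8), intent(inout) :: q(n)
  real(8), intent(in)    :: dt
  real(8) :: q1(n), q2(n)

  ! stage 1: full forward Euler step
  q1 = q + dt * l_op(q)
  ! stage 2: blend back towards q^n
  q2 = 0.75d0 * q + 0.25d0 * (q1 + dt * l_op(q1))
  ! stage 3: final combination, 3rd-order accurate and TVD
  q  = (1.0d0/3.0d0) * q + (2.0d0/3.0d0) * (q2 + dt * l_op(q2))

contains
  function l_op(x) result(dx)
    real(8), intent(in) :: x(:)
    real(8) :: dx(size(x))
    dx = -x     ! placeholder tendency; a real model evaluates advection,
                ! fast-wave and physics tendencies here
  end function l_op
end subroutine rk3_tvd_step

In the time-split context described above, the fast-wave terms would additionally be integrated with smaller forward-backward steps inside each of the three stages.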
2.3. Assimilation of Radar Data
The new kilometer-scale system will be run every 3 hours from a continuous data assimilation stream based on the observational nudging technique. Such a rapid update cycle will require a short data cut-off (less than 30 minutes) and the successful use of available non-synoptic remote sensing data. Therefore, the assimilation of radar reflectivities has been introduced into the COSMO-Model. Because reflectivity is not a prognostic variable in the model, a direct assimilation is not possible, but it is essential for the development of precipitation. Attempts therefore have to be made to assimilate the radar information into the model through other variables (e.g. temperature, specific humidity or components of the wind vector). Thus a relation between precipitation rate and prognostic model variables is needed. Concepts based on processes normally present in the context of precipitation are desired. One special process connected with the formation of precipitation is the condensation of water vapour. It is directly linked to the release of latent heat. Originally,
most condensation processes must be considered as the formation of cloud droplets, but this is only a preliminary stage of precipitation formation. Nevertheless it is possible to influence the model dynamics, and consequently the formation of precipitation, by adjusting the model-generated latent heat release. The diabatic heating rates, which are related to phase changes of water, are tuned in such a way that the model simulates the observed precipitation rates. This is realized by adding temperature increments to the 3D temperature field. This method is called Latent Heat Nudging (e.g. Wang and Warner [9]). An introduction to LHN, further references to literature about this method and some aspects of the special implementation of the LHN algorithm in the COSMO-Model can be found in Leuenberger and Rossa [10]. For a 3 hour assimilation run, the LHN takes about 15 percent of the computation time. The effect on the precipitation field can be seen in Figure 2. On the left side the model forecast without the LHN in the assimilation suite is shown. It is evident that the forecast based on the assimilation suite including the LHN (right) can represent the local structure of the precipitation much better.
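A highly simplified sketch of the latent heat nudging idea for a single model column is given below. The scaling, the limiter value and all variable names are invented for the illustration and do not reproduce the actual LHN implementation in the COSMO-Model:

! Toy latent heat nudging increment for one model column (illustrative only).
! lh_profile(k): model latent-heating rate in layer k (K/s)
! rr_mod, rr_obs: model and radar-derived surface precipitation rates
! dt: model time step; t(k) is incremented in place.
subroutine lhn_column(nk, t, lh_profile, rr_mod, rr_obs, dt)
  implicit none
  integer, intent(in)    :: nk
  real(8), intent(inout) :: t(nk)
  real(8), intent(in)    :: lh_profile(nk), rr_mod, rr_obs, dt
  real(8) :: scale
  real(8), parameter :: eps = 1.0d-12      ! avoid division by zero
  real(8), parameter :: scale_max = 2.0d0  ! crude limiter (invented value)

  ! scale factor: positive heats (model rains too little), negative cools
  scale = rr_obs / max(rr_mod, eps) - 1.0d0
  scale = max(-scale_max, min(scale_max, scale))

  ! add the scaled latent heating as a temperature increment
  t = t + scale * lh_profile * dt
end subroutine lhn_column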
Fig. 2. Precipitation forecast for 26 August 2004, 00 UTC, +6-30 h, over Germany without (left) and with (right) latent heat nudging in the assimilation. Observations from the 16 German radars are shown in the middle.
2.4. Operational Production
The operational application of the LMK-system puts new demands on the computer system. Every 3 hours a 21 hour forecast is run for a domain with 421 x 461 grid points covering the whole of Germany with a grid
spacing of 2.8 km. In the vertical, 50 levels are used. One forecast takes about 25 minutes using 512 MPI tasks on 256 physical Power5 processors with symmetric multi threading (SMT). This is roughly 2/3 of the full machine. Together with the assimilation cycle and all other operational models, there is nearly no free computer time available any more. Therefore it was decided to double the computer and split the operational workload from research and development.
3. Next Generation Regional Weather Forecasting
In the last years, ensemble prediction systems have been introduced in global modelling. Sometimes this technique is regarded as a revolution; see e.g. Palmer [11]. It was made possible by theoretical advances in the understanding of predictability, but also by the developments in supercomputer technology. There are several reasons why deterministic forecasts are uncertain. It is not possible to determine the initial state exactly, and there is also an uncertainty in the external parameters. Another important cause of errors is the model itself. Besides coding errors, which might exist in most programs, the model cannot capture all processes of the atmosphere, depending on the resolution used. Therefore, one deterministic forecast can lead to wrong results; a situation that surely has been experienced by most modellers. The idea behind ensemble prediction is to run several different deterministic forecasts and thereby determine the probability of the results. How to make the forecasts (the members of the ensemble) different is a theory of its own. An approach widely used in global modelling is the perturbation of the initial state, e.g. by the singular vector technique used at ECMWF (see Buizza et al. [12]) or by the ensemble transform Kalman filter investigated at NCEP (see Mozheng et al. [13]). For regional modelling there are even more degrees of freedom to make the members of an ensemble distinct, e.g. by using different boundary conditions. This technique is used for COSMO-LEPS, the limited-area Ensemble Prediction System developed within the COSMO consortium. Its present setup comprises 10 integrations of the COSMO-Model, nested on selected members of the ECMWF EPS global ensemble. The domain used for COSMO-LEPS includes all COSMO countries and a resolution of about 10 km is used. The main features of the COSMO-LEPS system are described in Montani et al. [14,15] and Marsigli et al. [16]. A further development within the COSMO community in this direction is COSMO-SREPS,
a short-range ensemble system which is developed to improve support to forecasters, especially in situations of high-impact weather. The strategy to generate the mesoscale ensemble members tries to take into account all the possible sources of uncertainty. The proposed system would benefit from perturbations of the boundary conditions, perturbations of the model and perturbations of the initial conditions. Another methodology is used at the Spanish Weather Service INM (Instituto Nacional de Meteorologia), where a multi-model short range ensemble prediction system (MM-SREPS) has been implemented. The boundary conditions from 4 different global models are used to run 5 different regional models, giving an ensemble of 20 members. The regional models are run with a resolution of about 28 km. At DWD work has started to develop an ensemble prediction system based on the LMK, i.e. with a very high resolution of 2.8 km. This new ensemble system, which has the working name LMK-EPS, will also take into account the experiences from COSMO-SREPS and the Multi-Model SREPS of INM. It is planned to make use of the data from these systems. The computational demands of LMK-EPS will determine the size of the next computer generation at DWD, which will be procured during 2007. This new system has to provide enough computational resources for the LMK-EPS. The determining factors hereby are the cost of a single LMK-EPS member and the number of ensemble members. A deterministic LMK forecast (i.e. a single LMK-EPS member) has been running operationally at DWD since 16 April 2007 for a computational domain of 461 x 421 x 50 grid points, a resolution of 0.025° (about 2.8 km) and with a time step of 25 seconds. A 21 hour forecast takes about 200 x 10^12 floating point operations. To finish the forecast in 20 minutes, a sustained performance of about 166.6 GFlop/s is needed. This requirement scales (embarrassingly parallel) with the number of ensemble members. To run 20 members and some additional jobs (like the global model), we aim at a peak performance of about 100 TFlop/s for the new compute servers, which shall be delivered as two identical, but independent, systems. One system will take the operational jobs, while the other one is a backup which is available for research.
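The sustained performance figure quoted above follows directly from the operation count and the time limit:

\[
  \frac{200 \times 10^{12}\ \mathrm{flop}}{20 \times 60\ \mathrm{s}}
  \;\approx\; 166.6\ \mathrm{GFlop/s}
\]

per LMK-EPS member, and this requirement multiplies with the number of members run concurrently.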
4. The LM-RAPS Benchmark
The performance of the new computer system will be checked with the LM-RAPS benchmark. The package for the LM-RAPS-4.0 benchmark consists of a subset of the COSMO-Model (the name "LM" still remains here)
and the interpolation program INT2LM, which interpolates data from a coarse grid model (GME from DWD, IFS from ECMWF or a coarse grid version of the COSMO-Model) to the (fine) grid. Both programs are parallelized for computers with distributed memory using the Message Passing Interface (MPI) as the parallel library. The main interest of the benchmark focuses on the COSMO-Model. The main features of the COSMO-Model are also implemented in the RAPS version, to reflect as much as possible of the operational model version. With the new version LM-RAPS-4.0 it is possible to run the deterministic LMK forecast. Therefore it can be used to test LMK-EPS, too. For DWD's procurement, 30 instances of the deterministic forecast described above have to be run on one compute server. 20 instances are for the 20 members of LMK-EPS and the 10 additional instances are to simulate the workload of the other jobs. This will require a sustained performance of about 5 TFlop/s. Because the LM-RAPS benchmark only achieves about 10% of the peak performance on conventional RISC processors, this results in a 50 TFlop/s peak performance for one compute server, or 100 TFlop/s for both. On the IBM system of DWD (with 8 Power5 processors per node), we did some tests for the operational domain of 421 x 461 x 50 grid points and a time step dt = 30 seconds. On a Power5 processor it is possible to run 2 MPI tasks on one physical CPU. This feature is called symmetric multi threading (SMT). For the tests we used both versions, with and without SMT. Table 1 gives the measured floating point operations (with the IBM tool hpmcount) and timings for a 6 hour forecast.
Table 1. Measurements for a 6-hour LMK forecast with LM-RAPS-4.0.
# Nodes    # MPI Tasks    Total Time (s)    Flop (10^12)    GFlop/s
8          8 x 8          1556.76           46.9            30.1
8          8 x 16         1099.94           47.7            43.3
16         8 x 16          783.24           47.8            60.9
16         16 x 16         589.04           48.7            82.5
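If the runs with the doubled task count per node are the SMT runs, the benefit of SMT can be read directly from Table 1 by comparing the two 8-node runs and the two 16-node runs:

\[
  \frac{1556.76}{1099.94} \approx 1.42 \quad (8\ \text{nodes}),
  \qquad
  \frac{783.24}{589.04} \approx 1.33 \quad (16\ \text{nodes}).
\]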
It is evident that at least 512 Power5 processors will be needed to give 166.6 GFlop/s sustained performance for one LMK-EPS member. With a more powerful processor, of course, fewer processors are necessary. The system that is purchased in 2007 will be installed at DWD in a new building during the first half of 2008. It shall take over the operational
production in summer 2008. An upgrade of that system is planned for 2010, but this depends on the available money and the offers of the vendors.
5. Outlook
In the last years, the COSMO-Model has not only been used at the national weather services of COSMO, but also at many universities and research institutes. Some of them have now gone beyond numerical weather prediction. In Germany a new community has formed, which developed the COSMO-Model further to use it also for long term climate simulations. This version is now known as Climate LM (CLM). All developments for the CLM have been implemented into the official version of the COSMO-Model, so that it now supports both NWP and climate simulations. Further developments are in the field of chemistry. Several groups already couple the COSMO-Model to their chemistry models and new applications will arise here. Both climate simulations and coupling with chemistry models will put new computational demands on the COSMO-Model in the next years.
Acknowledgments
Many colleagues at DWD and in the COSMO Consortium participate in the development of the COSMO-Model. In particular, the work described in Section 2 has been carried out by Michael Baldauf and Jan-Peter Schulz for the prognostic precipitation, by Jochen Förstner, Michael Baldauf and Axel Seifert for the Runge-Kutta schemes, and by Stefan Klink, Klaus Stephan, Christoph Schraff and Daniel Leuenberger for the Latent Heat Nudging. We would like to thank them all for providing their material. We would also like to remember our colleague Günther Doms, who implemented the first version of the model and who supervised all developments up to his much too early and sudden death in 2004.
References
1. J. Steppeler, G. Doms, U. Schättler, H. Bitzer, A. Gassmann, U. Damrath and G. Gregoric, Meso-gamma scale forecasts using the nonhydrostatic model LM, Meteorol. Atmos. Phys. 82, 75 (2003).
2. J. Klemp and R. Wilhelmson, The Simulation of Three-dimensional Convective Storm Dynamics, J. Atmos. Sci. 35, 1070 (1978).
3. U. Schättler, G. Doms and J. Steppeler, Requirements and problems in parallel model development at DWD, Scientific Programming 8, 13 (2000).
4. U. Schättler and E. Krenzien, Performance requirements for DWD's new models, in Proceedings of the Eighth ECMWF Workshop on the Use of Parallel Processors in Meteorology, (Reading, UK, 1999).
5. A. Staniforth and J. Côté, Semi-Lagrangian integration schemes for atmospheric models - a review, Mon. Wea. Rev. 119, 2206 (1991).
6. L. J. Wicker and W. C. Skamarock, Time-splitting methods for elastic models using forward time schemes, Mon. Wea. Rev. 130, 2088 (2002).
7. X.-D. Liu, S. Osher and T. Chan, Weighted essentially non-oscillatory schemes, J. Comput. Phys. 115, 200 (1994).
8. H.-J. Herzog, G. Vogel and U. Schubert, Incorporating a 3D Subgrid Scale Turbulence Scheme into the 2.8 km Version of the LM, COSMO Newsletter 3, Deutscher Wetterdienst (P.O. Box 100465, 63004 Offenbach, Germany, 2003).
9. W. Wang and T. Warner, Use of Four-Dimensional Data Assimilation by Newtonian Relaxation and Latent-Heat Forcing to Improve a Mesoscale-Model Precipitation Forecast: A Case Study, Mon. Wea. Rev. 116, 2593 (1988).
10. D. Leuenberger and A. Rossa, Assimilation of Radar Information in aLMo, COSMO Newsletter 3, Deutscher Wetterdienst (P.O. Box 100465, 63004 Offenbach, Germany, 2003).
11. T. Palmer, Predictability of Weather and Climate: From Theory to Practice - From Days to Decades, in Proceedings of the Tenth ECMWF Workshop on the Use of Parallel Processors in Meteorology, (Reading, UK, 2003).
12. R. Buizza and T. Palmer, The singular-vector structure of the atmospheric global circulation, J. Atmos. Sci. 52, 1434 (1995).
13. W. Mozheng, Z. Toth, R. Wobus, Y. Zhu, C. Bishop and X. Wang, Ensemble transform Kalman-filter based ensemble perturbations in an operational global prediction system at NCEP, Tellus 58A, 28 (2006).
14. A. Montani, C. Marsigli, F. Nerozzi, T. Paccagnella, S. Tibaldi and R. Buizza, The Soverato flood in Southern Italy: performance of global and limited-area ensemble forecasts, Nonlin. Proc. Geophys. 10, 261 (2003).
15. A. Montani, M. Capaldo, D. Cesari, C. Marsigli, U. Modigliani, F. Nerozzi, T. Paccagnella, P. Patruno and S. Tibaldi, Operational limited-area ensemble forecasts based on the Lokal Modell, ECMWF Newsletter Summer 2003 98, ECMWF (2003).
16. C. Marsigli, F. Boccanera, A. Montani and T. Paccagnella, The COSMO-LEPS ensemble system: validation of the methodology and verification, Nonlin. Proc. Geophys. 12, 527 (2005).
A NEW PARTITIONING APPROACH FOR ECMWF'S INTEGRATED FORECASTING SYSTEM (IFS)
GEORGE MOZDZYNSKI
European Centre for Medium-Range Weather Forecasts, Shinfield Park, Reading, Berkshire, RG2 9AX, U.K. E-mail: George.Mozdzynski@ecmwf.int
Since the mid-90s IFS has used a 2-dimensional scheme for partitioning grid point space to MPI tasks. While this scheme has served ECMWF well, there have nevertheless been some areas of concern, namely, communication overheads for IFS reduced grids at the poles to support the Semi-Lagrangian scheme, and the halo requirements needed to support the interpolation of fields between model and radiation grids. These issues have been addressed by the implementation of a new partitioning scheme called EQREGIONS, which is characterised by an increasing number of partitions in bands from the poles to the equator. The number of bands and the number of partitions in each particular band are derived so as to provide partitions of equal area and small 'diameter'. The EQREGIONS algorithm used in IFS is based on the work of Paul Leopardi, School of Mathematics, University of New South Wales, Sydney, Australia.
1. IFS parallelisation
The Integrated Forecasting System (IFS) at ECMWF consists of a large suite of software used primarily to produce a daily 10 day forecast. Key components of the IFS include the processing of observations, a 4D-Var analysis and a 10 day forecast model at T799L91 resolution, which corresponds to a grid spacing of approximately 25 km with 91 levels. The key parallelisation scheme for IFS was developed in the mid 1990's [1] [2]. This scheme consists of a series of data transpositions between grid-point space, Fourier space and spectral space, as shown in figure 1. With this approach, the complete data required is redistributed at these stages of a time step so that the computations between two consecutive transpositions can be performed without any interprocess communication.
Fig. 1. IFS model timestep showing data transpositions
Fig. 2. 2 dimensional partitioning of grid-point space to 256 MPI tasks (T799 model)
The focus of this paper is the distribution of data in grid-point space, where the bulk of the IFS computations take place in model physics and dynamics. This 2D distribution of grid-point space to 256 MPI tasks is shown in figure 2 using an Aitoff projection. Here each task is associated with a single partition. This projection allows us to easily see the main features of this distribution for the whole globe, with 16 N-S bands (or sets) and 16 E-W bands, where 16 is the square root of 256. For 512 tasks there is no exact square root, so the nearest factors of 512 are used, namely 32 N-S bands and 16 E-W bands, as shown in figure 4. The reason for having more N-S bands than E-W bands in this case is due to application performance. Finally, figure 5 shows a case for 1024 MPI tasks. The Semi-Lagrangian scheme in the IFS requires data from neighbouring tasks, the width of this halo being a function of the maximum possible wind speed and the time step. It is clear that the ideal shape for 2D partitions to achieve the smallest halo area is square-like. However, the partitions in the figures for 256, 512 and 1024 partitions are more rectangular than square-like, with a width to height ratio of 2:1 or 4:1, and further have a nasty 'cake' feature for those partitions at the poles. These polar partitions present a problem as they not only require a large halo area, but this halo area also requires communication with all partitions converging on the respective pole. Reducing the number of N-S bands and increasing the number of E-W bands would make partitions more square-like at the equator; however, this would only make matters worse at the poles with the increased communication. Some attempts to resolve the polar halo problems were made by using a different partitioning strategy for the first and last partitions (i.e. a polar cap). However, these developments only made a small difference to performance, and sometimes a negative one. What was required was a new approach for the partitioning of grid-point space. There are clearly many approaches to partitioning a sphere, a good example being [3], which uses constant area quadrangles as shown in figure 3. However, many of these approaches share little in common with the 2D IFS scheme and as a result would require a major effort (many person years) to incorporate into IFS.
Fig. 3. C. Lemaire / J.-C. Weill, March 23 2000, Partitioning the sphere with constant area quadrangles, 12th Canadian Conference on Computational Geometry
Fig. 4. 2 dimensional partitioning of grid-point space to 512 MPI tasks (T799 model)
Fig. 5. 2 dimensional partitioning of grid-point space to 1024 MPI tasks (T799 model)
Fig. 6. Partitioning of a sphere into 33 regions using the EQREGIONS MatLab package.
2. EQREGIONS
The EQREGIONS partitioning scheme [4] appeared attractive from the beginning, as can be seen in figure 6 for a sphere with 33 partitions. This practical algorithm came with a MatLab package titled 'Recursive Zonal Equal Area Sphere Partitioning Toolbox', which can be found at http://eqsp.sourceforge.net/. As a comparison, IFS 2D partitioning would need 11 bands N-S and 3 bands E-W for this number of partitions. The reason why EQREGIONS was attractive is the similarity between the IFS 2D bands and what EQREGIONS calls collars - the partitioning of a sphere resulting in 2 polar caps plus a number of collars with increasing partition counts as we approach the equator. The algorithm and its mathematical proof are described in great detail in [4]. This algorithm results in partitions of equal area and small 'diameter'. However, this would not be sufficient for an IFS implementation, as the density of grid-points on the globe varies with latitude, the greatest density being at the poles and the least density at the equator. This imbalance has been measured at 13% for a T799 model with 512 partitions when using the EQREGIONS algorithm to provide the bounds information (start/end latitude, start/end longitude) for each partition, as shown in figure 7.
Fig. 7. Number of grid-points per partition for T799 resolution using EQREGIONS 'area' partitioning for 512 tasks (horizontal axis: partition index).
The solution to this imbalance issue was to use the EQREGIONS algorithm to provide only the band information, i.e. the number of N-S bands and the number of partitions per band. The IFS partitioning code then uses this information in a similar way to that used for 2D partitioning, resulting in an equal number of grid-points per partition. With this approach there was only ONE new data structure (a single dimensioned array) used to store the number of partitions in each band. The results for this balanced EQREGIONS implementation can be seen in figure 8 for 256 partitions, figure 9 for 512 partitions and figure 10 for 1024 partitions. The characteristic features of this partitioning approach are square-like partitions for most of the globe and polar caps, together with a significant improvement in the convergence at the poles.
Fig. 8. EQREGIONS partitioning of grid-point space to 256 MPI tasks (T799 model)
From a code point of view the differences between the old 2D partitioning and the new EQREGIONS partitioning are relatively simple. For the 2D scheme, there were loops such as,
DO JB=1,NPRGPEW
  DO JA=1,NPRGPNS
  ENDDO
ENDDO

where NPRGPEW and NPRGPNS are the number of EW and NS bands (or sets). For EQREGIONS partitioning these loops were simply transformed into,
DO JA=1,N_REGIONS_NS
  DO JB=1,N_REGIONS(JA)
  ENDDO
ENDDO

where N_REGIONS_NS is the number of N-S EQREGIONS bands, and N_REGIONS(:) is an array containing the number of partitions for each band.
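A sketch of the balancing step described above is given below. It is not the IFS partitioning code; it only illustrates how, once the EQREGIONS algorithm has supplied the number of partitions per band, the grid points can be dealt out so that every partition receives an (almost) equal share:

! Sketch (not the IFS code) of the "balanced EQREGIONS" idea: the EQREGIONS
! algorithm only supplies the band structure (n_regions(ja) = number of
! partitions in N-S band ja); the grid points, walked through in N-S order,
! are then dealt out so that every partition owns an equal number of points.
! Here we compute the first and last global grid-point index owned by each
! band.
subroutine balanced_band_bounds(nbands, n_regions, ntotal, band_first, band_last)
  implicit none
  integer, intent(in)  :: nbands
  integer, intent(in)  :: n_regions(nbands)   ! partitions per band (EQREGIONS)
  integer, intent(in)  :: ntotal              ! total grid points on the globe
  integer, intent(out) :: band_first(nbands), band_last(nbands)
  integer :: nparts, ja, assigned, nown

  nparts   = sum(n_regions)                   ! total number of partitions
  assigned = 0
  do ja = 1, nbands
    ! each band owns a share proportional to its number of partitions,
    ! rounded so that the global total is preserved
    nown = nint(real(sum(n_regions(1:ja)), 8) / real(nparts, 8) * ntotal) - assigned
    band_first(ja) = assigned + 1
    band_last(ja)  = assigned + nown
    assigned       = assigned + nown
    ! inside the band, the nown points would then be split equally among
    ! the n_regions(ja) partitions, walking west to east along latitudes
  end do
end subroutine balanced_band_bounds

Splitting each band's share equally among its N_REGIONS(JA) partitions, walking west to east along the latitudes, then gives every task approximately the same number of grid points, which is the balanced behaviour reported above.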
Fig. 9. EQREGIONS partitioning of grid-point space to 512 MPI tasks (T799 model)
Fig. 10. EQREGIONS partitioning of grid-point space to 1024 MPI tasks (T799 model)
In total some 100 IFS routines were modified with such transformations. It should be noted that the above loop transformation supports both 2D and EQREGIONS partitioning: to use 2D partitioning, a single namelist variable is simply set to LEQREGIONS=F, which results in the following initialisation,
N_REGIONS_NS=NPRGPNS
N_REGIONS(:)=NPRGPEW

3. Radiation Grid
Radiation computations in the IFS (as in other models) are very expensive, and to reduce their relative cost we both reduce the frequency of such computations (i.e. every hour for a T799 model) and run these computations on a coarser grid than the model grid. This coarse grid was initially a sampled model grid [5], but more recently a reduced (linear) grid was used, giving a small improvement in meteorological skill. As an example, for a T799 model grid (843,490 grid points) the corresponding radiation grid would be T399 (213,988 grid points), with an approximate ratio of 4:1. Of course, to use this reduced radiation grid we must interpolate the data required for the radiation computations from the model grid, and after such computations interpolate data back to the model grid from the radiation grid, in both cases using 12 point bi-dimensional interpolations. Two approaches are possible for such interpolation. The first approach would be to send each field to a separate task using a global gather operation, perform the interpolation and then return the data using a global scatter operation. This approach has the disadvantage that global operations are relatively expensive on most computers, and further that tasks are left idle when the number of fields (variables x levels) to interpolate is less than the number of tasks being used. The second approach, the one used in IFS, was to use a halo for the data required from neighbouring tasks and perform the interpolations locally. This approach was assumed to be better as all tasks are used for the interpolation and only local communication is used. The second approach also had an advantage from a code maintenance viewpoint, as the routines needed for supporting the halos already existed for the Semi-Lagrangian scheme and could easily be used for the purpose of radiation interpolation. However, there was a downside to this second approach. The halos required for the above interpolations were much larger than one would expect, particularly for large numbers of tasks. It was only after the geographic position of some partitions that required large halos was plotted that the problem was understood. Figures 11 and 12 show the model and radiation partitions (using 2D partitioning) for tasks 201 (Africa partition) and 11 (North pole partition) of 512, for a T799 model and T399 radiation grid. One would have expected partitions for two different grids to be at the same geographic position, as they use the same partitioning code. The reason this is not the case stems from the definition of these two grids: one is clearly not a projection of the other. Differences between these two grids are due to the requirements for equal spacing of grid points in each grid (with the exception of polar latitudes), and also due to the need for the number of points on the same line of latitude to be divisible by factors 2, 3 and 5 - a requirement for the Fourier transforms (see figure 1). It is now interesting to see the effect of using EQREGIONS partitioning and how it addresses the above problems. To make a fair comparison, tasks 220 and 4 were chosen, which by comparison had relatively large halos but were also located in comparable geographic positions (Africa and Polar) to those above. This can be seen in figures 13 and 14, respectively. The conclusion of this comparison is that EQREGIONS partitioning requires smaller halos than 2D partitioning for the purpose of interpolation for the same number of tasks. This translates into less data being communicated and therefore improved performance. To take this further, figure 15 shows graphically the halo area (total halo grid points including the partition's own grid points) for all tasks for the purpose of radiation grid interpolation. The top two lines show the halo area required for interpolating from the model to the radiation grid and the bottom two lines for interpolating from the radiation to the model grid. In both cases it can clearly be seen that EQREGIONS partitioning results in smaller halos when compared with 2D partitioning.
Fig. 11. 2D partitioning - model (blue) and radiation (red) partitions for task 201 (Africa partition) of 512 tasks for a T799 model/T399 radiation grid
Fig. 12. 2D partitioning - model (blue) and radiation (red) partitions for task 11 (Polar partition) of 512 tasks for a T799 model/T399 radiation grid
Fig. 13. EQREGIONS partitioning - model (blue) and radiation (red) partitions for task 220 (Africa partition) of 512 tasks for a T799 model/T399 radiation grid
Fig. 14. EQREGIONS partitioning - model (blue) and radiation (red) partitions for task 4 (Polar partition) of 512 tasks for a T799 model/T399 radiation grid
Fig. 15. Comparing EQREGIONS and 2D partitioning requirement for Halo Area (total halo grid points including partition's own grid points) for radiation grid interpolation (T799 model/T399 radiation grid).
4. 4D-Var
Besides the model and radiation grids presented earlier, there is another grid used in 4D-Var analyses, as part of the Jb wavelet scheme [6] [7] within the minimisation (ifsmin) steps of a 4D-Var cycle. The grid used in this scheme is a 'full grid', where all latitudes contain the same number of points. For the Jb wavelet scheme many such grids of increasing resolution are used (the wavelet scales), from some minimum truncation (the default being T21) up to the resolution of the minimisation step. As an example, for a T255 minimisation step, wavelet grids are used at T255, T213, T159, T127, T95, T63, T42, T30 and T21. This can present a problem when scaling to large numbers of tasks, as the lowest truncation T21 only has 2048 grid points for a full grid (32 latitudes each with 64 points). Of course the performance of the Jb wavelet code is dominated by the higher wavelet scales, so one shouldn't pay too much attention to the low resolutions. Unfortunately, when testing EQREGIONS partitioning it was exactly the lowest resolution that presented a problem, the problem being a constraint of the IFS partitioning code that required that a line of latitude could not be split more than once between bands (or A-sets). For 2D partitioning this was not a problem. After some investigation and discussion with the author of the Jb wavelet scheme, a solution was found. The solution was simply to use reduced grids (preferably linear grids) instead of full grids. It was decided to implement this unconditionally, as it was realised that the overall performance of a 4D-Var cycle improved by about 2%, independent of whether 2D partitioning or EQREGIONS partitioning was used. The reason for this performance improvement was obvious: about 30% fewer grid points were used compared with full grids in the above scheme. The wavelet scales used were preset in some large data files, so it was not practical in this implementation to change these. Further, some of these scales were not linear grids (e.g. T213 is a reduced grid). In the near future it is planned to reset the wavelet scales so that all have corresponding linear grids. This is expected to further improve the overall performance of 4D-Var by an additional 1%.
5. Performance
In the following table the performance of a T799 forecast model and a 4D-Var analysis are compared when using 2D and EQREGIONS partitioning. The advantage of using EQREGIONS was measured at 3.9% for the forecast model and 2.7% for the 4D-Var analysis. For 4D-Var, the 2% improvement gained by using reduced grids for the Jb wavelet grids discussed in the
previous section is NOT included in the latter 2.7%.

Application    tasks x threads    2D partitioning (seconds)    EQREGIONS partitioning (seconds)    2D / EQREGIONS speedup
model          512 x 2            3648                         3512                                1.039
4D-Var         96 x 8             3563                         3468                                1.027

While the overall performance advantage of using EQREGIONS may only be a few percent, one should bear in mind that IFS applications have had many years of code optimisation, and finding even a few percent is increasingly rare. 'Rich pickings' are often found in new code, but less so with code that has been around a long time. By inspecting some low level timers in the IFS, it can be seen that some areas of code are now running faster due to a combination of reduced halo sizes used for the Semi-Lagrangian scheme and radiation grid interpolation, and the associated reduction in memory use (always good for performance). However, it can also be seen that there is an increase in the TRGTOL and TRLTOG communications used for the transposition of data between grid point space and Fourier space (see figure 1 earlier in this paper). This increased communication in TRGTOL/TRLTOG is due to the fact that the distribution of latitudes in Fourier space is a very close match to the N-S bands when using 2D partitioning. The transposition to Fourier space from grid point space also requires complete latitudes, so it is the levels dimension that is distributed in the 2nd dimension. For systems with many CPUs per node (such as ECMWF's IBM p5-575 clusters) a relatively large part of this TRGTOL/TRLTOG communication can be performed within each node (when using 2D partitioning). As we know that intra-node communication via memory is faster than inter-node communication via the Federation switch, this seems a reasonable explanation of why 2D partitioning is better for this particular communication phase. This scenario naturally changes when truly large numbers of tasks (1000's) or thinner nodes are used. In both these cases, most of the data in such transpositions (if not all transpositions) will need to be communicated via the system's switch, with further gains for EQREGIONS partitioning over 2D. Time will tell.
6. Acknowledgements
I would like to thank my colleagues Mike Fisher and Mariano Hortal for their kind support and advice. Also, I would like to thank Paul Leopardi [4] for his excellent work and paper, upon which the IFS EQREGIONS implementation is based.
References
1. S. Barros, D. Dent, L. Isaksen, G. Robinson, G. Mozdzynski and F. Wollenweber, Parallel Computing 21, 1621 (1995).
2. L. Isaksen and M. Hamrud, ECMWF Operational Forecasting on a Distributed Memory Platform: Analysis, in Proceedings of the Seventh ECMWF Workshop on the Use of Parallel Processors in Meteorology, (World Scientific, November 1996).
3. C. Lemaire and J. Weill, Partitioning the sphere with constant area quadrangles, 12th Canadian Conference on Computational Geometry (March 2000), citeseer.ist.psu.edu/lemaire00partitioning.html.
4. P. Leopardi, A partition of the unit sphere into regions of equal area and small diameter, Electronic Transactions on Numerical Analysis 25, 309 (2006).
5. D. Dent and G. Mozdzynski, ECMWF Operational Forecasting on a Distributed Memory Platform: Forecast Model, in Proceedings of the Seventh ECMWF Workshop on the Use of Parallel Processors in Meteorology, (World Scientific, November 1996).
6. M. Fisher, Background error covariance modelling, in Seminar on Recent developments in data assimilation for atmosphere and ocean, September 2003, http://www.ecmwf.int/publications/library/ecpublications/-pdf/seminar/2003/sem2003-fisher.pdf.
7. M. Fisher, Generalized frames on the sphere, with application to the background error covariance modelling, in Seminar on Recent developments in numerical methods for atmospheric and ocean modelling, September 2004, http://www.ecmwf.int/publications/library/ecpublications/-pdf/seminar/2004/sem2004-fisher.pdf.
WHAT SMT CAN DO FOR YOU

JOHN HAGUE
IBM Consultant
The Power5+ system installed at ECMWF is, in many ways, business as usual. As a follow-on from the Power4+ system, the user is not likely to experience many differences. However, under the covers there are a number of improvements that affect performance; in particular, Simultaneous Multi Threading (SMT).
The Need for SMT

Moore's law indicates a growth in computing power by a factor of 2 every 18 months. This generally applies to the peak performance of systems, but not necessarily to sustained performance. ECMWF (the European Centre for Medium-Range Weather Forecasts) run a large parallel program called IFS (Integrated Forecast System). This program has experienced a slightly lower rate of growth in its sustained performance (5 times every 4 years) than would be predicted by Moore's law (6.3 times every 4 years). Fig 1 (courtesy of ECMWF) shows the historical and predicted rates of growth. Currently ECMWF have 2 clusters of IBM 1.9 GHz 16-core Power5+ nodes. Each cluster has 140 compute nodes, giving a total peak performance of 34 TFLOPS. All nodes are connected with 2 links to IBM's High Performance (Federation) Switch. Running on the Power5+ without SMT, IFS's 10-day forecast ran at about 12% of peak. In order to get as close to Moore's law as possible there is a need for increased sophistication in both hardware and software. SMT (Simultaneous Multi Threading) is one of the techniques attempting to achieve this. For large parallel programs using large numbers of processors, the main reasons for not keeping up with Moore's law are typically:
- CPU performance increases much faster than memory access rates
- the floating-point pipeline length increases (for scalar processors)
- an increased number of CPUs produces more inter-CPU communications
Fig 1: Growth in Sustained Performance at ECMWF (sustained performance on a logarithmic scale against year, 1997-2011, with curves for growth of 5 times every 4 years and for Moore's law)
SMT does not address the last item in the list, but can potentially help to overcome hold-ups due to memory access and the length of the floating-point pipes (6 to 7 cycles on the Power5+). So what are the main hold-ups for ECMWF's IFS? Statistics were collected using the hardware monitor, and the results of a CPI (Cycles Per Instruction) analysis for the whole program are shown in Fig 2.

  MAIN                          count      ratio to cycles
    Cycles          1,049,275,053,531
    Groups            371,422,635,466      0.354
    GCT                16,540,437,851      0.016
    Stalls            661,311,980,214      0.630

  Stall breakdown               count      ratio to cycles
    LSU               194,682,702,732      0.186
    (unlabelled)       65,549,096,175      0.062
    FPU               340,351,526,592      0.324
    Other              60,728,654,715      0.058

Fig 2: CPI analysis for IFS on Power5+
Instructions complete in groups, and the CPI analysis shows why the groups are held up ("Stalls" in the figure). The LSU stalls are due to memory (or cache) access delays, and the FPU stalls are due to floating-point pipeline delays. As can be seen, it is the latter that dominate.
What is SMT

SMT enables multiple (currently two) threads to run on a single CPU. The key feature is that instructions are dispatched at the hardware level with minimal overhead. An instruction from the 2nd thread can be dispatched while the 1st thread is waiting for a load instruction or FP pipe operation to provide data. The upside is that you can get up to 2 times performance improvement. The downside is that you may get cache thrashing (as threads have to share the cache), or, more seriously, you may get paging (since threads have to share the available memory). However, we have found that, at least with IFS, doubling the number of threads with OpenMP only slightly increases the memory requirement. Fig 3 shows a simplified view of SMT operation on the Power5+. A much fuller description is given in IEEE Micro, March-April 2004: Kalla, Sinharoy and Tendler, IBM Power5 Chip: A Dual-Core Multithreaded Processor.
Fig 3: Simplified Power5 schematic of SMT (two program counters feed a shared instruction cache; up to 8 instructions per cycle enter per-thread instruction buffers; groups of up to 5 instructions pass through shared issue queues and 120 rename registers to the branch, floating-point and other out-of-order execution units, with a 20-entry global completion table tracking group completion)
There are two separate Program Counters (one for each of the 2 threads sharing the same physical CPU). Instructions are alternately selected and placed in a shared Instruction Cache. Each cycle, up to 8 instructions from the same thread are put into one of the Instruction Buffers. Thread Select selects 5 instructions from the same thread to form a group for the Issue Queues. Each group takes one entry in a shared 20-slot Global Completion Table (GCT). The Issue Queues (shared between threads) allocate the shared Rename Registers (120 FP and 120 GP). The Execution Units can execute up to 8 instructions, out of order, every cycle. Groups complete in order for each thread (one from each thread per cycle). In addition, Dynamic Resource Balancing aims to prevent one thread from hogging the Issue Queues. It throttles a thread if it has too many L2 misses or GCT entries by reducing the thread's priority, inhibiting the thread's instruction decoding, and flushing the thread's instructions waiting for dispatch. There is also an Adjustable Thread Priority mechanism which has software-controlled priority levels. It is implemented by controlling the instruction decoding cycles. Priority is decreased if the thread is in an idle loop waiting for work, or a spin loop waiting for a lock. Priority is increased for a real-time task.
SMT Implementation

Each Power5+ node has 8 dual-core chips, i.e. 16 physical CPUs. AIX "sees" 32 CPUs and can allocate two threads to a physical CPU. Parallel programs can use MPI or OpenMP to double the number of threads. In order not to double the memory requirements it may be better to use OpenMP rather than double the number of MPI tasks. Programs may be expected to benefit from SMT if the CPU pipes are not fully utilised, memory bandwidth is not fully utilised, and the program is scalable. Matrix multiply, for example, fully uses the pipes and does not benefit from SMT.
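As a hedged illustration (not code from IFS or any ECMWF application), the sketch below shows why doubling the OpenMP thread count to exploit SMT adds little memory: the work arrays are shared between threads, and only the thread count, set for example through the standard OMP_NUM_THREADS environment variable, changes between a 16-thread and a 32-thread run on a 16-CPU node.

  ! Illustrative only: a shared-memory OpenMP loop whose thread count
  ! can be doubled for SMT (e.g. OMP_NUM_THREADS=32 instead of 16 on a
  ! 16-CPU node) without duplicating data, since a and b are shared.
  PROGRAM smt_openmp_sketch
    IMPLICIT NONE
    INTEGER, PARAMETER :: n = 1000000
    REAL(8) :: a(n), b(n)
    INTEGER :: i
    b = 1.0d0
  !$OMP PARALLEL DO PRIVATE(i) SHARED(a,b)
    DO i = 1, n
       a(i) = 2.0d0*b(i) + 1.0d0
    END DO
  !$OMP END PARALLEL DO
    PRINT *, 'a(1) =', a(1)
  END PROGRAM smt_openmp_sketch

Doubling the number of MPI tasks instead would replicate each task's private data, which is why the text above suggests OpenMP when memory is the constraint.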
Fig 4 illustrates how the scalability of parallel programs using SMT affects the speedup. SMT is only effective if the program is reasonably scalable.
Fig 4: SMT for parallel jobs. If one copy of a program takes time T, then with an SMT factor S two copies of the program take 2*T/S, and if the scalability factor for doubling the number of threads is f, the double-threaded program takes 2*T/f/S. For IFS T799 on 1200 CPUs, S = 1.35 and f = 1.8, so the speedup due to SMT is S*f/2 = 1.35*1.8/2 = 1.22.
Binding

It has been found that in order to get the best results, or indeed even to get repeatable results, it is necessary to bind a parallel application's threads to specific physical CPUs. Physical CPUs are typically correlated to AIX "logical" CPUs according to:

  Logical:   0,16   1,17   2,18   ...   15,31
  Physical:    0      1      2    ...     15
Generally, when ECMWF run a large parallel program, they use whole nodes, so we do not have to worry about the interaction of different applications on the same node. If SMT is not required, allocations (to logical CPUs) should be 0 1 2 etc. This ensures that no more than one thread is allocated to a physical CPU. If SMT is required, allocations should be 0 16 1 17 etc.
Normally, we use a heterogeneous combination of MPI and OpenMP. With 8-way OpenMP, the 1st MPI task should use CPUs 0 16 1 17 2 18 3 19, and the 2nd task should use 4 20 5 21 6 22 7 23, etc. If, as is sometimes the case with IFS, there is not enough work for all OpenMP threads to get involved, it may be best to specify CPUs 0 1 2 3 16 17 18 19 for the 1st task, and 4 5 6 7 20 21 22 23 for the 2nd task, etc. Thus, if there is only enough work for 4 OpenMP threads, they will all use different physical CPUs. Memory affinity (MEMORY_AFFINITY=MCM) should also be used, so that memory is allocated on the same chip as the CPU.
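The CPU lists above follow a simple arithmetic pattern. The sketch below (illustrative only, not an ECMWF or IBM utility; the task and thread counts are assumed example values) prints, for each MPI task on a 16-CPU node exposed as 32 AIX logical CPUs, the logical CPUs its OpenMP threads would be bound to for the SMT and non-SMT layouts described in the text; the binding itself would still be applied through the site's usual launcher or AIX binding facilities.

  ! Illustrative sketch: derive per-task logical-CPU lists for a node
  ! with 16 physical CPUs exposed by AIX as 32 logical CPUs
  ! (logical n and n+16 share physical CPU n).
  PROGRAM binding_sketch
    IMPLICIT NONE
    INTEGER, PARAMETER :: nphys = 16
    INTEGER :: ntask, nthread, itask, j, phys
    LOGICAL :: smt

    ntask   = 4          ! MPI tasks per node    (assumed example values)
    nthread = 8          ! OpenMP threads per task
    smt     = .TRUE.

    DO itask = 0, ntask - 1
       WRITE(*,'(A,I2,A)', ADVANCE='NO') 'task ', itask, ': '
       DO j = 0, nthread - 1
          IF (smt) THEN
             phys = itask*(nthread/2) + j/2            ! two threads per physical CPU
             IF (MOD(j,2) == 0) THEN
                WRITE(*,'(I3)', ADVANCE='NO') phys            ! first SMT thread
             ELSE
                WRITE(*,'(I3)', ADVANCE='NO') phys + nphys    ! second SMT thread
             END IF
          ELSE
             ! one thread per physical CPU (only sensible if ntask*nthread <= nphys)
             WRITE(*,'(I3)', ADVANCE='NO') itask*nthread + j
          END IF
       END DO
       WRITE(*,*)
    END DO
  END PROGRAM binding_sketch

With ntask = 4, nthread = 8 and smt = .TRUE. this reproduces the lists quoted above (task 0: 0 16 1 17 2 18 3 19, task 1: 4 20 5 21 6 22 7 23, and so on).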
Measurements

Fig 5 shows the GFLOPS that can be achieved on the Power5+ using a "pathological" test case. The code, which was of course inside a loop, was run with 16 OpenMP threads (so as not to use SMT) and 32 threads (to use SMT). Since bit reproducibility was required, the compiler could not automatically perform optimisations involving partial sums. Without SMT and without code optimisation, each iteration of the code required the whole 6 cycles needed to pass through the FP pipeline. This gave a maximum performance of 16*1.9/6 = 5 GFLOPS. If SMT is used, two threads can share the same CPU so that both FP pipes can be used and the GFLOP rate is doubled. If the code is unrolled by a factor of 2, both pipes can be used without SMT, which doubles the GFLOP rate; this is doubled again if SMT is used. If the code is unrolled by a factor of 10 then multiple instructions can proceed through the pipes at the same time and the GFLOP rate is further enhanced.
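The kind of loop behind Fig 5 can be sketched as follows (an illustrative reconstruction, not the actual benchmark code): a single dependent accumulator allows only one add in the FP pipeline at a time, while a 2-way unrolled variant with partial sums keeps both pipes busy; because the partial sums change the order of the additions, a compiler must not introduce them automatically when bit reproducibility is required.

  ! Illustrative reconstruction of a dependent-accumulation loop and a
  ! 2-way unrolled variant with partial sums (the unrolled form changes
  ! the rounding order, so a compiler must not apply it automatically
  ! when bit reproducibility is required).
  SUBROUTINE accumulate(x, n, s_dep, s_unr)
    IMPLICIT NONE
    INTEGER, INTENT(IN)  :: n
    REAL(8), INTENT(IN)  :: x(n)
    REAL(8), INTENT(OUT) :: s_dep, s_unr
    REAL(8) :: s1, s2
    INTEGER :: i

    ! Dependent version: each add must wait ~6 cycles for the previous
    ! one to leave the FP pipeline.
    s_dep = 0.0d0
    DO i = 1, n
       s_dep = s_dep + x(i)*x(i)
    END DO

    ! Unrolled by 2 with partial sums: two independent accumulators can
    ! occupy both FP pipes (assumes n is even, for brevity).
    s1 = 0.0d0
    s2 = 0.0d0
    DO i = 1, n, 2
       s1 = s1 + x(i)*x(i)
       s2 = s2 + x(i+1)*x(i+1)
    END DO
    s_unr = s1 + s2
  END SUBROUTINE accumulate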
Fig 5: SMT benefit on Power5+
The last two entries in the table show the effect of restrictions due to memory access. In the last case, performance is constrained by memory access both with and without SMT. ECMWF run the T399 resolution of IFS for the Ensemble Prediction Suite. Fig 6 shows the benefits of SMT, binding, and Memory Affinity for T399.
Fig 6: T399 Results
The conclusions from these measurements are that, for T399:
- MEMORY_AFFINITY is worth a percent or two
- binding is worth a few percent, particularly without SMT
- SMT is worth about 20%
These conclusions will be different for different applications, of course. ECMWF run a 4D-Var version of IFS for their data assimilation. Measurements showed an "interesting" variation with time, which illustrates the difficulties that can be encountered in getting meaningful measurements. Each run of 4D-Var used 70 nodes and took about 40 minutes. More than 50 runs were made, one after the other. Fig 7 shows the communication time in the second minimisation step of 4D-Var as a function of run number. The time gradually increased until the system was rebooted, when it dramatically decreased (run 32). The time then gradually increased again when monitoring was restarted (run 47). A very steep increase in time occurred when monitoring on each node was specifically separated by one second (run 53). The explanation for this is shown in Fig 8. The monitoring daemons running on each of the 70 nodes cause a small amount of time (dt) to be added to each of the CPU times. Initially, after a reboot, these times are synchronised, and so the total delay is only dt. After a few hours the daemons are no longer synchronised, so the dt delay is incurred by each MPI task in a different part of the program. All tasks then suffer an extra dt delay at the next communication (i.e. synchronisation) point. So for N nodes the total delay is now N*dt. Once the monitoring was synchronised, and results were repeatable, it was possible to measure the effect of threads, SMT, and binding on the 2nd minimisation step of the data assimilation component of IFS (4D-Var). It was run on 70 nodes, varying the number of threads and MPI tasks so that all CPUs were used. Results are shown in Fig 9. Results were best for 2 threads without SMT and 4 threads with SMT. Both binding and SMT gave performance improvements of about 10%. Similar measurements were made with the UKMO's Unified Model on 8 nodes. The number of MPI tasks and threads was varied to use all 128 CPUs. Fig 10 shows the effect of threads, SMT, and binding.
175
0
20
10
40
30
50
60
Run number
Fig 7: ECMWF’s 4D-Var minl communication time as a function of run number
Fig 8: Effect of monitoring. Initially the monitoring daemons on the N tasks are synchronised, so every minute each CPU loses only a single delay dt; after several hours the daemons drift apart, so every minute each CPU still loses dt but the communications suffer a combined delay of N*dt.
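The arithmetic behind Fig 8 can be made concrete with a small sketch (the per-minute daemon cost dt is an invented value; only the node count and run length come from the text above):

  ! Illustrative model of the monitoring interference: with synchronised
  ! daemons each one-minute interval loses a single dt, whereas once the
  ! daemons drift apart every synchronising communication accumulates
  ! roughly N*dt for N nodes.
  PROGRAM monitoring_delay_sketch
    IMPLICIT NONE
    INTEGER, PARAMETER :: nnodes = 70            ! nodes used by 4D-Var
    REAL(8), PARAMETER :: dt = 0.005d0           ! assumed daemon cost per minute (s)
    REAL(8), PARAMETER :: run_minutes = 40.0d0   ! run length from the text
    PRINT *, 'synchronised   loss per run (s):', run_minutes*dt
    PRINT *, 'unsynchronised loss per run (s):', run_minutes*nnodes*dt
  END PROGRAM monitoring_delay_sketch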
Fig 9: IFS 4D-Var 2nd minimisation: SMT and binding v threads (plotted against threads per MPI task for no-SMT, or (threads per MPI task)/2 for SMT)
Fig 10: UKMO Unified Model: SMT and binding v threads (plotted against threads per MPI task, 0 to 5)
Results were best for 1 thread without SMT, and 2 threads with SMT. Binding and SMT both gave about a 20% improvement. OpenMP threads were not as effective for UM as for IFS because I had only spent a month or so adding OpenMP, whereas ECMWF have spent the past few years adding OpenMP to IFS.
Conclusions

The following conclusions may be slightly tongue in cheek, but are well worth paying attention to. In addition to SMT, it is important to implement binding and to ensure that monitoring, or a similar type of interference, is not degrading performance.
- Monitoring synchronisation: you might be losing 5% performance without this (see Figs 7 and 9).
- Binding: you could be losing 2% to 20% without binding (see Figs 6, 9, and 10).
- SMT: we were seeing 10% to 20% performance improvement with SMT (see Figs 6, 9, and 10).
In the extreme case, not paying attention to the above could be depriving an installation of a 50% performance improvement!
Acknowledgements

I should like to acknowledge the work done by Jason Taylor and Will Weir of IBM, and Matthias Nethe and Oliver Treiber of ECMWF in setting up the system to enable me to make these measurements.
EFFICIENT COUPLING OF ITERATIVE MODELS*

R.W. FORD and G.D. RILEY and C.W. ARMSTRONG
Centre for Novel Computing, School of Computer Science, The University of Manchester, U.K.
E-mail: [rupert.ford, graham.riley, chris.armstrong]@manchester.ac.uk
This paper presents a coupling strategy for iterative models that provides users with flexibility in the way a coupled model can be run on the target software and hardware resources with no run-time performance overhead when compared with hand crafted solutions. The approach, which has been implemented in the BFG2 coupling framework generator, allows models to communicate efficiently with each other when they are run concurrently within a coupled model, when they are run in-sequence within a coupled model, or when they are run in some combination. In all of these cases there is no change required to the model code itself. Whilst there are existing coupling systems that can produce efficient concurrent coupled models, they are not able to also produce efficient in-sequence coupled models. To demonstrate the ability of our approach to produce efficient in-sequence coupled models, results for the GENIE Earth System model are presented; these results demonstrate that the resulting coupled model is as efficient as the existing handcrafted solution and further that this solution is more efficient than the standard concurrent approach.
1. Introduction
Coupled model simulations are built from a collection of separate intercommunicating single models. Coupled models are used extensively in Earth System Modelling (ESM) and increasingly in Numerical Weather Prediction (NWP), where single models simulate systems such as atmosphere, ocean, etc. In the ESM and NWP domain the simulations typically iterate forward in time from some initial conditions. The approach described in this paper produces efficient coupled models within which models may run concurrently, in-sequence, or in any combination of the two^a.

* This work was primarily funded by NERC as part of the GENIEfy project and is strongly influenced by a Met Office funded FLUME consultancy.
^a Scientists need to take due care that, when exploiting concurrency, the results computed remain valid.
We define a model to be concurrent with respect to another model when the dependencies between models within a coupled model composition allow them to be run with at least some overlap. To exploit this concurrency, in order to reduce execution time, these models can be deployed with separate sets of 'threads'. Note, the term thread is used here in a generic way, simply meaning a separate flow of control of execution; the intent is to be agnostic on the actual way in which a thread is implemented, for example whether as MPI [6] 'processes' or POSIX [2] 'threads'. Please also note that the phrase 'sets of threads' has been used since individual models may themselves exploit parallelism, in which case a 'set' of threads is used to compute a particular model. We define a model to be in-sequence with another model when the dependencies between these models in a coupled model composition determine that they must be computed one after the other. In this case the most efficient use of resources is to deploy the coupled model so that the same set of threads is used to compute each model in turn. While it is natural to consider deploying concurrent models with a separate set of threads each, it may also be possible to deploy them using a single set of threads (used for each model in turn), although clearly the concurrency will not be exploited in this case. Similarly, while it is natural to consider deploying in-sequence models with the same set of threads, it is possible to deploy them with separate sets of threads (in a distributed computing fashion); however, this will result in each set of threads being idle for some time, as only one model is executing at any time. As an illustration of the above definitions and concepts, Figure 1 depicts two iterative models, an atmosphere model A and an ocean model O, which together form a coupled model. For simplicity, assume that these models run at the same rate (that is, they are called from a control loop at the same rate) and communicate with each other every time they are called, i.e. on each timestep. Figure 1a shows these two models in a composition where they run in-sequence with respect to each other and are sharing the same set of threads. Figure 1b shows these two models in the same composition as Figure 1a but with each model using a different set of threads; notice these threads are idle for a significant amount of time. Figure 1c shows the atmosphere and ocean models in a different composition where they run concurrently with respect to each other and are using a different set of threads. Finally, Figure 1d shows these two models in the same composition as Figure 1c but with the models using the same set of threads; the concurrency between these models is therefore lost.
Figure 1. Concurrent and in-sequence models with associated threading (key: an atmosphere model, an ocean model, coupling data transfer, a set of threads)
Note that the schedule (model calling order) presented in Figure 1d is one of a number of valid schedules that could be chosen. For models that are deployed concurrently (i.e. with separate sets of threads) within a coupled model, each model (or some appropriate data layer) must maintain a separate state space for the data that the models communicate (the coupling data). This separate state is necessary to avoid the problem of one model overwriting data that another model requires. Data can be transferred between these separate state spaces via appropriate and timely copying. The requirement for data copying between separate state spaces naturally and efficiently maps to the use of message passing to communicate between models. A number of successful coupling systems, including the one described here, have been implemented using the message passing approach. To support this approach such systems define a model api which uses in-place calls similar to MPI's mpi_send and mpi_recv. We use the term in-place to describe this form of communications api, as, within a model code, data can be provided to other models from the point where it is created and data can be requested from other models where it is needed. The explicit message passing approach can also be used for models that are deployed in-sequence. In fact, this approach represents the current state-of-the-art for ESM coupling systems that support both concurrent models and in-sequence models (and any combination of the two) within a coupled model deployment.
However, it is possible to implement a more efficient solution (in both execution time and memory use) than the one described above when models are run in-sequence. When models are run in-sequence using the same thread, or set of threads, the most efficient way to communicate is to share the same state. Sharing state is possible as only one model is active at any one time and because the same threads are used to compute each of the models. A natural way to implement this shared-state solution in a procedural language is to communicate via argument passing. This approach is typically used in hand-crafted coupling solutions, for example within the GENIE framework [5] and within the Met Office's Unified Model (UM) [1] (for their physics routines, for example). The benefit of sharing state when compared to using separate state (the concurrent solution) is twofold: (1) there is a reduction in memory requirements, as only one copy of any coupling data is required; (2) there is no need for any data copying (in the concurrent approach the data needs to be transferred between the separate instances of the coupling data state spaces).
PROGRAM COUPLED
  DECLARE A and B
  INITIALISE A and B
  FOR EACH model timestep
    CALL ATMOS(A,B)
    CALL OCEAN(A,B)
  END FOR
END PROGRAM COUPLED

Figure 2. Pseudocode structure of hand-crafted in-sequence solution
Figure 2 illustrates the structure of the GENIE and UM hand-crafted in-sequence solutions. As in the previous example we have an atmosphere model and an ocean model and for simplicity we assume that the models are called at the same rate and communicate on each iteration. To further simplify matters we assume that the atmosphere and ocean models are invoked via a single subroutine call (ATMOS and OCEAN respectively). However, in general there may be a number of subroutine calls associated with a model. The figure presents what is often termed the control layer of
the code, as this layer controls how the models are called and interact with each other. This layer is responsible for the following tasks (a schematic rendering is sketched after the list):
(1) the declaration of the shared coupling data (in contrast to the concurrent approach, where the models themselves declare their own coupling data);
(2) the appropriate initialisation of the shared data, typically achieved by reading the data from a file;
(3) the calling schedule for the models - for iterative models this is typically one or more loops with an appropriate ordering of the models;
(4) the appropriate coupling of data via arguments. In our example, on the very first iteration the atmosphere receives the initial values of A and B. In all other cases the atmosphere model receives any changes to A and B made by the ocean model and vice versa.
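As a schematic illustration (the field size, iteration count and model stubs are invented; this is not GENIE or UM code), the four responsibilities can be rendered in Fortran 90 as follows:

  PROGRAM coupled_control_layer
    IMPLICIT NONE
    INTEGER, PARAMETER :: n = 1000                 ! assumed field size
    REAL(8) :: a(n), b(n)                          ! (1) declaration of shared coupling data
    INTEGER :: istep

    a = 0.0d0                                      ! (2) initialisation (a file read in practice)
    b = 0.0d0
    DO istep = 1, 100                              ! (3) calling schedule
       CALL atmos(a, b)                            ! (4) coupling via arguments
       CALL ocean(a, b)
    END DO
    PRINT *, 'a(1), b(1) =', a(1), b(1)

  CONTAINS
    SUBROUTINE atmos(a, b)                         ! stand-in for the atmosphere model
      REAL(8), INTENT(INOUT) :: a(:), b(:)
      a = a + 0.5d0*b + 1.0d0
    END SUBROUTINE atmos
    SUBROUTINE ocean(a, b)                         ! stand-in for the ocean model
      REAL(8), INTENT(INOUT) :: a(:), b(:)
      b = b + 0.5d0*a
    END SUBROUTINE ocean
  END PROGRAM coupled_control_layer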
There is, therefore, currently a gap between the performance that a state-of-the-art coupling system can produce and what can be implemented by hand. In the remainder of this paper we describe and demonstrate how our coupling approach addresses this gap in performance. Section 2 introduces our approach, Section 3 introduces and presents results for an implementation of our approach using the GENIE ESM. Finally, Section 4 presents our conclusions.

2. FCA and BFG2
Manchester have developed the Flexible Coupling Approach (FCA) [3] to enable the flexible composition and deployment of coupled applications. The approach requires the capturing of a description of component (i.e. single model) interfaces and provides a dataflow-based composition mechanism to build applications out of components. The description of interfaces allows composition to be done at an abstract level, independent of implementation details associated with particular model codes that satisfy the interfaces. FCA separates the concerns of composition from deployment issues such as how to configure components into executables and where to deploy the resulting executables. Associated with each phase of this Description, Composition and Deployment (DCD) approach is metadata which captures relevant information from each phase. A prototype FCA implementation called the Bespoke^b Framework Generator (BFG) has been developed.

^b Apparently the word 'bespoke' is not well known outside of the U.K. It stands for 'one-off', or 'tailored'.
The BFG uses XML [8] to capture the metadata and XSLT [9] to process the XML in order to produce framework code, basically control code and communication code, in which the components can execute. The BFG specifies single-model coding rules to which a conformant model implementation must adhere; it also defines XML schema [7] to capture metadata describing conformant models, their scientific composition (basically, the specification of the exchange of physical coupling data) and their deployment onto resources. The BFG engine processes the resultant (user specified) XML, producing appropriate 'wrapper code' within which the models can execute. A primary benefit of the BFG is that it isolates the science that a model performs (effectively defining a scientific api) from the code used to control and couple it with other models. This separation is useful in itself as it promotes the idea of scientists concentrating on science rather than computer science. However, when combined with the metadata description, it also allows the flexible deployment of the coupled model onto a number of different software and hardware resources (targets) with no change to the model code or the scientific composition. The latest version of BFG is version 2, which is a significant advance on BFG1 [4]. BFG2 currently supports BFG1-compliant models (with MPI as the target) and, in addition, supports models with multiple entry points (e.g. for initialisation, run and finalise routines), the initialisation of coupling data, and coupling via argument passing.

2.1. Argument Passing in BFG2
BFG2 supports models that are written to couple via argument passing, in-place (put- and get-style) calls, and any combination of the two. The reasons for supporting both in-place and argument passing as a way of inputting or outputting coupling data are:

(1) existing models are already written in both styles, so supporting both minimises the code changes required when using the BFG2 system. This is borne out by the fact that the GENIE scientific models have required no changes to their code in order to be used in the BFG2 system (essentially, the GENIE hand-generated control layer is replaced by a BFG-generated version).
(2) it is sometimes more appropriate to use one or the other form. For example, when data has been computed as a diagnostic in a temporary array in a deeply nested subroutine, it is much simpler (and more efficient in terms of memory use) to output this data via in-place calls rather than having to add it as an argument and pass it all the way up to the 'top' layer.

Given metadata descriptions for a coupled model, the BFG2 system can generate bespoke coupled control and communications code for the coupled model. When models are specified to run in-sequence, any data between the models that is made available as an argument will be automatically implemented in the control code as shared state that has the same structure as hand-crafted solutions (through the generation of data declarations and possibly storage allocation, etc.). Therefore, BFG2 can automatically create the same argument-passing coupling solutions that are currently manually implemented. When coupling data that is made available via argument passing is specified for models running concurrently, or when one model's coupling data, made available as an argument, is connected to data from another model that is made available via an in-place call, then BFG2 automatically adds an appropriate in-place wrapper around the argument-passing routine (requiring no change to the model code itself), effectively turning it into an in-place call. These calls are then connected using MPI communication calls. As an example, Figure 3 presents the general structure of the control code that would be generated for the example presented in Figure 2 when the two models have been specified to run with separate sets of threads (a set of threads is a sequence unit in BFG2 terminology) and in different main programs (BFG2 deployment units). Notice that the two models are called in the same way as in the original sequential example (i.e. using argument passing), thus the model code does not need to change. However, in-place (put/get) calls have been generated around the models by BFG2. A communication layer is also generated by BFG2, and this ensures that data is routed appropriately between the in-place (put and get) calls. The communication calls that BFG2 generates are currently limited to MPI calls^c. Thus, BFG2 can couple argument-passing data with other argument-passing data or with in-place calls. The models can use the same threads or different threads.

^c BFG1 supports a range of communication fabrics, including MPI, and these will be integrated into BFG2 in the future.
PROGRAM COUPLED1
  DECLARE A and B
  INITIALISE A and B
  FOR EACH model timestep
    CALL GET(A)
    CALL GET(B)
    CALL ATMOS(A,B)
    CALL PUT(A)
    CALL PUT(B)
  END FOR
END PROGRAM COUPLED1

PROGRAM COUPLED2
  DECLARE A and B
  FOR EACH model timestep
    CALL GET(A)
    CALL GET(B)
    CALL OCEAN(A,B)
    CALL PUT(A)
    CALL PUT(B)
  END FOR
END PROGRAM COUPLED2

Figure 3. Pseudocode structure of a multi-threaded solution
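For illustration, the PUT and GET calls of Figure 3 could be realised over MPI roughly as sketched below. This is a hedged sketch rather than actual BFG2-generated code: the ranks, tags, array size and the assumption that MPI has already been initialised by the enclosing deployment unit are all invented for the example.

  ! Illustrative only: a possible realisation of the in-place PUT/GET
  ! wrappers over MPI for the deployment units of Figure 3, assuming
  ! rank 0 runs ATMOS and rank 1 runs OCEAN, and that MPI_Init has
  ! already been called by the enclosing program.
  SUBROUTINE put_field(field, n, dest, tag)
    USE mpi
    IMPLICIT NONE
    INTEGER, INTENT(IN) :: n, dest, tag
    REAL(8), INTENT(IN) :: field(n)
    INTEGER :: ierr
    CALL MPI_Send(field, n, MPI_DOUBLE_PRECISION, dest, tag, MPI_COMM_WORLD, ierr)
  END SUBROUTINE put_field

  SUBROUTINE get_field(field, n, source, tag)
    USE mpi
    IMPLICIT NONE
    INTEGER, INTENT(IN)  :: n, source, tag
    REAL(8), INTENT(OUT) :: field(n)
    INTEGER :: ierr
    CALL MPI_Recv(field, n, MPI_DOUBLE_PRECISION, source, tag, &
                  MPI_COMM_WORLD, MPI_STATUS_IGNORE, ierr)
  END SUBROUTINE get_field

In the generated control code, these wrappers would be called around ATMOS and OCEAN exactly where Figure 3 places the GET and PUT calls.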
The user has the flexibility to decide what is most appropriate. When all the data is implemented as argument-passing data and the models share the same threads, BFG2 will produce the equivalent of a hand-crafted in-sequence solution. When models are allocated separate threads, BFG2 will produce appropriate MPI calls. One way to view the use of models coded to use argument passing for communication is that they represent only the core scientific computation, with an appropriate communication layer generated by BFG2 which is outside (i.e. which 'wraps') this code.

3. GENIE Example
This section introduces the scientific motivation for GENIE and its current software implementation, describes the current status of using BFG2 to couple GENIE component models, and finally presents performance comparisons between the existing handwritten GENIE coupled model infrastructure and the BFG2-generated equivalent.

3.1. GENIE overview
The objectives of GENIE are to build a model of the complete Earth system which is capable of numerous long-term (multi-millennial) simulations, using components which are traceable to state-of-the-art models, are scalable (so that high resolution versions can be compared with the best available, and there is no barrier to progressive increases in spatial resolution as computer power permits), and modular (so that existing models can
be replaced by alternatives in future). A realistic ESM for the purpose of multi-millennial simulations must include models of the atmosphere, ocean, sea-ice, marine sediments, land surface, vegetation and soil, ice sheets and the energy, bio-geochemical and hydrological cycling within and between components.
3.2. GENIE Software Infrastructure
The GENIE model components are subroutines called from a single Fortran 90 main program. The model components typically have separate subroutines that perform initialisation, time-stepping calculations and finalisation. A component model may consist of more than one time-stepping subroutine, but usually has only one initialisation and one finalisation subroutine. The main program calls initialisation subroutines once, at the start of a simulation, time-stepping subroutines repeatedly within a global loop (and some within nested loops), and then finalise subroutines once, at the end of the simulation. A particular GENIE configuration is selected by users setting logical flags (variables) at compile-time that include/exclude model components at run-time. Model resolution (i.e. the resolution of the discrete grids over which the models compute) is selected by setting #define preprocessor macros at compile-time. The compile-time variables are set by users via a scripting interface and are used by the script to create a run environment and build an executable. The final action of the script is to run the executable. Components communicate via argument passing, and all variables used as arguments are declared at the main code level, in a separate module called genie_global. The main program also includes some transformation calculations, which perform, for example, interpolation and copying of these variables. In this GENIE framework code structure, argument-passing variables and component codes that will not be used in a particular coupled configuration are compiled and linked into the binary executable for the configuration. Hence, GENIE executables currently contain 'dead code'. Component models also have a fixed position within the main code. This reduces the flexibility for scientists to experiment with different model orderings, which is especially important during the integration of new component models, and which can have an effect on scientific results. There is also a lack of flexibility in the deployment options available at present due to the use of
a single main program. There may, in future, be a need to run particular models on different resources (in a distributed or concurrent fashion), for example, and this would require the framework code to be re-programmed manually. The lack of flexibility highlighted above may be attributed to the fact that all knowledge of the composition choices made when using the present framework (e.g. which components share which arguments, and why subroutines are called in the order they are) is only captured by the code itself; there is no higher-level representation of the choices that would enable them to be changed without manual re-programming of the infrastructure code.

3.3. GENIE and BFG2
To demonstrate the ability of BFG2 to generate coupled models with alternative model components, the following models have been made BFG2 compliant:
- igcm atmosphere model (shorthand name, ig)
- slab ocean (shorthand name for all slab components, sl)
- fixed ocean (shorthand name for all fixed (file reading) components, fi)
- slab sea-ice (shorthand name for all slab components, sl)

In all cases the component model codes needed no modification, as the multiple-subroutine structure of the GENIE components and the argument-passing mechanism used to exchange coupling data are supported within BFG2. Only BFG2-compliant metadata to describe the models, their composition into coupled models, and the deployment choices needed to be generated. By far the most time consuming task in this process was the extraction of the composition metadata (i.e. the connectivity between models and their initialisation). The above models (plus a number of transformations embedded within the GENIE control code, which also needed to be made BFG2 compliant) allow two existing 'standard' GENIE configurations to be generated using BFG2 (and these produce the same results as the original):
(1) igfisl - the GENIE terminology for IGCM Atmosphere, Fixed Ocean and Slab Sea-ice.
(2) igslsl - the GENIE terminology for IGCM Atmosphere, Slab Ocean and Slab Sea-ice.
3.4. Performance Results
            original    1DU1SU    1DU2SU    2DU2SU
  igfisl    57m59s      57m36s    65m2s     63m57s
  igslsl    58m34s      59m3s     64m12s    64m2s

Figure 4. GENIE timing results
This section presents the timing results of the two standard GENIE configurations described in the previous section and a comparison of the performance of the original, hand-crafted, GENIE implementations with those obtained using BFG2. The results are presented in Figure 4 for four different strategies. The first strategy, original in Figure 4, presents the elapsed time when running the original, unmodified, GENIE code. The second to fourth strategies all use BFG2 to generate the appropriate control and communications code. The second strategy, 1DU1SU, uses BFG2 to generate in-sequence code in a similar style to the original hand-crafted GENIE version. The third strategy, 1DU2SU, uses BFG2 to run the igcm atmosphere model with one thread and the remaining models with another thread^d (in an SPMD-style distributed manner). In this case, communication between the atmosphere model and the remaining models is implemented using MPI, but communication between the other models is still performed by argument passing. Note that, because there has been no change to the composition, all the models still execute in-sequence. The data dependencies of the original sequential implementation are respected by this distributed deployment. The fourth and final strategy, 2DU2SU, is the same as the third strategy except that the igcm atmosphere is created within a separate 'main program' to the other models (in an MPMD-style). The three different BFG2-generated versions produce the same results as the original hand-crafted solution and, whilst producing very different 'wrapper' code to each other, they are produced as a result of only minor changes to the BFG2 deployment metadata. The terminology to describe the three BFG2 solutions is as follows. DU stands for deployment unit and is the equivalent of a 'main program'. SU stands for sequence unit and describes a set of models which share the same set of threads.

^d Remember that the term thread is used in a generic sense as described in Section 1.
Currently, in the GENIE framework, all GENIE models are unparallelised; therefore, an SU will have only a single thread associated with it. Thus, 1DU1SU means that we are generating one main program (DU) and that there is one sequence unit (SU), whereas 2DU2SU means that there are two main programs and a total of two sequence units. All tests were performed on a machine with a dual-core 3.2 GHz Pentium processor, 1 GB main memory and 1 MB L2 cache. The results show that, compared to the original GENIE framework (original), there is no loss of performance for BFG2-generated sequential code (1DU1SU). In fact BFG2 is 0.7% faster in one case and 0.8% slower in the other. This variance has not been examined in detail but, as the generated code has essentially the same structure as the hand-crafted solution, it is suggested that this is a compiler optimisation issue. Notice that the multi-threaded distributed deployment options (1DU2SU and 2DU2SU) do not produce a performance increase, but rather a performance loss. This is because the communication constraints (data dependencies) in the original sequential main code are maintained in the two configurations considered here; thus, the coupled model is inherently sequential. The reason for running and presenting performance results for the two BFG2-generated distributed options (1DU2SU and 2DU2SU) is to examine the overheads of running an inherently sequential model composition in a distributed manner. The performance of such a distributed solution is of interest because this is the only solution currently offered by other coupling systems (which are able to support both concurrent and in-sequence coupled models). These results, therefore, offer an insight into the performance overhead incurred by existing systems and, by inference, the performance benefit of using BFG2 in their stead. The BFG2-generated distributed solutions provide a good lower bound indication of the performance overhead of using existing coupling solutions. This is a lower bound as, in existing coupling solutions, each model has to be 'framework enabled' individually. Thus, using these systems, by default, all models execute in a distributed manner with each model communicating via MPI or an equivalent messaging layer. In contrast, the BFG2-generated solutions presented here run only a single model in a distributed manner - the remainder being in-sequence and communicating, efficiently, via argument passing. Further, only minor changes of deployment metadata are required to run in any distributed configuration of models. The results also provide a good lower bound since both the in-sequence and the distributed
solutions are being run on a dual-core machine and the distributed solution does not incur any context switching overheads, as it is able to use both cores. The results show that the distributed solutions are between 8% and 13% slower than the BFG2-generated (and also the original GENIE hand-crafted) in-sequence solutions.

4. Conclusions
This paper has presented a novel coupling strategy. The approach, which has been implemented in the BFG2 coupling framework generator, allows models to communicate efficiently with each other when they are run concurrently within a coupled model, when they are run in-sequence within a coupled model, or when they are run in some combination. Performance results for the GENIE Earth System model have been presented which demonstrate that the resulting coupled model is as efficient as the existing GENIE hand-crafted solution. Further, this solution is more efficient than the standard concurrent approach.

References
1. R. S. Bell and A. Dickinson. The Meteorological Office Unified Model for data assimilation, climate modelling and NWP and its implementation on a Cray YMP. In Proc. 4th Workshop on the Use of Parallel Processors in Meteorology. World Scientific, 1990.
2. David R. Butenhof. Programming with POSIX Threads. Addison-Wesley, 1997.
3. FCA. http://www.cs.manchester.ac.uk/cnc/projects/fca.
4. R.W. Ford, G.D. Riley, M.K. Bane, C.W. Armstrong, and T.L. Freeman. GCF: A general coupling framework. Concurrency and Computation: Practice and Experience, 18(2):163-181, 2006.
5. GENIE. http://www.genie.ac.uk.
6. MPI. http://www.mpi-forum.org.
7. XML Schema. http://www.w3.org/XML/Schema.
8. XML. http://www.w3.org/XML.
9. XSLT. http://www.w3.org/TR/xslt.
TOOLS, TRENDS AND TECHNIQUES FOR DEVELOPERS OF SCIENTIFIC SOFTWARE

THOMAS CLUNE
NASA Goddard Space Flight Center, Greenbelt, MD 20771
E-mail: [email protected]

BRICE WOMACK
Northrop Grumman and NASA GSFC, Greenbelt, MD 20771
E-mail: [email protected]

Recent advances from the broader software development community have significant relevance to the future development of scientific software. The key to the success and acceptance of these techniques lies with a new generation of powerful tools to assist the developer. We will identify many of the issues faced by the scientific software community and the unique aspects required for these new tools to effectively contribute to the continued development and maintenance of scientific models. We will discuss our contributions to this area with specific attention to high performance computing, including parallel processing and support for the Fortran language.

Keywords: software productivity, Fortran, software testing
1. Introduction
In an earlier era, engineering and scientific modeling dominated the development of software applications, computing architectures, and even programming languages. That close connection between scientific research and the computing industry ensured rapid dissemination of useful information and methodology within the scientific modeling community. However, for some time now, commercial and consumer interests have dominated most aspects of the computing industry and, unsurprisingly, there has been a corresponding decline in the relevance of scientific modeling. To be certain, this gulf is not universal. Engineering and scientific challenges continue to drive the leading edge of high-end computing (HEC), including the
development of paradigms for parallel computing and new language features. Even here, though, most HEC vendors now build architectures from commodity processors and even processors designed primarily for home game systems. Alternatively, the development of novel programming paradigms such as object-orientation and associated popular modern languages such as Java and Python have not been completely ignored by the technical community, but the vast majority of scientific computing resources, especially within the Earth system modeling community, are still consumed by software written in some variant of Fortran or C. Our emphasis is not merely that commercial concerns have displaced scientific concerns in the evolution of computing capabilities but that the commercial investments dwarf current and earlier investments on behalf of science and engineering. Therefore, our community should not be surprised to learn that valuable new approaches to software development are radically altering the manner in which high-quality software is being developed and maintained. Further, the larger economic value drives the creation of high-quality, powerful tools to aid software developers in applying the new methodologies. In fact some of the techniques now being employed would not be considered particularly novel in other engineering disciplines, but were merely impractical in the more nimble context of software engineering in the absence of appropriate tools. In earlier decades, scientific software tended to be developed by isolated individuals or extremely small teams of researchers focusing on relatively narrow aspects of some physical domain. Increasingly, however, scientific models are developed by large distributed teams of researchers that couple numerous physical subsystems into large monolithic software applications. Software complexity is increasing on multiple fronts due to parallelization, portability, increased resolution, and enhanced fidelity. Because of these many pressures, competitive research teams must exploit the full array of techniques and tools to improve their software productivity. We strongly believe that many of the innovations from the software community have a high degree of applicability to large scientific applications. In order to address some specific aspects of high-performance computing, these tools and techniques will require some tailoring. Examples include support for numerical issues, parallelization, and the Fortran programming language itself.

2. Trends in the software industry
This section presents a set of interrelated technology developments that are of direct relevance to the scientific computing community. We begin with
a discussion of agile development processes, which emphasize lightweight processes and nimble adaptation to changing requirements. Next, we discuss some important capabilities of a new generation of integrated development environments (IDE's) with special emphasis on high-level automated source-code transformation features. Both agile software development methods and automated transformation place significant emphasis on software testing, and the remainder of this section discusses a new generation of tools to minimize the burden of developing and maintaining thorough tests, as well as a powerful new development paradigm that becomes feasible as a consequence of the availability of such tools.

2.1. Agile Processes
In the 1990’s the Capability Maturity Model (CMM) [l] was the gold standard for software development best practices. CMM and other similar methodologies were designed to address the need for effectively managing large complex development projects to completion. Traditional software development paradigms like CMM reduced the risk of failure for a software development project by focusing on rigorous process; emphasizing upfront design, analysis and documentation as well as extensive testing and maintenance phases following the actual implementation. Although those approaches had many merits, there was a growing recognition by many that CMM and similar methodologies were not always cost effective or appropriate for some projects. For example as Mark Paulke [2] points out, “Even though agile methodologies may be compatible in principle with the discipline of models such as the CMM, the implementation of those methodologies must be aligned with the spirit of the agile philosophy and with the needs and interests of the customer and other stakeholders. Aligning the two in a government-contracting environment may be an insurmountable challenge.” For many project managers, there is a perception that rigorous CMM like processes can be too difficult or too costly to use in practice for smaller, less complex projects, and they may not nimble enough to effectively be applied on projects with rapidly changing requirements. An appropriate example is [3], ‘‘ Size and technical complexity is another driving factor. The deployment costs are a particularly important attribute here. How complex is the system into which the software will be integrated? Is new hardware being developed too? A spacecraft, to cite an extreme example, has tremendous deployment costs. You certainly want to make sure you get what you deploy right the first time, and so the iterative nature of an agile technique would be difficult to apply. On the other
hand, it is inexpensive to deploy an updated Web application, and that is why agile approaches often thrive in this arena." Further, despite significant expertise and effort applied in the beginning phases of these rigorous development projects to fully capture the system requirements, correct software interfaces and bits of functionality, necessary changes and defects tended to be discovered late in the development cycle, which both increased the cost to correct them and, perhaps more importantly, increased the uncertainty in the delivery schedule. In 2001, agile processes (e.g. "Extreme Programming" [4]) began to be discussed as a viable alternative development methodology. In contrast to older approaches, these agile processes directly embrace changing requirements. They emphasize rapid, iterative development cycles - typically on the order of 1-2 weeks per cycle. Due to a variety of causes, software projects are exposed to a much higher degree of changes to requirements than comparably sized projects in other disciplines. Within agile processes, priorities and requirements are re-evaluated at each cycle, reducing (but by no means eliminating) the level of wasted development due to changing requirements. A variety of agile processes have been proposed and are in use in the industry today, but certain aspects are common to most. There is danger in extracting individual aspects of a development process, as there is some synergism involved, so we shall limit our discussion to a small set that are considered to be sufficiently independent. Paired-programming is the practice of using two side-by-side programmers to develop software. Typically one partner is at the keyboard, while the other one "drives" - explaining what should be entered. Although naively this practice would seem to double the cost of development, experience in fact shows a major reduction in costs [5]. Part of this greater efficiency derives from the reduction in the number of defects that are generated, as fellow programmers tend to catch different sorts of bugs than compilers and other tools. Further, the requirement to explain/defend a given implementation tends to produce higher-quality designs from the start. In essence the pair perform a continuous design and code review that tends to catch many issues as they are introduced. A more subtle aspect of paired-programming for large teams is the duplication of knowledge and expertise in the product. A lone programmer can become a development bottleneck in more traditional approaches. It is worth noting that many scientific programmers appear to have more-or-less independently discovered paired-programming, though perhaps not recognizing all of their own motivations for doing so. While both traditional and agile development paradigms placed great
emphasis on software testing, the manner of testing is quite different between the two. Traditional approaches used separate teams for implementation and testing, with several unfortunate consequences. One problem was the tendency of developers to deny their own responsibility for defects, often resenting the testers for discovering problems. Worse, though, was the natural tendency for testing to be squeezed between delays in implementation and hardened shipping dates. Time for testing the product was often much lower than called for in the original plan. In contrast, agile processes place responsibility for much (but typically not all) testing on the actual developers and include testing within each iterative cycle. Advantages of this approach include the fact that the developer is familiar with the implementation and therefore in a better position to implement a quality test. (Of course the corresponding disadvantage is that the developer may be blind to other uses of his module.) The other advantage is that the testing schedule is less likely to be squeezed into the end of the whole project. Further, because most testing occurs relatively early in the overall project, fewer defects are discovered late in the schedule. Some agile methods go even further, and require developers to create tests before implementing a module. We will return to this in a later section. Developers tend to begrudge the development delays implied by writing tests (despite the clear evidence of greater long-term efficiency), so the existence of appropriate tools and methods to reduce the burden of writing and maintaining tests is vital for agile methods to be rigorously followed. In order to be applied efficiently, code developers need tools that support common code transformations appropriate for the development environment. They require tools to automate the process of generating and routinely rerunning verification tests. Agile developers have increasingly turned to IDE's and Open-Source tools to help in this area.

2.2. IDE's and Open-Source
Integrated Development Environments (IDE's) are software environments which enhance developer productivity by organizing a variety of tools in a coherent fashion. A typical modern IDE provides a graphical user interface which coordinates actions between an editor, a code browser, a parser, a debugger, a build system, an interface to a software versioning system and potentially other related tools. For example, the editor in an IDE can identify some types of errors immediately by continuously checking the code against the language parser's syntax rules. Similarly, the editor may provide powerful shortcuts when only a small number of potential completions of
an expression or line are allowed. By including the build system and debugger, the programmer is spared the tedious process of bouncing between tools and is encouraged to maintain a single stream of thought. Although the concept of an IDE is not particularly new, the existence of high quality open-source IDE's is a new phenomenon which has considerably increased the potential audience. Eclipse [6] is a very powerful open-source IDE framework that encourages the inclusion of additional "plugins" that extend the IDE capabilities. As we discuss below, some of the plugins that have been added provide capabilities for powerful, high-level manipulation of the source code. Recently, Eclipse has been extended to include support for Fortran via the Photran [7] plugin, which includes a Fortran editor and parser. Fortran programmers can also take advantage of many other Eclipse features, including interfacing to the debugger and versioning software.
2.3. Refactoring Tools Today’s legacy software systems have accumulated a significant amount of valuable functionality that would be difficult or impossible to reproduce without major investment. Because of this inherent value, the best course forward for many organizations is t o continue t o maintain and extend legacy systems rather than developing from scratch. A useful analogy for discussing the impact of entropy growth in legacy software is a comparison between financial debt and “code debt” IS]. As developers make trade-offs between expediency and long term maintenance, their software accrues debt that should eventually be paid back by improving the implementation. Just as with financial debt, code debt accrues interest in the form of increased effort required t o implement further extensions to the software. Without active management, code debt can grow to the point that it consumes virtually all development resources and can effectively prevent successful implementation of valuable new features. Alternatively, eliminating debt altogether, whether financial or code, is an unrealistic ambition that skews real-world priorities. Rather, one should develop a strategy to manage debt and techniques to reduce debt when it becomes a significant factor. “Refactoring” is a term from the software engineering community which refers to the process of intentionally modifying/improving an implementation without altering the behavior of the application [S]. The key technique for successful refactoring is t o proceed in small incremental steps and continuously werih that the application behavior has not been impacted. Large changes bring a very high risk that a defect will be introduced while also
leaving a large number of locations where the defect might be located. Verification is central to successful refactoring, as even highly experienced programmers cannot always see the consequences of seemingly trivial changes to an implementation. This is especially true with Fortran legacy software, which may rely on numerous implicit assumptions about the internal implementation of a software module. To be minimally useful for refactoring, verification must be reproducible and easily performed.
In recent years, a number of powerful tools have appeared which aid in refactoring and significantly reduce the effort required for even rather substantial software transformations. Unfortunately these efforts are largely focused on business software and the programming languages relevant to that domain. JAVA in particular has greatly benefited by developments including the JUnit unit testing framework [9] and the JAVA refactoring tools built into Eclipse [6], a popular open-source integrated development environment. Unfortunately, but somewhat understandably, the emergence of similar tools for Fortran and parallel computing has lagged significantly, but hopeful prospects are beginning to appear. Several unit testing frameworks intended for Fortran applications have been introduced and will be discussed in detail in a later section. Another example is the Photran package discussed above, which also includes rudimentary refactoring capabilities for Fortran.
Verifying the accuracy of refactorings for legacy code is extremely difficult [10,11] for a number of reasons. First, because legacy software was not typically developed with testing in mind, developers must make at least a few small changes to the software just to be able to introduce tests for software subcomponents, while ideally the tests would be in place prior to making any changes. Great care must therefore be taken in these preliminary steps. Test coverage is another obstacle that is particularly problematic for legacy software. Legacy applications often have a very large number of configurations and logical execution paths, which can result in an exponential number of test cases to consider. In practice a compromise must be made between good coverage and progress. Another difficulty for legacy software, which is particularly true of scientific/numerical models, is that there is often no definition of correct behavior for subcomponents. Rather, the implementation effectively defines the expected behavior, and would-be testers must resort to merely capturing and comparing snapshots of the state of the subcomponent before and after execution.
2.4. Testing
As discussed earlier, agile software practices strongly emphasize routine methodical testing. Such testing improves the reliability and quality of software and reduces costs by detecting defects early in the development process. Tests can be applied at various levels ranging from unit tests, which typically verify just one simple aspect of a method (procedure), to integration tests, which verify that subsystems behave correctly when assembled into more complex systems, to validation testing, which verifies software behavior against high level requirements. Although an individual test may be something as simple as a lone print statement placed in source code to verify a particular value, the utility of a test is much greater if it can be reproduced on a regular basis. For instance, value-testing print statements are typically removed once the software "works", but may need to be reinserted when new defects are suspected later on. Reproducible tests, on the other hand, can be maintained externally to the software being tested, and can serve as a permanent check on particular expectations of behavior. (We recognize that the tests themselves must evolve as expectations change, but that is a very different aspect of software development.) When the behavior of a system is well-covered by a variety of reproducible tests, the collection of tests can be referred to as a regression test suite. Just as a harness allows a trapeze artist to attempt new feats with some degree of security, a full suite of software regression tests allows developers to attempt major software extensions or transformations without incurring undue risks of breaking existing behavior. Such regression tests immediately identify when and where defects are introduced as software development progresses. The development and maintenance of these supporting tests introduce a new overhead to the cost of developing software, but with appropriate tools, this cost can be significantly outweighed by the increase in developer productivity and the overall reduction in time required to develop a software system.
2.4.1. Test Driven Development
As the name suggests, test driven development (TDD) is a software development method wherein developers first create tests which are subsequently used to drive the development of the actual implementation of a given software unit. More precisely, the development proceeds in a series of very short cycles (ideally about 10 minutes or less) characterized by the
following steps.
(1) Implement a short test which specifies a desired behavior by failing (signalling an exception) when that desired behavior does not occur.
(2) Verify that the test fails, because the functionality has not yet been added. For compiled languages the failure is typically blatant due to missing interfaces which abort the compilation. This failure is very important, as it indicates the test is actually being exercised.
(3) Implement the simplest modification that will result in a passing test. Before gaining proficiency in this method, developers will tend to slip back into old habits and implement beyond what is strictly checked by the test.
(4) Eliminate redundancy. At this stage, the developer ensures that the tests are robust in the sense that they do not depend on details of the implementation.
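To make the cycle concrete, the following minimal sketch illustrates steps (1)-(3) using Python's built-in unittest module rather than a Fortran framework; the module and function names (layer_mean and so on) are invented purely for this illustration and do not come from any code discussed in this paper.

    import unittest

    # Step (3): the simplest implementation that makes the tests below pass.
    # In a strict TDD cycle this routine would not exist until the tests had
    # been written and observed to fail.
    def layer_mean(values):
        """Return the arithmetic mean of a list of layer values."""
        return sum(values) / len(values)

    class TestLayerMean(unittest.TestCase):
        # Step (1): state the expected behavior; the test fails until it holds.
        def test_mean_of_uniform_layer(self):
            self.assertAlmostEqual(layer_mean([2.0, 2.0, 2.0]), 2.0)

        def test_mean_of_two_values(self):
            self.assertAlmostEqual(layer_mean([1.0, 3.0]), 2.0)

    if __name__ == "__main__":
        unittest.main()   # Step (2)/(4): run the suite and confirm the result.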
There are numerous obvious advantages to TDD. First and foremost, this approach imposes discipline which ensures that tests are actually created. As discussed earlier, traditional testing is often deferred and then squeezed out of the development schedule as the product nears delivery. Even when testing survives scheduling, testers have a natural tendency to be less than thorough. Putting the tests in front ensures that all portions of delivered code are tested. Another interesting advantage is that the software is always functional. Desired features might not yet be implemented, but the system can be demonstrated and deployed on demand. This aspect eliminates the common problem of lost productivity due to freezes prior to releases, during which development halts while defects are eliminated so that the product will be functional. As with pair-programming, one might expect that despite the benefits touted above [5], TDD increases costs due to the development of additional source code for tests, typically estimated to be 1-2x that of the actual implementation. In practice, however, a number of relatively subtle aspects of this approach result in significantly improved productivity [12]. First, there is an overwhelming change in the confidence on the part of developers when working with thoroughly tested code. Without TDD, developers all-too-frequently code themselves into blind alleys and must retreat or persevere in denial of the obstacles. Debugging becomes a large component of development. TDD, however, is more methodical and leads to cumulative progress that is difficult to match. Perhaps the most astonishing yet subtle and profound consequence of TDD is that it leads to fundamentally better
designs [13]. By forcing decisions about how to verify software to the top of the creative process, more careful consideration is given to the design of interfaces and the assignment of responsibility for a given feature to the most appropriate class/procedure within the system. Better overall design leads to improved productivity by reducing the average size of a change required to introduce any given feature.
2.5. Testing Frameworks
Testing frameworks are intended to significantly reduce the burden of creating, managing, and executing collections of software tests. The additional effort to perform these activities is the strongest impediment to methodical testing, and testing frameworks are absolutely critical to the practical adoption of methodical testing. Frameworks facilitate the creation of tests by providing a set of methods that probe parameter values and raise exceptions when incorrect values are obtained. At execution, the framework collects these exceptions and reports the number of failed tests and associated messages. Less essential, but nonetheless convenient, a good testing framework provides means to assemble suites of tests that can be evaluated independently. In this manner, simple tests can be executed routinely while more exhaustive/expensive tests can be performed on a less frequent basis. Many testing frameworks provide GUI interfaces which give very satisfactory feedback that the tests have passed. Many developers describe a euphoric experience from pressing the run button on the GUI and finding the "green bar" indicating all tests were successful.
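As a rough illustration of these mechanics, the toy Python sketch below (not the design of any existing framework) shows an assertion helper that raises an exception carrying a message, a runner that collects the exceptions from a suite of test callables, and a summary that reports how many tests failed; all names are invented.

    class TestFailure(Exception):
        """Raised by assertion helpers when an expectation is not met."""

    def assert_equal(expected, actual, message=""):
        # Probe a value and raise an exception if it is incorrect.
        if expected != actual:
            raise TestFailure("%s: expected %r, got %r" % (message, expected, actual))

    def run_suite(tests):
        """Run each test callable, collecting failures rather than stopping."""
        failures = []
        for test in tests:
            try:
                test()
            except TestFailure as exc:
                failures.append((test.__name__, str(exc)))
        print("%d of %d tests passed" % (len(tests) - len(failures), len(tests)))
        for name, message in failures:
            print("  FAILED %s: %s" % (name, message))
        return failures

    def test_addition():
        assert_equal(4, 2 + 2, "integer addition")

    def test_subtraction():
        assert_equal(0, 2 - 2, "integer subtraction")

    # Separate suites allow cheap tests to run routinely, expensive ones less often.
    quick_suite = [test_addition, test_subtraction]
    run_suite(quick_suite)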
3. Capabilities for Fortran and HPC
3.1. Constraints of Fortran
Standard Fortran 95 (F95) has several limitations which must be overcome to create tools analogous to those from other disciplines. The current Fortran 2003 (F2003) standard eliminates the most severe of these, but most compiler developers have yet to implement the relevant features. Therefore, the tools that are discussed in this section typically rely on non-Fortran capabilities to some degree, and our expectation is that a new generation of more extensible tools will follow in the next few years. The specific limitations of F95 include:
(1) Limited abstractions (polymorphism). This prevents the testing framework from treating different types of tests as different instances of an abstract test case.
(2) Limited extensibility (inheritance). This prevents the developer from expressing his tests as subclasses (specializations) of a generic test class.
(3) Lack of reflection - an automatic mechanism for identifying what tests are available for execution. E.g. JUnit is able to dynamically assemble all classes beginning with "test-" as test cases.
(4) Scarcity of open-source language parsers. This impacts the ability to develop automated refactorings.
F2003 rectifies limitations (1) and (2), while (3) can be largely mitigated by other mechanisms. However item (4) is actually exacerbated because applications will probably begin using F2003 features even before some vendors have released appropriate compilers. Open-source parsers are expected to lag significantly for F2003, but one potential approach would be to develop tools in F2003 to operate on applications written in older versions of the standard.
3.2. Fortran Unit-Testing Frameworks
At this time there are at least 3 open-source unit-testing frameworks for Fortran: FRUIT [14], funit [15], and our own pFUnit [16,17]. Of these, FRUIT is by far the simplest in structure and capabilities, but perhaps the most accessible for that. FRUIT provides a robust set of assertions that can be used to establish a variety of test cases. Funit is a fully capable unit-testing framework which leverages Ruby, a powerful and intuitive object-oriented scripting language, to overcome inherent limitations within the then current Fortran standard. It should be noted that although funit is based upon Ruby, users need only write their tests in Fortran. The Ruby language does not need to be installed - funit can be downloaded as a self-contained package. Our own pFUnit has similar capabilities to funit in addition to some useful extensions specific to high-performance computing. Further, pFUnit is itself written in standard Fortran 95, with the exception of a few small routines in C for manipulating addresses.
pFUnit began as a proof-of-concept that the limitations of a non-object-oriented programming language need not significantly impede the development of a unit-testing tool analogous to JUnit, but accessible to scientific programmers. After that prototype, pFUnit was rewritten from scratch using test driven development, partly as a learning exercise but also to overcome some unsatisfactory aspects of the initial implementation. Subsequently, pFUnit has been used successfully for several projects within our group and was recently released under a NASA open source agreement.
Noteworthy aspects of pFUnit include:
- an extensive assertion library for numerical software. Detailed comparisons are possible between scalars and multi-dimensional arrays of mixed precision. Several colleagues have expressed interest in just using this layer of pFUnit.
- support for testing routines which use message-passing. pFUnit uses a more complex mechanism to launch such test procedures on a user-specified set of processors and collect exceptions, reporting them on a per-processor basis. Because failure to complete an operation (e.g. process deadlock) is a more likely error in parallel applications, a timeout option is provided by pFUnit to ensure that remaining tests are executed. A simple extension should allow pFUnit to handle other message-passing paradigms, including ESMF [18] which wraps MPI. No special support is provided for OpenMP, but pFUnit should be able to work with OpenMP provided all pFUnit calls are outside of parallel sections. (The Exception layer is not thread-safe.)
- support for parametrized testing. In some instances, developers may wish to exercise an interface under a variety of combinations of arguments. Traditionally this would require a separate test case for each such combination, but we have added a capability to run a ParameterizedTestCase across a set of parameters. Simple extensions could include the ability to automate construction of subspaces to sample hyper-planes of the n-dimensional parameter space.
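The following Python/NumPy fragment is only a schematic illustration of the first and last of these ideas - tolerance-based comparison of multi-dimensional arrays and running one test body over a set of parameters - and does not use pFUnit's actual interfaces; every name in it is invented for the sketch.

    import numpy as np

    def assert_arrays_close(expected, actual, rel_tol=1.0e-6, label=""):
        """Fail if any element differs by more than the relative tolerance."""
        if not np.allclose(expected, actual, rtol=rel_tol):
            worst = np.max(np.abs(np.asarray(expected) - np.asarray(actual)))
            raise AssertionError("%s: max abs difference %g" % (label, worst))

    def scaled_identity(n, alpha):
        """Toy routine under test: alpha times the n-by-n identity matrix."""
        return alpha * np.eye(n)

    # One test body exercised across a set of parameter combinations.
    parameters = [(2, 1.0), (4, 0.5), (8, -3.0)]
    for n, alpha in parameters:
        expected = np.diag(np.full(n, alpha))
        assert_arrays_close(expected, scaled_identity(n, alpha),
                            label="n=%d, alpha=%g" % (n, alpha))
    print("all parameterized cases passed")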
3.3. The Fast Fortran Transformation Toolkit
Although we have been able to benefit greatly from the application of TDD and pFUnit for the development of new software, we have had limited success in using these directly within the large legacy software systems which dominate the workload of our group, the Advanced Software Technology Group (ASTG) at NASA Goddard. Some of the difficulties we have encountered are partly due to general testing difficulties of legacy software, but are also compounded by several specific aspects of our domain of scientific applications written in Fortran. We work primarily with scientific applications that have been developed over years by a broad range of scientists that understandably have focused on scientific accuracy of the answers and spent very little time working on the overall architecture or design of the applications. After years of development, the scientists are familiar with the current form of the code and are reluctant to allow software engineers to
make what may be perceived as cosmetic changes that only alter the organization of the code and do not initially add functionality.
Applying test driven development within a legacy application is difficult for a number of reasons. Partly this is a chicken-or-the-egg issue in that one should introduce a test before altering the application, but the existing interfaces are inadequate to introduce a useful test. Instead, one is forced to carefully modify the source to enable testing. A number of techniques to best make these steps have been discussed [10,11]. A major factor that compounds the general difficulty in working with legacy codes is the lack of a precisely defined behavior to be used for developing a test. The precise requirements for a subsystem may no longer be available, or may not be easily expressed in software. Consider for example a complex parametrization such as is typical of column radiation models in climate/weather applications. Because pertinent details of the implementation are often deeply embedded in long impenetrable routines, there is no small set of externally specified run parameters that are likely to capture the full behavior. In these situations, the best that can be done is to sample the subsystem during realistic execution of a model and save the sampled behavior for future comparison. Such comparison is further complicated by the heavy use of floating-point data in technical software, which raises issues related to round-off due to reordering of operations when refactoring. An analysis of what tolerance to accept for deviations, if any, is also generally not available.
An open area of research is to understand to what degree TDD can be applied in the development of new parametrized physical models (as opposed to the previous discussion about existing legacy software). In other disciplines, the precise behavior of each interface can usually be described in the form of a set of tests with only a modest investment of effort. In more numerical/physical applications, such precise descriptions may not be practical or perhaps not even possible. For instance, many models have analytic solutions for isolated parameter values which may serve as unit tests. However, numerous implementations might satisfy those constraints without satisfying the intended behavior at intermediate parameter values. Our strong suspicion, however, is that with experience and effort this issue is only one of degree and that a potential revolution in the implementation of parametrized subsystems may come about from TDD. Future developers may spend more time understanding and specifying important model details in testing constraints rather than embedding such knowledge in unfathomable logic within the implementation. If so, such models would
be expected to be more robust under many conditions than is true of the current situation, where subtle mistakes can exist for years or longer because their effect is small.
Another difficulty when working with legacy models, and one that is particularly severe with older Fortran models, is the heavy reliance on saved variables and globally accessible variables (i.e. variables accessed through common blocks or the modern equivalent of F90 modules). Such quantities cannot be directly manipulated via the external interfaces of a software subsystem, which greatly complicates specifying the state for test execution. We have developed an initial design for a toolkit which leverages new features in F2003 and thereby enables developers to sample the state of a legacy subsystem before and after execution within the full system. The design also includes a small extension to pFUnit which will allow our unit-testing framework to replay a legacy subsystem and identify unintended changes to the behavior of the subsystem. The most difficult aspect to implement will be parsing the source code to identify information which is not exchanged through a formal interface. Our design allows for this analysis to be performed manually by the developer - an important compromise because such automation will lag development of the other aspects of this system and will undoubtedly need to be manually augmented during the transition to automation. When fully implemented, this toolkit should provide a suitable means to place legacy applications within a test harness and thereby facilitate purposeful refactoring either through semi-automated tools such as Photran or even manual changes to source code. Our tentative name for this system is the Fast Fortran Transformation Toolkit (FFTT).
In addition to increasing the long-term productivity of software developers, we believe the availability of FFTT to aid legacy scientific development should result in a number of other important positive consequences for scientific modeling. Perhaps most importantly, we expect to enable new scientific capabilities in some applications that would otherwise be considered prohibitively expensive despite strong desire on the part of scientists to pursue them.
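The capture-and-replay idea behind the toolkit can be sketched schematically as follows. This is a Python illustration of the concept only; FFTT itself is a Fortran design still at the planning stage, and none of the names below are part of it. The inputs, accessible state and outputs of a legacy routine are recorded during a realistic run, and a later replay re-executes the routine on the recorded inputs and compares against the recorded outputs within a chosen tolerance.

    import pickle
    import numpy as np

    def capture(routine, state, inputs, filename):
        """Run the routine once during a realistic execution and save a snapshot."""
        outputs = routine(state, inputs)
        with open(filename, "wb") as f:
            pickle.dump({"state": state, "inputs": inputs, "outputs": outputs}, f)
        return outputs

    def replay(routine, filename, rel_tol=1.0e-12):
        """Re-run the routine on the saved inputs and compare with saved outputs."""
        with open(filename, "rb") as f:
            snapshot = pickle.load(f)
        new_outputs = routine(snapshot["state"], snapshot["inputs"])
        if not np.allclose(snapshot["outputs"], new_outputs, rtol=rel_tol):
            raise AssertionError("subsystem behavior changed beyond tolerance")
        return True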
4. Summary
In summary, there has been a significant shift in the definition of best practices for developing software. Agile processes are placing more emphasis on the ability to adapt to change. Commercial software developers are driving the development of new Open-Source tools, IDE's and techniques to support agile software development. The scientific development
community currently lags behind in the adoption of these tools and techniques and has the opportunity to leverage the work already done to significantly improve the maintainability of legacy science applications. Initial investments have been made to develop Fortran unit testing capabilities and to adapt the Eclipse open source IDE for Fortran. In order to capitalize on the full potential of these new capabilities, additional investment must be made to enable developers to quickly capture the behavior of legacy code and convert it to repeatable unit tests. FFTT is our initial design to provide the ability to quickly capture tests that replicate legacy behavior. In the long term, we believe these new capabilities should improve the overall quality of source code, which has the additional benefit of making software more accessible to collaborators and reducing the effort required for integration with other software components. Finally, the heavy emphasis on software testing inherent in the techniques applied will result in software which is substantially more robust and, perhaps more importantly, increase the confidence in the software on the part of developers, researchers, and, indirectly, policy makers.
References
1. Carnegie Mellon University Software Engineering Institute, The Capability Maturity Model: Guidelines for Improving the Software Process (Addison-Wesley Professional, 1995).
2. M. C. Paulk, CrossTalk - The Journal of Defense Software Engineering 15 (October 2002).
3. D. Kane and S. Ornburn, CrossTalk - The Journal of Defense Software Engineering 15 (October 2002).
4. K. Beck, Extreme Programming Explained: Embrace Change (Addison-Wesley, 2000).
5. L. Williams, R. R. Kessler, W. Cunningham and R. Jeffries, IEEE Software 17, 19 (2000).
6. M. Robillard, W. Coelho and G. Murphy, IEEE Trans. Software Eng. 30, 889 (2004).
7. J. Overbey, S. Xanthos, R. Johnson and B. Foote, Refactorings for Fortran and High-Performance Computing, in Second International Workshop on Software Engineering for High Performance Computing System Applications (ACM Press, May 2005). Also available at http://www.laputan.org/pub/papers/icse-paper.pdf.
8. M. Fowler, Refactoring: Improving the Design of Existing Code (Addison-Wesley, 2000).
9. E. Gamma and K. Beck, Java Report 4, 27 (May 1999).
10. M. Feathers, Working Effectively with Legacy Code, http://www.objectmentor.com/resources/articles/WorkingEffectivelyWithLegacyCode.pdf.
11. M. Feathers, Working Effectively with Legacy Code (Prentice Hall, 2004).
12. B. George and L. Williams, An Initial Investigation of Test Driven Development in Industry, in SAC '03: Proceedings of the 2003 ACM Symposium on Applied Computing (ACM Press, New York, NY, USA, 2003).
13. S. Kuppuswami, K. Vivekanandan, P. Ramaswamy and P. Rodrigues, SIGSOFT Softw. Eng. Notes 28, 6 (2003).
14. A. Chen, FORTRAN Unit Test Framework (FRUIT), http://sourceforge.net/projects/fortranxunit.
15. M. Hill, B. Kleb, K. Bibb and M. Park, FUnit: A Fortran Unit Testing Framework, http://funit.rubyforge.org/.
16. T. Clune and B. Womack, pFUnit, http://sourceforge.net/projects/pfunit (2006).
17. T. Clune and B. Womack, pFUnit: A Unit-Testing Framework for Parallel Fortran Applications (2007).
18. C. Hill, V. Balaji, M. Suarez and A. da Silva, Computing in Science and Engineering 6 (2004).
ANALYTIC MPI SCALABILITY ON A FLAT SWITCH
GEORGE W. VANDENBERGHE*
I.M. Systems Group, NOAA/NCEP, 5200 Auth Road, Camp Springs, Maryland 20746
The major methods of integrating the primitive equations of motion have proven to map well to distributed memory computers. Experience shows that both halo exchanges and domain transposes scale well to ~1000 MPI tasks. With several doublings of processor count expected in the next five years, new scaling issues not inferable from previous experience may arise. This work presents an analytic model of message passing performance for a large range of task counts on future distributed memory systems. Halo exchange methods are shown to scale to enormous (10**5 or more) processor counts. Domain transposes, which are now observed to reach scalability limits on toroidal networks, will be shown to reach similar limits even on a flat switched interconnect when the task count approaches 10**4.
1. Introduction
Numerical Weather Prediction has been a computationally intensive process since it was first attempted in 1950. Since the late 1950s most major national weather centers have procured and used the most advanced supercomputers available at the time to implement continuously improved numerical models and numerical methods of initializing and integrating the governing equations. In the first decade of the 21st century the dominant High Performance Computing architecture has been some form of distributed memory computing cluster consisting of several hundred to a few thousand CPUs communicating through a high speed interconnect. The major methods used to integrate the governing equations have mapped well to this architecture and optimal platforms are currently obtained by minimizing cost per processor and maximizing individual processor performance.
* Corresponding Author address: George W. VandenBerghe, Room 208, Environmental Modeling Center, WWBG, 5200 Auth Road, Camp Springs MD, 20746. E-mail:
[email protected]
The introduction of the distributed memory clusters in the mid to late 1990s created a mild paradigm shift in computing from optimizing a few very powerful and specialized processors with shared memory, to developing algorithms that would distribute subsets of the problem to the many CPUs while localizing most data requests. Although they were formidable, these problems were solved and the major integration methods scaled linearly to the available processor counts within a few years of these clusters' introduction. Today at NOAA/NCEP scalability of the forecast algorithms is fully adequate (substantial unsolved parallelization problems remain in data assimilation and objective analysis) and optimization focuses on improving per-processor performance and reducing or pipelining serial portions of the problems.
It is not widely realized that, after the initial large increase in the late 1990s, further increases in processor count have been small, less than a factor of two in the seven years since January 2000. Performance increase of individual processors during this period has exceeded four doublings or a factor of more than 16. This trend is unlikely to persist, as single processor speed improvements become more difficult to achieve. This alone has motivated recent increases in processor count in NCEP's next platform. However increasing processor count also increases aggregate power consumption. Fortunately large decreases in power consumption can be attained with modest decreases in processor clock rates. This in turn motivates machines with larger numbers of less powerful processors, such as the IBM Blue Gene platform, and places new emphasis on algorithm scalability. Before these machines are built it is useful to consider whether present methods of integration will scale to them or whether new techniques will have to be developed. Fortunately this problem can be analytically described and investigated. This approach has not been widely attempted but an excellent early exploration of this topic can be found in [1]. It is the purpose of the present paper to explore the scalability limits of finite difference and spectral methods of integrating the governing equations at task counts beyond those discussed in that work.
Before continuing it is useful to examine the fundamental problems implementing these methods on distributed memory computers. Finite difference methods replace the derivative terms in the primitive equations with finite differences. This causes a problem when the model domain is decomposed into sub-domains on different processors because neighboring values are needed to evaluate the finite differences and these are not available at domain boundaries. The most common solution to this problem is to replicate the boundary points on neighboring grids, a process called halo exchange. Spectral methods at first appear less tractable because these require summation across the entire model domain or at least one dimension of it. This implies global access
to the full domain on all processors. In practice there are several ways to bypass this requirement. The methods implemented at NCEP maintain the full domain in one direction on all processors with partial domains in the other two, and domain transposes in between calculations that require full access to a different dimension. The communication problem for these cases is to implement efficient scalable transpose algorithms. The architecture examined here is a simplified generalization of current message passing machines. These machines consist of many nodes each of which is connected to all others by a high speed interconnect. Traffic between any two nodes does not interact with any other interconnect traffic so interconnect bandwidth scales linearly with node count. Performance of individual interconnect interfaces (and the nodes they service) is modest and it is the aggregate capability which defines the machines’ power. Interconnect performance is determined by a latency L for small messages and a bandwidth Sr for large messages. These parameters along with memory bandwidth, halo depth and computational intensity, will be varied and examined.
2. Finite Difference Method
Finite difference methods represent derivatives with finite differences between function values at known locations on a grid. They require access to neighboring grid points, generally several grid lengths around the point of interest. On a distributed memory machine the model domain is partitioned among the processors in a cellular or tiled fashion such that each processor stores a subset grid containing some of each horizontal dimension and the entire vertical dimension. The derivative calculations become problematic near boundaries of these cells because some neighboring points are not available. This is solved by maintaining copies of the boundaries of each cell on neighboring cells. The finite difference expression or stencil then accesses these values to calculate point derivatives at the boundaries. These copies are called halos or ghost regions. When the grid is updated, new copies of the halo regions must be exchanged with all neighboring regions.
Consider a sub-domain represented on a tile with length dimension X. The number of tile points is X². The number of perimeter points is 4X. Intuitively the computation cost scales with the interior size X² while the communication cost scales with the perimeter size X and, as the number of processors and thus
domain divisions increases, the ratio of perimeter to interior and thus the relative cost of exchanges increases. Two of the boundaries of the tile represent the first and last columns of the array used to store the tile and these can be copied into exchange buffers at full memory speeds. However the two other boundaries represent rows which require a strided memory access to copy into a sequential buffer or for the halo exchange routine to perform the same strided memory access. These are more expensive. The buffers must then be exchanged at interconnect interface speed, and the new buffers copied back to the halos. The row copies are again expensive while the column copies are optimal. The performance of these operations is defined by a characteristic (for the machine) memory bandwidth, Mr, a characteristic stride penalty factor F, a characteristic exchange latency L, and a characteristic exchange transfer rate Sr. The total time of an exchange is

T = (2X + 2FX)/Mr + 4X/Sr + 4X/Sr + (2X + 2FX)/Mr + 4L*2 + 8L + 8/Sr    (1)

The first term is the cost of building the buffers to be sent to four neighbors, the second term is the cost of sending the data at rate Sr, the third is the cost of the analogous receive, the fourth is the cost of extracting the new data from the received buffers and the fifth is an additional latency cost (one latency for each buffer passed and there are 4*2 buffers). The sixth and seventh terms appear with any finite difference stencil more complex than a four point stencil and represent the cost of transferring the single boundary point at each corner. If F is a known constant value (for example five) it can be substituted into (1) and the other terms combined to obtain (using F=5)

T = 12X/Mr + 8X/Sr + 12X/Mr + 8L + 8L + 8/Sr    (2)
T = 24X/Mr + 8X/Sr + 16L + 8/Sr    (3)
The above example assumes a halo depth of 1, requiring only the single boundary rows and columns to be exchanged. More advanced finite difference stencils require neighboring points a few grid lengths away. This quantity can be defined as the halo depth H. Lagrangian and implicit methods require even larger depths. Adding the influence of H to (3) produces

T = 24XH/Mr + 8XH/Sr + 8H²/Sr + 16L    (4)
T = H(24X/Mr + 8X/Sr + 8H/Sr) + 16L    (5)
It is useful to compare (5) with the compute time between each exchange. This defines the communications:compute ratio for the algorithm. Ideally this should be small, and one definition of the scalability limit is that number of tasks for which this ratio exceeds some overhead tolerance value such as 1. For a value of 1, communication time and compute time are equal. For the case where a known machine is being evaluated, Sr and L are known. If Sr is expressed as a multiple of Mr, then terms 1 and 2 in parentheses can be added. This simplifies the equations further. For IBM P655 clusters these numbers are Sr = Mr/1.3 and L = 10 μsec (a moderate overestimate). The computational work is typically equivalent to some number (ranging from 10-40) of memory copy operations. Define this number as W and the total time to perform a time step becomes

Tstep = WX²/Mr + H(34.4X + 10.4H)/Mr + 16L    (6)

The communication time will be equal to the computation time for that X where

WX²/Mr = H(34.4X + 10.4H)/Mr + 16L    (7)

or

WX² = H(34.4X + 10.4H) + 16LMr    (8)

This is a quadratic equation. The larger of the two solutions is the one of interest and is

X = (34.4H + √(34.4²H² + 4W(10.4H² + 16LMr)))/2W    (9)
On IBM P655 machines Mr is about 250 real4 words/μsec and latency is about 10 μsec. For a halo depth of 1 (the simplest kind) and W=10, X is about 65. The scalability is in turn determined by how many grids of this size are required to store a model domain. Obviously small problems are not very scalable while large ones are. Modern grid point models represent the atmospheric state with six or seven state variables plus perhaps a few tracers. Assuming ten total variables
represented on fifty levels yields 500 state grids. The typical domain today is ~1 Gword (10**9 words) which breaks down to 500 2-Mword grids. Dividing this by 65² implies the communications:compute scalability limit is reached at about 470 tasks. The dominant term in the communication time above is latency. While machines with smaller latency would reduce this problem, another more fundamental optimization almost eliminates it even for machines with only modestly low latencies such as the P655. That is to consolidate all of the grids' send and receive buffers together so that instead of 500 latencies, one for each of the 500 buffers, one latency for a single large buffer is incurred. This increases the scalability limit to 75,000 tasks. With a more reasonable halo depth of 4, the scalability limit is still 8500 tasks. It is useful to examine the behavior of (9) (duplicated below) as various parameters vary
X = (34.4H + √(34.4²H² + 4W(10.4H² + 16LMr)))/2W    (9)
For the case where buffer build and transfer time dominate, X scales with 1/W and linearly with H. The scalability is determined by the ratio of perimeter to interior points. For latency dominated cases X scales with 1/W^(1/2) and the halo depth H is not significant. The problem described is determined mostly by buffer build and transfer time or, to state it another way, by the geometry of the decomposition. The scalability in turn is defined by the quotient of grid size/X² and thus increases linearly with problem size and with the square of the interior work but decreases with the square of the halo depth. This is not as alarming as first inferred for deep halos because they are associated with larger stencils and thus more interior work (but of course better approximation of point derivatives). The two metrics determined by hardware characteristics are latency and point bandwidth. From the previous discussion we see moderate latency increases have a small effect but bandwidth decreases have a larger effect. Doubling latency reduces scalability to 7900 tasks (from 8500) while halving bandwidth reduces it to 5200 tasks. Results for some other values of W, problem size (PS), latency, and bandwidth are tabulated below.
Scalability of halo exchange with a problem size of 1 Gword and halo depth 4 (20 km spaced global domain)

                      Latency=1    Latency=10    Latency=50
  Work=10 memcopy        8975         8473          6858
  Work=20 memcopy       31457        28653         20924

Scalability of halo exchange with problem size of 4 Gwords and halo depth=4 (10 km spaced global domain)

                      Latency=1    Latency=10    Latency=50
  Work=10 memcopy       35898        33892         27433
  Work=20 memcopy      125829       114893         83696

As above but for interconnect bandwidth reduced 50%

                      Latency=1    Latency=10    Latency=50
  Work=10 memcopy       20710        20029         17547
  Work=20 memcopy       71580        67825         55457

Scalability of halo exchange with problem size of 4 Gwords and halo depth=10 (10 km spaced global domain)

                      Latency=1    Latency=10    Latency=50
  Work=10 memcopy        5776         5721          5490
  Work=20 memcopy       20322        20000         18704
It is also useful to examine how a model slightly larger than today's might scale. The 1 Gword example above corresponds to horizontal grid dimensions of 1000x2000. This is equivalent to about 20 km horizontal resolution for a global domain grid point model. If we consider a 5 km resolution, the problem size increases by a factor of 16 and the scalability, with L=10 and interior work of 10 memcopies, is 135K processors for the halo depth of 4 case and 22K processors for the halo depth of 10 case.
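These estimates are easy to reproduce numerically. The short Python fragment below is our own illustration, not code from any forecast system; it evaluates equation (9) and the resulting task-count limit, and the n_grids argument is one way of modelling the buffer consolidation described above, in which the 16L latency cost is shared among all grids exchanged in a single buffer. Units follow the text: Mr in words per microsecond and L in microseconds.

    from math import sqrt

    def tile_size_limit(W, H, L, Mr, n_grids=1):
        """Tile edge X at which halo-exchange time equals compute time (eq. 9)."""
        lat = 16.0 * L / n_grids          # latency cost attributed to one grid
        disc = (34.4 * H) ** 2 + 4.0 * W * (10.4 * H ** 2 + lat * Mr)
        return (34.4 * H + sqrt(disc)) / (2.0 * W)

    def scalability(points_per_grid, W, H, L, Mr, n_grids=1):
        """Number of tiles (tasks) at which communication equals computation."""
        X = tile_size_limit(W, H, L, Mr, n_grids)
        return points_per_grid / X ** 2

    # 1 Gword domain = 500 grids of 2 Mword each (1000 x 2000 points per grid).
    grid_points = 1000 * 2000
    print(scalability(grid_points, W=10, H=1, L=10.0, Mr=250.0))               # ~470
    print(scalability(grid_points, W=10, H=1, L=10.0, Mr=250.0, n_grids=500))  # ~75,000
    print(scalability(grid_points, W=10, H=4, L=10.0, Mr=250.0, n_grids=500))  # ~8,500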
3. Spectral Methods
Spectral methods at first seem more problematic than grid point methods since calculation of the basis function coefficients requires summation over the entire domain. In practice however these summations are done in one direction on the domain and then in the other. The requirement for a non-communicating sum is then reduced to all of one dimension being available on each processor. The domain can then be transposed such that the second dimension is now completely available on each processor and only the first and third vary. After these second summations, a second transpose is often done so that the third, vertical dimension is all available on each processor, simplifying calculations of model physics. Other methods are available; in particular 1D decompositions at low task counts are more efficient and prevail at NCEP, but the method described is believed to scale the best to large numbers of tasks. The fundamental message passing operation supporting this is the domain transpose. In contrast with grid point methods the entire domain is moved through the interconnect fabric rather than just region boundaries. Although it would seem that this would generate large messages and reduce latency issues, the requirement for communication with large numbers of processors (with the simple definition of transpose, ALL processors must communicate with each processor), implies large numbers of messages and this number increases with the square of the number of processors. The analytic model of this situation is as follows. Consider a domain of size D partitioned among N processors. The portion on each processor is D/N. The transpose algorithm reorganizes this sub-domain into N buffers each of which is sent to a different processor, and then receives N new reorganized buffers from each of the N processors. Message size is thus
approximately D/N². This decreases very rapidly with increasing N even for large problems. For example with a 1 Gword domain on 5000 processors, each message is only 40 words or 160 bytes. This problem has not yet been encountered on the few hundred processors available at NCEP but will soon hinder scalability of single transposes on larger machines. Again define the scalability limit as that task count where communications overhead is the same as compute work. Define the compute work as W memory copies as before. The total time to perform a work + transpose operation is

WD/(N*Mr) (interior work time) + ((D/N)/N)*(1/Sr)*N + L*N (buffer send time for N messages) + ((D/N)/N)*(1/Sr)*N + L*N (buffer receive time for N messages) + (F*D/(N*Mr))*2 (buffer build time and buffer extract time)

For the buffer sends and receives we are sending messages of (D/N)/N words to each processor. A total of N messages are sent (hence the subsequent multiplication by N) and N latencies are incurred (hence the L*N term). The F*D/(N*Mr) term covers the time to build the buffers that will be sent and the time to extract new data from the received buffers. These do strided accesses to memory with a stride penalty F. The equation can be written more economically but the details of the terms are then lost. Simplifying yields

Tstep = WD/(N*Mr) + 2D/(N*Sr) + 2NL + 2FD/(N*Mr)    (10)
The ultimate failure of this method at high task counts is not sensitive to switch bandwidth and we can assume an upper bound of Mr (memory rate) for the switch stream rate to obtain

Tstep = WD/(N*Mr) + 2(F+1)D/(N*Mr) + 2NL    (11)
The communication and computation will be equal where

WD/(N*Mr) = 2(F+1)D/(N*Mr) + 2NL    (12)

or, more precisely,

WD/Mr - 2(F+1)D/Mr = 2N²L    (13)
N = √((W - 2F - 2)D/(2LMr))    (14)
For typical values D=10**9, W=30, F=5, L=10 μsec and Mr=250 words/μsec we obtain N ≈ 1890 tasks. Even with W=50 and L=1 μsec (very low for current machines) N is only about 8900 tasks. From (13) we see N scales with the square root of D, the square root of W and the inverse square root of L. Increasing D and decreasing L will thus have disappointingly small effects.
In contrast with the halo situation where some benefit is obtained even at very large task counts, there is a maximum N beyond which the time step time increases with task count. This is obtained by differentiating (11) to obtain

dT/dN = -WD/(N²*Mr) - 2(F+1)D/(N²*Mr) + 2L    (15)
Minimum time is obtained where this is zero or
N = √((W + 2F + 2)D/(2LMr))    (16)
For the case F=5, W=30, L=10 μsec, D=10**9, Mr=250 words/μsec, the maximum task count is 2898 tasks, which is again disappointingly small. Fortunately the summations in each dimension are separable and, since summations are done in one direction, the transpose need not be done across all of them but only a 2D plane. If the domain is considered as a 3D XYZ volume, the single transpose can be replaced by Y parallel transposes on the XZ plane, X parallel transposes on the YZ plane or Z parallel transposes on the XY plane depending on the dimension of summation. For dimensions IX, IY, IZ, the message sizes are increased by a factor of IY, IX, and IZ respectively, reducing the effect of latency on scalability at the expense of an extra transpose or possibly two transposes. For the limiting case of a cubic domain with IY=IX=IZ=D^(1/3)=X the transposes exchange with N/X processors rather than N. The message sizes increase by a factor of X and the number of messages, hence latencies, decreases by a factor of X. The effect on (11) is dramatic for large N and (16) becomes
N = √((W + 4F + 4)DX/(4LMr))    (17)
This effectively increases scalability by a factor of approximately (X/2)^(1/2) for large W. For small W, the factor approaches X^(1/2). At large task counts with X ~ 100 to 1000, this is significant. However it should be noted that the single transpose is replaced with two and the transpose cost is thus doubled at small task counts. At all task counts the transpose behaves as an extra 2F+2 memory copies of the entire domain even in the limiting case N=1 and this is a large fixed cost. This method is not advisable for small task counts (<10**3 or so) and is not used at NCEP. One implementation of this method is described in [2].
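A similar numerical illustration of the transpose model (again our own sketch, not operational code) evaluates equations (14), (16) and (17) for the parameter values used above.

    from math import sqrt

    def transpose_equal_work(D, W, F, L, Mr):
        """Eq. (14): N at which transpose overhead equals compute work."""
        return sqrt((W - 2 * F - 2) * D / (2.0 * L * Mr))

    def transpose_optimal_single(D, W, F, L, Mr):
        """Eq. (16): N minimising the time step with a single full transpose."""
        return sqrt((W + 2 * F + 2) * D / (2.0 * L * Mr))

    def transpose_optimal_planes(D, W, F, L, Mr):
        """Eq. (17): N minimising the time step with plane transposes, X = D**(1/3)."""
        X = D ** (1.0 / 3.0)
        return sqrt((W + 4 * F + 4) * D * X / (4.0 * L * Mr))

    D, W, F, L, Mr = 1e9, 30, 5, 10.0, 250.0
    print(transpose_equal_work(D, W, F, L, Mr))      # ~1,890 tasks
    print(transpose_optimal_single(D, W, F, L, Mr))  # ~2,900 tasks
    print(transpose_optimal_planes(D, W, F, L, Mr))  # larger by roughly (X/2)**0.5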
4. Summary and Conclusions
Motivation for this work came from witnessing the common perception that scalability declines beyond 10**3 tasks are inevitable. This work has examined the scalability of the exchange methods required to support finite difference methods of forecast integration and the scalability of the transpose methods required to support the summations required for spectral methods of integration. These fundamental operations are simply expressed and can be analytically described with an algebraic model. Performance metrics and model decomposition parameters can then be adjusted and the model's behavior examined in far less time than would be required to implement on various machines and test performance. In particular the surprising experience that extremely low latency was less important than first thought in the early 1990s might have been anticipated with this form of analysis. The model shows that finite difference methods can scale to very large numbers of processors with scalability limited by the fundamental ratio of perimeter to interior points. Spectral methods, which scale linearly to the several hundred to thousand or so processors available on NCEP platforms in 2006, will pose more challenges but these are tractable also. A second conclusion to be inferred is that scalability declines in grid point and spectral models at low (~100-1000 tasks) processor counts should not be treated as fundamental limits to scalability but should instead be treated as implementation design or optimization issues. The implementation should be examined for such issues as I/O slowdowns (input can be parallelized, output can be pipelined in parallel), many-to-one operations, or load imbalance. The issues described are major ones but they are not intractable, and the realization that the fundamental integration methods are very scalable should motivate addressing the remaining issues.
References
[1] Kauranne, T. and Barros, S.R.M. (1992): Scalability Estimates of Parallel Spectral Atmospheric Models, Proceedings of the 5th ECMWF Workshop on the Use of Parallel Processors in Meteorology, ECMWF, November 23-27, 1992.
[2] Juang, H.M. and Kanamitsu, M.: The Computational Performance of the NCEP Seasonal Forecast Model on Fujitsu VPP5000 at ECMWF, Proceedings of the 9th ECMWF Workshop on the Use of HPC in Meteorology, ECMWF, November 13-17, 2000.
INITIAL PERFORMANCE COMPARISON OF THE IBM P5-575 AND THE CRAY XT3 USING APPLICATION BENCHMARKS*
MIKE ASHWORTH, GRAHAM FLETCHER
Computational Science & Engineering Dept., Science and Technology Facilities Council, Daresbury Laboratory, Daresbury Science and Innovation Campus, Warrington, WA4 4AD, UK
The UK's flagship scientific computing service is currently provided by an IBM p5-575 cluster comprising 2560 1.5 GHz POWER5 processors operated by the HPCx Consortium. The next generation high-performance resource for UK academic computing has recently been announced. The HECToR project (High-End Computing Technology Resource) will be provided with technology from Cray Inc., a Cray XT4 system with an initial performance of some 60 Tflop/s.
In order to assess the performance difference we can expect in moving from HPCx to HECToR, we present initial results of a benchmarking programme to compare the performance of the IBM POWER5 system and the Cray XT3 (the predecessor of the XT4) across a range of capability applications of key importance to UK computational science. We present low-level kernel data from the Intel MPI Benchmarks together with four full application codes from diverse application domains: environmental science, computational engineering, molecular simulation and computational chemistry. We find a range of performance with three out of four codes performing better on the Cray system; the exception being a turbulence modeling code which has very high memory bandwidth requirements. This code is also the only one to suffer a significant performance penalty moving from single-core to dual-core on the XT3.
1. High-end computing provision in the UK
In the 1980s, high end computing (HEC) provision in the UK was dominated by vector systems. The aging CDC 7600s at the University of Manchester's Regional Computing Centre (UMRCC) were replaced in 1982 by a CDC Cyber 205 to supplement the existing Cray X-MP/8 at the University of London Computer Centre (ULCC). The next national system was a Cray Y-MP/8 installed at Rutherford Appleton Laboratory with a peak performance of just 2.7 Gflop/s.
* This work is supported by the UK’s Engineering & Physical Sciences Research Council through
its Facilities Agreement with STFC’s Computational Science & Engineering Department.
The Cyber 205 at UMRCC was replaced in 1988 by an Amdahl VP1200 vector processor and ULCC's Cray was succeeded in 1991 by a Convex C3860. Following a review that considered future scientific research and the potential of parallel computing, the break with vector systems was made in 1994 and a 256-processor Cray T3D was installed at Edinburgh Parallel Computing Centre (EPCC). This was when most user groups made the transition from serial code with vectorisable loops, possibly with some parallelism through Cray microtasking, to explicit parallel coding using the newly adopted standard, the Message Passing Interface (MPI). To assist this process the funding for the hardware was supplemented by specialist support through the High Performance Computing Initiative, with centres of expertise established at Daresbury Laboratory, EPCC and Southampton University. In 1997 the high-end capacity was supplemented by upgrading an existing Cray T3E-900 at EPCC for use by national users and in 1998, in a new tranche of funding, a Cray T3E-1200 was installed at the University of Manchester. This signalled the start of a strategy for HEC provision in the UK in which procurements would be started every three years with the aim of awarding a six-year contract to a Consortium who would provide hardware, accommodation and management and Computational Science and Engineering (CSE) support. There would thus always be two concurrently active services, which would be both complementary and competing. The six-year contract attempts to give some continuity of tenure to experienced staff, whereas the rapid rate of development of high-end systems is allowed for by requiring three deliveries of hardware, at years zero, two and four, with the aim of doubling performance at each stage (not quite keeping pace with Moore's Law or the TOP500). The 1998 service at the University of Manchester was the first of these six-year contracts, with systems operated by Computing Services for Academic Research (CSAR). As Cray Inc. was taken over by SGI, the T3E was eventually supplemented by an SGI Origin 3800 system, and finally replaced in 2003 by a 256-processor SGI Altix 3700 which, with further upgrades, reached 512 processors in 2004. The next contract was awarded in 2002 to HPCx, a consortium comprising Daresbury Laboratory, EPCC and IBM. The start of service in November 2002 featured an IBM p690 cluster with the SP Switch, which at 3.2 Teraflop/s on the Linpack benchmark, was the first single system available to UK academics to have a sustained performance in excess of 1 Teraflop/s. The upgrade in 2004 increased the clock speed from 1.3 GHz to 1.7 GHz with p690+ frames but more importantly introduced IBM's High Performance Switch, and increased the sustained performance to 6.2 Teraflop/s. A switch to POWER5 technology in 2005 led the way to a further doubling of performance to 12.9 Linpack Teraflop/s in late 2006.
The High End Computing Group at the Engineering and Physical Sciences Research Council (EPSRC), as the managing agent for high-end computing in the UK, is leading the next supercomputing procurement. From October 2007, the High End Computing Technology Resource (HECToR) will be the UK's next generation high-end service. As with HPCx, there will be two upgrades during the lifetime of the service. The initial Phase I system will comprise 60 Cray XT4 dual-core cabinets (~60 Tflop/s) with 6 GB of memory per node. In addition, one cabinet (~2 Tflop/s) of the latest Cray vector system, code-named 'Blackwidow', will be available mid-2008. The Phase II upgrade will be based on the next generation AMD multi-core processors, which are expected to be quad-core, with 8 GB of memory per node, yielding a system with ~250 Tflop/s performance. This time there have been separate procurements for the three different components of the £113 million provision, i.e.
1. Hardware Technology (contract awarded to Cray Inc)
2. Accommodation and Management (contract awarded to UoE HPCx Ltd)
3. Computational Science and Engineering Support (contract awarded to NAG Ltd)
Contracts with all three partners were signed in February 2007. The start of service is scheduled for 16th October 2007. The scale of this resource (11,328 processor cores) will see researchers having to scale their codes to efficiently use 1000s of processors within a single application. This represents a major challenge for code developers, performance specialists, and for post-processing the significant amounts of data that will be generated using HECToR. An element of this process is the understanding of how codes currently attaining high levels of performance on up to 1024 processors of the HPCx system will transition to the new machine. We present initial results of an evaluation programme to compare and understand the performance of the IBM POWER5 system and the Cray XT3 across a range of capability applications of key importance to UK computational science. Codes are considered from a number of application domains: environmental science, computational engineering, molecular simulation and computational chemistry.
2. Performance comparison - IBM p5-575 vs. Cray XT3
2.1. Description of the systems
The HPCx service started with POWER4 technology in 2002, but has upgraded through POWER4+ to its current POWER5 configuration. Each POWER5 chip [1] contains two processors, together with the Level 1 (L1) and Level 2 (L2) caches. Each processor has its own L1 instruction cache of 32 kB and L1 data cache of 64 kB integrated onto one chip. Also on board the chip is the L2 cache (instructions and data) of 1.9 MByte, which is shared between the two processors. Four chips (8 processors) are integrated into a Multi-Chip Module (MCM). Two MCMs (16 processors) comprise one node. Each MCM is configured with 128 MB of L3 cache and 16 GB of main memory. The total main memory of 32 GB per node is shared between the 16 processors of the node. At the time of this work the "Phase3" HPCx system had just been completed, comprising some 160 IBM eServer 575 compute nodes, each containing 16 1.5 GHz processors, giving a total of 2560 processors. Within a node, MCMs communicate via shared memory. Communication between nodes is provided by IBM's High Performance Switch (HPS) [2]. A total of 4 internode links are available: each node has 2 network adapters, each with 2 links. Each node runs a single copy of AIX 5.3. Applications were compiled with xlf Fortran version 9 and the VisualAge C Compiler version 7.0 and linked with the MPI provided by the Parallel Operating Environment (poe) version 4.2.
The Cray XT3 [3] is a massively parallel system combining commodity and open source components with custom-designed components. The architecture is based on the Red Storm technology that was developed jointly by Cray Inc. and the U.S. Department of Energy's Sandia National Laboratories. The XT3 system is based around AMD Opteron 64-bit processors: the main system (palu) at the Swiss National Supercomputing Centre (CSCS) has 1664 2.6 GHz single-core Opteron processors. For the single-core vs. dual-core comparisons we also used the CSCS development system (gele) which has 2.6 GHz dual-core Opterons. Each compute node has 2 GB of main memory (so in the dual-core case this is shared between the two processors). Each Opteron processor is directly connected via the chip's HyperTransport to a dedicated Cray SeaStar chip (based on the IBM POWER architecture). Each SeaStar contains a 6-port router and communications engine and also provides a serial connection to the Cray RAS and Management system. The CSCS system is configured as a full three-dimensional torus of size 9x12x16. Compute nodes run the UNICOS/lc kernel. This is a light-weight single-task OS, minimizing OS jitter effects, providing
contiguous physical allocations and no demand paging. This places some limitations on applications, such as no support for threads, sockets, mmap, native I/O, the fork call, the exec call or access to the /proc file system. Applications were run under version 1.4 of the programming environment, using version 6.1 of the PGI Fortran compiler and version 6.1 of the Portland Group C compiler. The "-O3 -fastsse" Fortran compiler options were used. All jobs were run in small pages using the "-small_pages" option to the yod command, as this has been found in most cases to be faster than the large page default [4].
2.2. Intel MPI benchmarks

The Intel MPI Benchmarks (IMB) [5] are an open-source set of MPI benchmarks which may be used to measure the performance of an MPI implementation. The software is available by free download from the Intel website as part of the Intel Cluster Tools release and was formerly known as the Pallas MPI Benchmarks (PMB) from Pallas GmbH. The IMB-MPI1 test suite benchmarks point-to-point and collective communication operations. There is also an MPI-2 test suite for the extended functionality of the MPI-2 standard, such as one-sided communications and file input/output. Version 2.3 was used for the tests reported here. Version 3.0 has been released subsequently; it contains improved reporting and better error management, plus an additional benchmark case for MPI_Alltoallv, which was not present in version 2.3.

Figures 1 and 2 show data from the IMB PingPong benchmark. On the IBM p5-575 the ping-pong was set up to run between different nodes. In figure 1 we plot the time in microseconds for shorter messages, which shows the message latency, or zero-message time. In figure 2 we plot the bandwidth in MB/s for the larger message lengths, which emphasizes the asymptotic communications bandwidth. Table 1 shows the latency and asymptotic bandwidth figures derived from these data, which in principle may be used to estimate the communications performance of real applications. The latency figures are remarkably similar and, although the IBM does have the edge with its superior bandwidth, in practice most applications which are communications-limited are limited by latency rather than bandwidth. We therefore expect the communications-dependent scaling of applications to be very similar on the two systems.
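As an illustration of what the PingPong benchmark measures, the following minimal C++ sketch (using the MPI C interface) times a two-rank message exchange in the same spirit as IMB; it is not the IMB source, and the message length and repetition count are arbitrary choices.

#include <mpi.h>
#include <cstdio>
#include <vector>

// Minimal ping-pong sketch: ranks 0 and 1 bounce a message back and
// forth; half the averaged round-trip time gives the latency for small
// messages and the effective bandwidth for large ones.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    const int nbytes = 1 << 20;          // message length in bytes
    const int repeats = 1000;            // repetitions to smooth timer noise
    std::vector<char> buf(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < repeats; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double one_way = (MPI_Wtime() - t0) / (2.0 * repeats);   // seconds

    if (rank == 0)
        std::printf("%d bytes: %.2f us, %.1f MB/s\n",
                    nbytes, one_way * 1.0e6, nbytes / one_way / 1.0e6);
    MPI_Finalize();
    return 0;
}

Run with two ranks placed on different nodes and with nbytes swept from a few bytes to several megabytes, a measurement of this kind produces the two regimes plotted in figures 1 and 2.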
Figure 1. Time in microseconds for the IMB PingPong benchmark as a function of message length in bytes for the Cray XT3 and IBM p5-575.
Figure 2. Bandwidth in MB/s for the IMB PingPong benchmark as a function of message length in bytes for the Cray XT3 and IBM p5-575.
Table 1. Latency and bandwidth figures for the Cray XT3 and IBM p5-575 derived from the IMB PingPong benchmark.

                Latency (μs)    Asymptotic Bandwidth (MB/s)
   Cray XT3         5.6                    1100
   IBM p5-575       5.3                    1560
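The standard first-order way of using these figures, not taken from the paper itself, is the linear model t(n) ≈ latency + n/bandwidth for an n-byte message; the short sketch below evaluates it for the Table 1 values and shows why latency dominates until messages reach megabyte sizes.

#include <cstdio>

// Rough message-time estimates from the Table 1 figures, using the
// simple model t(n) = latency + n / bandwidth (1 MB/s = 1 byte/us).
int main() {
    struct Machine { const char* name; double lat_us; double bw_MBps; };
    const Machine machines[2] = { {"Cray XT3",   5.6, 1100.0},
                                  {"IBM p5-575", 5.3, 1560.0} };
    const double sizes[] = {1.0e2, 1.0e4, 1.0e6};   // message lengths in bytes
    for (const auto& m : machines)
        for (double n : sizes)
            std::printf("%-11s %9.0f bytes: %8.1f us\n",
                        m.name, n, m.lat_us + n / m.bw_MBps);
    return 0;
}

For a 100-byte message the two machines differ by a fraction of a microsecond, whereas the bandwidth advantage of the IBM only becomes visible for messages approaching a megabyte, which is consistent with the expectation stated above.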
The IMB benchmark also tests most of the collective communications available in MPI-1. In figure 3 we show comparative timings for an MPI_Allreduce on 128 processors and in figure 4 those for an MPI_Bcast, again on 128 processors. It is remarkable how similar the timings are: for the Allreduce the Cray is somewhat faster except for smaller messages; for the Bcast the IBM is slightly faster across the range.
Figure 3. Time in microseconds for the IMB MPI_Allreduce benchmark on 128 processors as a function of message length in bytes for the Cray XT3 and IBM p5-575.
Figure 4. Time in microseconds for the IMB MPI_Bcast benchmark on 128 processors as a function of message length in bytes for the Cray XT3 and IBM p5-575.
2.3. The POLCOMS Coastal Ocean Model

The Proudman Oceanographic Laboratory Coastal Ocean Modelling System (POLCOMS) has been developed to tackle multi-disciplinary studies in coastal/shelf environments and optimized to make use of high-performance parallel computers [6]. In order to improve simulations of marine processes, we need accurate representation of eddies, fronts and other regions of steep gradients. The current generation of models includes simulations which cover large regions of the continental shelf at approximately 1 km resolution. The central core is a sophisticated 3-dimensional hydrodynamic model that provides realistic physical forcing to interact with, and transport, environmental parameters. The hydrodynamic model is a finite difference model based on a latitude-longitude Arakawa B-grid in the horizontal and S-coordinates in the vertical. Conservative monotonic advection routines using the Piecewise Parabolic Method are used to preserve strong frontal gradients. Vertical mixing is through turbulence closure (Mellor-Yamada level 2.5).

POLCOMS has been parallelized using grid partitioning in the two horizontal dimensions. A recursive bisection algorithm is employed which balances the number of active sea-points across the processors in order to
optimize the load balance. Communication is managed by the well-known method of maintaining halo regions surrounding each sub-domain. The vertical dimension is kept local to each processor, as this has performance advantages for processes which take place within the water column, such as convection, downwelling and vertical diffusion. As with many finite difference codes, the computational load is heavy on accesses to main memory, with limited cache reuse.

Two benchmark cases were run in this study. The first is the Medium-Resolution Continental Shelf (MRCS) model. This covers the north-west European shelf seas at a resolution of 1/10 degree x 1/15 degree, giving a grid size of 251 x 206 x 20. It is used as an operational forecast model by the UK Met Office. The performance is reported in figure 5 as the amount of work (grid-points x timesteps) divided by the time. The scaling on the two systems is very similar, with the XT3 having greater absolute performance, rising from 1.34 times faster on 16 processors to 1.85 times on 512.
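The halo-exchange pattern referred to above can be illustrated with the schematic below. It is written in C++ against the MPI C interface rather than the model's Fortran, assumes a simple Cartesian process grid and a one-point-deep halo, and is not the POLCOMS implementation, which partitions by recursive bisection over active sea points.

#include <mpi.h>
#include <vector>

// Schematic 2D halo exchange for a local (nx+2) x (ny+2) array stored
// row-major, with a one-point halo on each side. Neighbours returned as
// MPI_PROC_NULL at non-periodic boundaries make the calls no-ops;
// corner points are ignored, as is sufficient for a 5-point stencil.
void exchange_halos(std::vector<double>& field, int nx, int ny, MPI_Comm cart) {
    int south, north, west, east;
    MPI_Cart_shift(cart, 0, 1, &south, &north);   // i (row) direction
    MPI_Cart_shift(cart, 1, 1, &west,  &east);    // j (column) direction

    // Rows are contiguous: exchange first/last interior rows north-south.
    MPI_Sendrecv(&field[1*(ny+2)+1],      ny, MPI_DOUBLE, south, 0,
                 &field[(nx+1)*(ny+2)+1], ny, MPI_DOUBLE, north, 0,
                 cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&field[nx*(ny+2)+1],     ny, MPI_DOUBLE, north, 1,
                 &field[0*(ny+2)+1],      ny, MPI_DOUBLE, south, 1,
                 cart, MPI_STATUS_IGNORE);

    // Columns are strided: describe them with a vector datatype.
    MPI_Datatype column;
    MPI_Type_vector(nx, 1, ny + 2, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);
    MPI_Sendrecv(&field[1*(ny+2)+1],      1, column, west, 2,
                 &field[1*(ny+2)+ny+1],   1, column, east, 2,
                 cart, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&field[1*(ny+2)+ny],     1, column, east, 3,
                 &field[1*(ny+2)+0],      1, column, west, 3,
                 cart, MPI_STATUS_IGNORE);
    MPI_Type_free(&column);
}

Because POLCOMS keeps the vertical dimension local to each processor, the real exchange carries all vertical levels for each boundary point, but the communication pattern is of this form.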
Figure 5. Performance in model days per day for the POLCOMS MRCS model as a function of number of processors for the Cray XT3 and IBM p5-575.
Figure 6. Performance in model days per day for the POLCOMS HRCS model as a function of number of processors for the Cray XT3 and IBM p5-575.
The second case is the High-Resolution Continental Shelf (HRCS) model. This is a much larger grid, of size 1001 x 801 x 34 points, representing a resolution of 1/40 degree x 1/60 degree covering a similar area. This is a research model used for investigations of the circulation of the north-west European continental shelf. Performance is shown in figure 6 out to 1024 processors on both systems. For this case performance is almost identical except at 1024 processors, where the XT3 achieves a 40% advantage. This may be due to cache effects, as the sub-domain size on a single processor decreases with increasing number of processors and this interacts with the different-sized caches on the two systems.
Figure 7. Performance of the PDNS3D turbulent flow benchmark as a function of number of processors for the Cray XT3 and IBM p5-575.
2.4. PDNS3D - a Direct Numerical Simulation of Turbulence

Fluid flows encountered in real applications are invariably turbulent. There is, therefore, an ever-increasing need to understand turbulence and, more importantly, to be able to model turbulent flows with improved predictive capabilities. As computing technology continues to improve, it is becoming more feasible to solve the governing equations of motion - the Navier-Stokes equations - from first principles. The direct solution of the equations of motion for a fluid, however, remains a formidable task, and simulations are only possible for flows with small to modest Reynolds numbers. Within the UK, the Turbulence Consortium (UKTC) has been at the forefront of simulating turbulent flows by direct numerical simulation (DNS). UKTC has developed a parallel code to solve problems associated with shock/boundary-layer interaction [7].
The PDNS3D code was originally developed for the Cray T3E and is a sophisticated DNS code that incorporates a number of advanced features: namely high-order central differencing; a shock-preserving advection scheme from the total variation diminishing (TVD) family; entropy splitting of the Euler terms; and a stable boundary scheme. The code has been written using standard Fortran 90 together with MPI in order to be efficient, scalable and portable across a wide range of high-performance platforms. The benchmark is a simple turbulent channel flow using a grid size of 360 x 360 x 360 run for 100 iterations. The most important communications structure is a halo exchange between adjacent computational sub-domains. Provided the problem size is large enough to give a small surface-area-to-volume ratio for each sub-domain, the communications costs are small relative to computation and do not constitute a bottleneck, and we see almost linear scaling from all systems right out to 1024 processors. Performance, shown in figure 7, shows the IBM p5-575 with an advantage which rises from 1.10 times at 32 processors to 1.43 times at 1024. Hardware profiling studies of this code have shown that its performance is highly dependent on cache utilization and bandwidth to main memory [8].
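A back-of-the-envelope calculation makes the surface-to-volume argument concrete; the decompositions and the one-point halo depth used below are assumptions for illustration (a high-order scheme such as PDNS3D's would use deeper halos), not figures from the code.

#include <cstdio>

// Ratio of halo points to interior points for a 360^3 grid split into
// px x py x pz sub-domains with a one-point-deep halo (illustrative only).
int main() {
    const double N = 360.0;
    const int decomp[][3] = { {4, 4, 4}, {8, 8, 8}, {16, 8, 8} };
    for (const auto& d : decomp) {
        double nx = N / d[0], ny = N / d[1], nz = N / d[2];
        double interior = nx * ny * nz;
        double halo = 2.0 * (nx * ny + ny * nz + nx * nz);
        std::printf("%4d procs: local %5.1f x %5.1f x %5.1f, halo/interior = %.3f\n",
                    d[0] * d[1] * d[2], nx, ny, nz, halo / interior);
    }
    return 0;
}

Even at 1024 processors the one-deep halo amounts to well under a fifth of the interior points per sub-domain, which is why the communication cost stays small relative to computation for this problem size.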
2.5. The DL_POLY 3 Molecular Dynamics Code

DL_POLY [9] is a general-purpose molecular dynamics simulation package designed to cater for a wide range of possible scientific applications and computer platforms, especially parallel hardware. Two graphical user interfaces are available, one based on the CERIUS2 Visualiser from Accelrys and the other on the Java programming language. DL_POLY supports a wide range of application areas, including [10] ionic solids, solutions, metals, zeolites, surfaces and interfaces, complex systems (e.g. liquid crystals), minerals, bio-systems, and those in spectroscopy.

Evaluation of the Coulomb potential and forces in DL_POLY is performed using the smooth particle mesh Ewald (SPME) algorithm. As in all Ewald methods, this splits the calculation into two parts, one performed in real space and one in Fourier space. The former only requires evaluation of short-ranged functions, which fits in well with the domain decomposition used by DL_POLY 3, and so scales well with increasing processor count. However, the Fourier component requires three-dimensional FFTs to be performed. These are global operations, and so a different strategy is required if good scaling is to be achieved.
Figure 8. Performance of the DL_POLY Gramicidin A benchmark as a function of number of processors for the Cray XT3 and IBM p5-575.
To address the limitations of commonly available FFT libraries, a parallel 3D FFT has been written [11] which maps directly onto DL_POLY's data distribution; this involved parallelizing the individual 1D FFTs in an efficient manner. While the method will be slower than the proprietary routines for small processor counts, at large numbers it is attractive, since (a) while moving more data in total, the method requires far fewer messages, so that in the latency-dominated regime it should perform better, and (b) global operations, such as all-to-all operations, are totally avoided. The method is extremely flexible, allowing a much more general data distribution than those of other FFTs, and as such should be useful in other codes which do not map directly onto a "by planes" distribution.

Performance (defined as an arbitrary constant divided by the execution time) is shown in figure 8 for a simulation of a gramicidin A molecule in water: a total of 792,960 atoms. Scaling with the number of processors is very similar, with the XT3 outperforming the IBM p5-575 by factors ranging from 1.23 to 1.35.
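To give a feel for point (a), the sketch below compares rough message counts per 3D transform for an all-to-all (transpose) organisation and for a scheme that parallelizes the 1D FFTs with pairwise exchange stages; the constants assumed (two transposes, three dimensions, log2 P stages) are illustrative guesses and not taken from the DL_POLY 3 implementation.

#include <cmath>
#include <cstdio>

// Illustrative message counts per 3D FFT on P processes: a naive
// all-to-all transpose costs about P*(P-1) messages per transpose,
// while pairwise exchange stages cost about P*log2(P) messages per
// dimension. The constants here are assumptions, not measured values.
int main() {
    for (int P = 64; P <= 1024; P *= 4) {
        double alltoall = 2.0 * P * (P - 1.0);                 // two transposes
        double pairwise = 3.0 * P * std::log2(double(P));      // three dimensions
        std::printf("P = %4d: transpose ~ %9.0f messages, pairwise ~ %7.0f messages\n",
                    P, alltoall, pairwise);
    }
    return 0;
}

Under these assumptions the difference at 512 or 1024 processors is roughly two orders of magnitude in message count, which is where the latency-dominated advantage claimed above would come from, even though each pairwise stage moves more data.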
2.6. The GAMESS-US Electronic Structure Code

GAMESS [12] is a program for ab initio molecular quantum chemistry. Briefly, GAMESS can compute SCF wave-functions ranging from RHF, ROHF, UHF, GVB, and MCSCF. Correlation corrections to these SCF wave-functions include Configuration Interaction, second-order Perturbation Theory, and Coupled-Cluster approaches, as well as the Density Functional Theory approximation. Nuclear gradients are available, for automatic geometry optimization, transition state searches, or reaction path following. Computation of the energy Hessian permits prediction of vibrational frequencies, with IR or Raman intensities. Solvent effects may be modeled by discrete Effective Fragment Potentials, or continuum models such as the Polarizable Continuum Model. Numerous relativistic computations are available, including third-order Douglas-Kroll scalar corrections and various spin-orbit coupling options. The Fragment Molecular Orbital method permits many of these sophisticated treatments to be used on very large systems, by dividing the computation into small fragments.

GAMESS can be compute- and memory-bound, often both simultaneously. As with many electronic structure packages, the computation of electron repulsion integrals overwhelmingly dominates the execution time of its key functionality. At the same time, the need for large storage arrays lies behind the current trend in distributed data algorithms, where a major concern is the minimization of communication overheads. In GAMESS, parallel distributed data algorithms rely on DDI (Distributed Data Interface) for their implementation, while the bulk of the linear algebra is handled by the BLAS library. An in-house eigensolver based on the Eispack routines is used for matrices up to order of a few thousand, after which a repeated matrix-multiply approach is adopted for methods in which only a handful of roots are required. There are plans to incorporate a distributed data solver such as ScaLAPACK in the near future.

Mixed silicon-carbon clusters have long been a focus of interest in the fields of materials science and astrophysics, the latter due to the observation of small silicon-carbide clusters in interstellar space. Therefore, several experimental studies have been performed on silicon-carbide clusters in an effort to discover their geometries and electronic structures. SiC3 is of particular interest, as there is experimental evidence for the presence of three isomers: two singlet cyclic rings, with either two (2s) or three (3s) Si-C bonds, and a linear triplet. The linear triplet conformation provides the scalability case study used here.
Figure 9. Performance (inverse execution time) of the GAMESS-US SiC3 benchmark case on the Cray XT3 and IBM p5-575.
Performance (defined as an arbitrary constant divided by the execution time) for this case is shown in figure 9. Scaling with number of processors is similar with the Cray XT3 having an advantage of 1.29 times on 128 processors dropping to 1.19 times on 256 processors.
2.7. Single-core vs. Dual-core

The trend away from single-core microprocessors towards systems in which two (dual-core) or more (multi-core) processors are integrated onto the same chip leads to questions over the expected performance. The issue is that, although the processors which are co-fabricated on a single chip are independent, in most architectures there are some shared resources: shared caches or shared data paths to main memory. We have compared single-core versus dual-core execution for both the Cray and IBM systems under test here. For the Cray XT3 this was tested by running on systems equipped with single-core or dual-core nodes. For the IBM we simulated single-core operation by running on only one core of each dual-core chip.
Figure 10. Performance of the PDNS3D code in both single-core and dual-core executions on the Cray XT3 and IBM p5-575.
Figure 10 shows performance results for the PDNS3D code. On the Cray XT3, performance drops to 62% of the single-core figure when 64 MPI tasks are run with both cores in use, whereas on the IBM POWER5 system performance only drops to 90%. However, we have seen that PDNS3D places unusual demands on the memory system. For a more "normal" code, POLCOMS, the dual-core performance on the Cray XT3 is 95% of the single-core figure.
3. Conclusions and Future Work

We have presented initial results of a benchmarking programme to compare the performance of the IBM POWER5 system and the Cray XT3, using both low-level kernels and four full application codes from diverse application domains: environmental science, computational engineering, molecular simulation and computational chemistry. The IMB communications benchmarks show very similar performance characteristics. The message latencies are within 10% and, although the bandwidth is better on the IBM, we believe that for most applications it is the
latency that has a greater effect on performance. Global communication operations also perform very similarly. The conclusion that there is little to choose between these systems from the point-of-view of MPI communications is borne out when we turn to the applications: their scaling performance is very similar for all four applications. Three out of four applications perform better on the Cray XT3 with speed-ups ranging from 1.10 times to 1.85. The exception is a code which is known to be memory bandwidth limited for which the IBM is some 43% faster. Codes which stress the memory system are also likely to suffer a performance penalty when using dual-core systems. For most codes we can expect a per-processor performance improvement in moving from HPCx to HECToR of up to a factor of two. However, the HECToR system is intended to provide UK computational scientists with a resource with about a ten-fold increase in capability. The remaining increase has to be achieved by an increase in the number of processors in regular use by applications codes. The capability aspirations of UK scientists are therefore dependent on the scalability of codes up to, in the case of HECToR, around 10,000 processors. The challenge for the future is to develop an integrated programme of algorithm optimization and development to enable new scales of modeling and new generations of simulation to realize the potential of these large-scale systems.
Acknowledgements

We are grateful to the Swiss National Supercomputing Centre (CSCS) for access to the Cray XT3 system.

References

1. IBM POWER5 Chip: A Dual-Core Multithreaded Processor, Ron Kalla, Balaram Sinharoy, Joel M. Tendler, IEEE Micro, Vol. 24, No. 2, pp. 40-47 (2004)
2. An Introduction to the New IBM eServer pSeries High Performance Switch, IBM Redbooks, SG24-6978-00 (2003)
3. Cray XT3 System Overview, version 1.4, S-2423-14, Cray Inc. (2006)
4. The Effect of Page Size and TLB Entries on Application Performance, Neil Stringfellow, Proceedings of the Cray User Group, 8th-11th May 2006, Lugano, Switzerland (2006)
5. Intel Cluster Toolkit 3.0 for Linux, Intel MPI Benchmarks, http://www.intel.com/cd/software/products/asmona/eng/cluster/clustertoolkit/219848.htm
6. Optimization of the POLCOMS Hydrodynamic Code for Terascale High-Performance Computers, M. Ashworth, J.T. Holt and R. Proctor, Proceedings of the 18th International Parallel & Distributed Processing Symposium, 26th-30th April 2004, Santa Fe, New Mexico (2004)
7. Direct Numerical Simulation of Shock/Boundary Layer Interaction, N.D. Sandham, M. Ashworth and D.R. Emerson, http://www.cse.clrc.ac.uk/ceg/sbli.shtml
8. Single Node Performance of Applications and Benchmarks on HPCx, M. Bull, HPCx Technical Report HPCxTR0416 (2004), http://www.hpcx.ac.uk/research/hpc/technicalreports/HPCxTR0416.pdf
9. DL_POLY: A general purpose parallel molecular dynamics simulation package, W. Smith and T.R. Forester, J. Molec. Graphics 14 (1996) 136
10. DL_POLY: Applications to Molecular Simulation, W. Smith, C. Yong and M. Rodger, Molecular Simulation 28 (2002) 385
11. A Parallel Implementation of SPME for DL_POLY 3, I.J. Bush and W. Smith, http://www.cse.clrc.ac.uk/adfft.shtml
12. General Atomic and Molecular Electronic Structure System, M.W. Schmidt, K.K. Baldridge, J.A. Boatz, S.T. Elbert, M.S. Gordon, J.H. Jensen, S. Koseki, N. Matsunaga, K.A. Nguyen, S.J. Su, T.L. Windus, M. Dupuis, J.A. Montgomery, J. Comput. Chem. 14, 1347-1363 (1993)
ROLE OF PRECISION IN METEOROLOGICAL COMPUTING: A STUDY USING THE NMITLI VARSHA GCM

T. N. VENKATESH* and U. N. SINHA
Flosolver Unit, National Aerospace Laboratories, PB 1779, Bangalore 560017, India
*E-mail:
[email protected]

An issue in large scale computing of unsteady phenomena is the role played by round-off errors. These errors can accumulate in long integrations, leading to major differences in computed quantities such as rainfall. Computations with the same code at different precisions (single, double and arbitrary precision) have been made to assess the role of round-off. For the study, Varsha2C and Varsha2C-MP, the C and multi-precision versions of the spectral, hydrostatic Varsha GCM, have been used. Varsha2C-MP uses the ARPREC library for computing with an arbitrary number of digits. We find that even for sequential runs, round-off causes solutions to differ beyond 12 days when two different processors are used. With the same processors, the results of parallel runs made using a different number of processors differ significantly after 10 days, due to the round-off errors in the global summation operations. Use of double precision and changing the communication strategy reduces the differences by about 10%. With the use of multi-precision, repeatable results, both across different processor types and different numbers of processors, can be obtained for up to 30 days of integration, provided 64 or more digits are used for the computing. For the Lorenz system of equations, however, precision has no significant effect on the results. Calculations using up to 2048 digits show that, given the same initial perturbation, the time at which two solutions diverge (by order 1) does not depend on the precision. These results suggest that, for long integrations of meteorological models, far more careful computing is required than is generally appreciated.
Keywords: Round-off errors, multi-precision computing, Varsha GCM
1. Introduction

The field of meteorological computing is complex and has many dimensions to it. To obtain realistic and meaningful results one has to ensure that the physical model adopted, and the simplifying assumptions and numerical approximations made, are reasonable; that the algorithms and the computer code used are correct; and finally that the machine does give the intended
output. While there is an extensive literature on modelling and numerics, there is relatively less work on the code- and machine-dependent parts, even though it is known to practitioners that some results are sensitive to the operating system and even the compiler optimizations used. With the use of massively parallel architectures and heterogeneous environments (in grid computing), the issue of reproducible results across different processor types and varying numbers of processors assumes even greater importance. The growth of round-off errors is not usually a problem for short (3 day) to medium range (7-10 day) integrations, but has to be addressed for longer (30 day, seasonal or climate) integrations. Since one is trying to isolate the effect of some small parameter, like variations in CO2 concentration, it is important to ensure that there is no numerical noise clouding the results.

We approach this problem by looking at what effect using a higher number of digits has on the results of 30 day simulations using the Varsha GCM of the National Aerospace Laboratories (NAL), Bangalore. The quantity of interest is the monsoon rainfall over India. The C version of the Varsha GCM was modified to enable multi-precision computing for this study. We find that the precision used has a significant impact on the computations beyond ten days of integration.

The organization of this paper is as follows. A brief description of the Varsha GCM and its different versions is given in section 2. The main results with the GCM computations are presented in section 3 and results with the simple Lorenz model in section 4. The discussion and conclusions are in section 5.

2. Varsha GCM
The Varsha GCM code version 1.0 is a hydrostatic spectral general circulation model developed at the National Aerospace Laboratories (NAL) as part of the Government of India's NMITLI project. The code has its roots in the GCM T-80 code of the National Centre for Medium Range Weather Forecasting (NCMRWF), India. The NCMRWF code was in turn based on the NCEP spectral model [1]. This code was ported to a UNIX platform from the CRAY-specific vectorized code and parallelized for distributed memory (MPI-type library) by NAL [2] in 1993. It was subsequently re-engineered [3] using FORTRAN 90 and, as part of the NMITLI project, new radiation and boundary layer modules were added [4]. The model can be run at different spectral truncations as well as physical grid resolutions. The number of vertical layers is 18 and the physical parameterizations include the Kuo-Anthes cumulus scheme. The shortwave radiation is computed as described in Sinha et al 1994 [2], while there are two options for the long wave radi-
ation computation: (i) the Fels-Schwarzkopf scheme and (ii) a new scheme devised for Varsha based on Varghese et al [5]. For the boundary layer the options are (i) the Monin-Obukhov scaling along with a gustiness parameter, and (ii) a new boundary layer scheme based on the scaling arguments of Rao and Narasimha [6].

In addition to Varsha 1.0, Varsha2C, a new code written completely in ANSI C, has been developed. This version has the added feature of being able to change from a single to a double precision run with the change of a compile-time flag. Another version, Varsha2C-MP, which can calculate quantities to an arbitrary number of digits, is described in more detail in the next section.

2.1. Multi-precision version of Varsha GCM
Fig. 1. Segments of the Varsha2C-MP code showing the changes required to use the multi-precision data type mp_real and control the number of digits.
Varsha2C-MP, the multi-precision version of the Varsha GCM, uses the ARPREC multi-precision library to calculate quantities to an arbitrary number of digits. The ARPREC package can perform arbitrary-precision arithmetic [7]. It is written in C++ and is portable across many platforms. A new type called mp_real can be defined and all the usual arithmetic operations performed on variables of this type. The number of digits used for computation can be set at run time, depending on the memory available. The main advantage is that existing codes can be converted to use this data
type with minimal changes. Since the ARPREC package can be linked to C++ programs, some minor changes were required to cast Varsha2C into C++-compatible code. Code segments showing the declaration of the mp_real data type and the function call to set the number of digits are shown in Figure 1. While ARPREC and similar multi-precision packages have been used for studies in number theory and vortex sheet computations, to the knowledge of the authors, Varsha2C-MP is the first code of this complexity (a spectral atmospheric GCM with over 25 thousand lines of code) to be software-engineered for arbitrary precision computing.
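A minimal sketch of the kind of compile-time switch described here is given below; the macro names, the ARPREC header path and the mp::mp_init call follow the ARPREC documentation and the spirit of Figure 1, but are assumptions rather than an extract of the Varsha2C-MP source.

// One typedef selects the working type, so the bulk of the model code
// is written once against the single name "real" (sketch, not Varsha2C-MP).
#ifdef MULTI_PRECISION
#include <arprec/mp_real.h>
typedef mp_real real;
#elif defined(DOUBLE_PRECISION)
typedef double real;
#else
typedef float real;
#endif

#include <cstdio>

int main() {
#ifdef MULTI_PRECISION
    mp::mp_init(128);        // working precision in decimal digits (assumed API)
#endif
    real a = 1.0, b = 3.0;
    real c = a / b;          // identical arithmetic source at any precision
    (void)c;
    std::printf("division performed at the selected precision\n");
    return 0;
}

Compiling with -DMULTI_PRECISION (and linking against ARPREC) or with -DDOUBLE_PRECISION then selects the precision without touching the numerical routines, which is the "minimal changes" property exploited by Varsha2C-MP.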
3. Results

Fig. 2. Comparison of the all India rainfall computed with sequential runs of the Varsha 1.0 code on Intel Pentium and Xeon processors.
There are two reasons for using rainfall as the variable for studying the sensitivity of the computation to the precision used. First of all, it is an important quantity during the monsoon in India [8]. Secondly, rainfall parameterizations usually involve conditional statements and table lookups (for the moist adiabat processes), making the computed rainfall a very sensitive quantity. The all India rainfall, which is the sum over all the grid-points within the country, further makes the variable even more sensitive to small shifts in the spatial patterns. Other quantities, like wind velocities and tem-
perature, are smoother, and differences are seen only after a longer period of integration. All the results reported here were started with the initial conditions for 1 July 2005. This month was characterized by two peaks in the rainfall over India: the first peak was due to heavy rainfall over Gujarat in the first week, and the second peak, in the fourth week, was due to very heavy rainfall over Maharashtra, in particular Mumbai. One month integrations were carried out. For the single and double precision runs, the horizontal resolution was 120 spectral modes and an 80 km physical grid. All the computations were performed on the Flosolver series of parallel computers [9,10]. The Flosolver Mk6, which uses Intel Pentium processors, and the Mk6-X (with Intel Xeon processors) were used. The compilers used were the Intel Fortran compiler for the Varsha GCM 1.0 and GNU gcc for Varsha2C and Varsha2C-MP.
Fig. 3. Comparison of the all India rainfall computed using the parallel Varsha2C code at single precision on Flosolver MK6-X, when the number of processors is varied.
First the lack of repeatability even with a sequential code is shown. In fig. 2, the all India rainfall computed on Intel Pentium and Xeon processors with the Varsha 1.0 code is shown. One can see that the solutions diverge after around 12 days. The effect of using a different number of processors is seen from Fig. 3. With the Varsha2C code run on one, four and eight processors, the
difference in results may be seen from around day 8. This behaviour is due to the presence of round-off errors during the calculation of the global sums from the partial sums computed on each processor. The global sums are required during the transformation of prognostic variable tendencies from the physical to the spectral domain. This is confirmed by the reduction of such deviations when a higher number of digits is used (results are presented in the following sections).
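The mechanism is ordinary floating-point non-associativity; the small self-contained example below (not Varsha code) shows how combining the same numbers as differently grouped partial sums changes a single-precision result.

#include <cstdio>

// Floating-point addition is not associative, so partial sums formed on
// different numbers of processors need not combine to identical results.
int main() {
    float big = 1.0e8f, small = 1.0f;

    // "One processor": strict left-to-right summation.
    float serial = ((big + small) + small) + (-big);

    // "Two processors": two partial sums combined at the end.
    float p0 = big + (-big);      // partial sum on rank 0
    float p1 = small + small;     // partial sum on rank 1
    float parallel = p0 + p1;

    std::printf("left-to-right: %.1f   partial sums: %.1f\n", serial, parallel);
    return 0;
}

In single precision the first grouping loses both small contributions (printing 0.0), while the second keeps them (printing 2.0); in a GCM the same effect, accumulated over millions of sums and amplified by the nonlinear dynamics, produces the divergence seen in Fig. 3.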
3.1. Effect of double precision and ordered summation

The effect of using double precision is seen from Fig. 4. One can see that there are differences in results starting from around day 10. As compared to the single precision runs, there is an improvement, but not very much.
Fig. 4. Comparison of the all India rainfall computed using the parallel Varsha2C code at double precision on Flosolver MK6-X, when the number of processors is varied.
A major portion of the communication in the Varsha GCM is taken by global sums and global maxima (MPI_ALLREDUCE calls). Various optimizations are possible for these global operations, depending on the machine architecture. However, these optimizations come at the expense of changing the order of summation, which can change the results. The role of various strategies of summation has been studied by He and Ding [11]. To reduce such errors, the communication calls were changed as follows. The global summations were carried out by transferring the partial sums
Fig. 5. Comparison of the all India rainfall computed using the parallel Varsha2C code at single precision, with the order of summation maintained, on Flosolver MK6-X, when the number of processors is varied.
Fig. 6. Comparison of the all India rainfall computed using the parallel Varsha2C code at double precision, with the order of summation maintained, on Flosolver MK6-X, when the number of processors is varied.
to one processor, performing the summation and then broadcasting the result to all the processors. This is of course an inefficient method but
reduces the errors introduced by changing the order of summation. The results using this strategy are shown in figures 5 and 6. The spread between the computations on different numbers of processors reduces, but there are still significant differences after around 12 days.
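A sketch of the ordered-summation strategy just described is given below; it is a simplified illustration in C++ with the MPI C interface, not the Varsha2C communication code.

#include <mpi.h>
#include <vector>

// Ordered global sum: gather the partial sums to rank 0, add them in a
// fixed (rank) order, then broadcast the total. Slower than
// MPI_Allreduce, but the combination order no longer depends on how the
// reduction tree happens to be built.
double ordered_global_sum(double local_partial, MPI_Comm comm) {
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    std::vector<double> partials(nprocs);
    MPI_Gather(&local_partial, 1, MPI_DOUBLE,
               partials.data(), 1, MPI_DOUBLE, 0, comm);

    double total = 0.0;
    if (rank == 0)
        for (int p = 0; p < nprocs; ++p)     // fixed, rank-ordered summation
            total += partials[p];

    MPI_Bcast(&total, 1, MPI_DOUBLE, 0, comm);
    return total;
}

As the results above show, this removes the order dependence of the final combination but not of the partial sums themselves, which still change with the domain decomposition; hence the residual spread after around day 12.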
3.2. Effect of multi-precision

We now report results using the Varsha2C-MP code. All the runs were started with the initial conditions for 1 July 2005. For the multi-precision runs, the horizontal resolution was 60 spectral modes and a 150 km physical grid; 32, 64 and 128 digits were used. This lower resolution was due to limitations of memory and the need to get the results in a reasonable amount of time. The 128 digit runs, for example, took around 10 days of real time.
Fig. 7. Variation of the all India rainfall computed at different precisions, using the parallel Varsha2C-MP code and four processors.
The global summations were carried out by transferring the partial sums to one processor, performing the summation and then broadcasting the result to all the processors, since the library does not handle operations with the mp_real data type at present. The effect of precision for a fixed number of processors (four) is shown in Fig. 7. While the single and double precision curves vary, there is convergence for 64 and 128 digits.
Fig. 8. Variation of the all India rainfall computed using different numbers of processors and different precision levels (64 and 128 digits) with the parallel Varsha2C-MP code.
Fig. 9. Comparison of the all India rainfall computed at a precision of 64 digits, on four Xeon processors and sixteen Pentium processors.
The dependence on the number of processors for precision levels of 64 and 128 digits is shown in figure 8. One can see that repeatable results are obtained in contrast to the single and double precision runs. There are only minor differences (around 5 %) in the computed rainfall beyond 25 days.
In figure 9, a comparison of the rainfall computed using 64 digits on four Xeon processors and on sixteen Pentium processors is made. The agreement between the results, obtained on different processor types and different numbers of processors, is in contrast to the divergence for sequential runs using single precision (Fig. 2).

3.3. Spatial patterns
Contours of rainfall for different precisions are shown in figures 10, 11 and 12. After ten days, there are only very minor differences between the double-precision and higher-precision runs. After twenty days of integration, differences between DP and MP-128 (taken as reference) are evident. By thirty days, MP-32 differs significantly. There are also some differences between the MP-64 and MP-128 runs.
Fig. 10. Rainfall contours after ten days of integration with the Varsha2C-MP code at different precisions. DP : Double precision, MP-32 : 32 digits, MP-64 : 64 digits and MP-128 : 128 digits.
Fig. 11. Rainfall contours after twenty days of integration with the Varsha2C-MP code at different precisions. DP : Double precision, MP-32 : 32 digits, MP-64 : 64 digits and MP-128 : 128 digits.
Fig. 12. Rainfall contours after thirty days of integration with the Varsha2C-MP code at different precisions. DP : Double precision, MP-32 : 32 digits, MP-64 : 64 digits and MP-128 : 128 digits.
4. Tests with the Lorenz model
The Lorenz system of equations [12] is very well known and led to the rapid growth of the field of chaos. The “butterfly effect”, wherein a small change in the initial conditions can lead to order one changes in the solution after some time, has had a major influence on how both the scientific community and the general public view weather and climate. The Lorenz system has also been used widely for a number of studies, including some of the recent innovative works in data assimilation [13].

The motivation for the present study was the non-repeatability of numerical results on the computer when a model run was restarted. The difference between the original computation and the one following a restart was due to the truncation of the number of digits when the results were stored. This small perturbation in the initial conditions (for the run started from the stored values) grew rapidly and made the results differ from the original single run. In the previous section, we have shown that repeatable results can be obtained using a higher number of digits. The deviation due to round-off is clearly isolated. It is natural to ask what effect precision has when there is a perturbation of the initial conditions (as would be done in ensemble simulations). Such simulations with the Varsha GCM are computationally expensive and will be carried out at a later date. Meanwhile, tests with the Lorenz model have been done and are described in this section. The Lorenz system of three coupled ODEs is
dx/dt = s(y - x)
dy/dt = rx - y - xz          (1)
dz/dt = xy - bz
Here x, y and z are the dependent variables and s, r and b are parameters. The usual values of these parameters are s = 10, r = 28 and b = 8/3. The method of solution is to start with some initial values of x, y and z and integrate the coupled ordinary differential equations using a time integration scheme. The fourth order Runge-Kutta scheme has been used here. If one considers x, y and z to be the co-ordinates of a point in three dimensions, the trajectory is of interest. This is a function of the initial conditions and the parameters s, r and b. For most initial conditions, after a certain time, the trajectory lies in a region of space which is independent of the initial conditions. This set is called an attractor. It is useful to see how the trajectories of two initial conditions which
Fig. 13. Sensitivity of the solution to small changes in the initial condition. The value of the z co-ordinate as a function of time is plotted for the control and perturbed (z(0) = z(0)_control + δ) runs. The double precision results are shown here.
differ by a small amount behave with time. The sensitivity of the solution to small changes in the initial condition, with the calculation made at double precision, is shown in figure 13. For the control run the initial values of x, y and z were 1, 1, 1 respectively. For the perturbed run, z(0) = z(0)_control + δ, and δ was set to 10^-10 for most of the studies. One can see from the figure that the solutions diverge after around t = 32. Similar computations were made doubling the number of digits successively; up to 2048 digits were used. For calculations with higher precision, the results are very similar to the double-precision results. The time at which the solutions (control and perturbed runs) diverge by an amount of 1.0 x 10^-2 was found to be 31.89 for calculations at all the precision levels (from double precision up to 2048 digits). It is to be noted that the time step is 0.01. This is not to say that precision has no effect on the solution. From table 1 one can see that there are changes in the values of the solution computed with double precision as compared to the 2048 digit computation at t = 30 and t = 40. The nature of the solution (where chaos starts) does not change with the precision. If δ is decreased, the time at which there is significant departure increases.
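For reference, a compact double-precision version of the twin experiment described above is sketched below; it assumes the perturbation δ is applied to z(0) and the divergence is measured on the z co-ordinate (the component plotted in Fig. 13), and it uses the stated RK4 scheme, time step 0.01 and threshold 1.0 x 10^-2.

#include <cmath>
#include <cstdio>

// Twin-run Lorenz experiment: integrate from (1,1,1) and from a state
// with z(0) perturbed by delta, using fourth-order Runge-Kutta with
// dt = 0.01, and report when the z trajectories first differ by 1.0e-2.
struct State { double x, y, z; };

static State rhs(const State& s) {
    const double sigma = 10.0, r = 28.0, b = 8.0 / 3.0;
    return { sigma * (s.y - s.x), r * s.x - s.y - s.x * s.z, s.x * s.y - b * s.z };
}

static State rk4_step(const State& s, double dt) {
    auto shift = [](const State& a, const State& k, double f) {
        return State{ a.x + f * k.x, a.y + f * k.y, a.z + f * k.z };
    };
    State k1 = rhs(s);
    State k2 = rhs(shift(s, k1, 0.5 * dt));
    State k3 = rhs(shift(s, k2, 0.5 * dt));
    State k4 = rhs(shift(s, k3, dt));
    return State{ s.x + dt / 6.0 * (k1.x + 2.0 * k2.x + 2.0 * k3.x + k4.x),
                  s.y + dt / 6.0 * (k1.y + 2.0 * k2.y + 2.0 * k3.y + k4.y),
                  s.z + dt / 6.0 * (k1.z + 2.0 * k2.z + 2.0 * k3.z + k4.z) };
}

int main() {
    const double dt = 0.01, delta = 1.0e-10, threshold = 1.0e-2;
    State control{1.0, 1.0, 1.0};
    State perturbed{1.0, 1.0, 1.0 + delta};
    for (int step = 1; step <= 5000; ++step) {              // integrate to t = 50
        control = rk4_step(control, dt);
        perturbed = rk4_step(perturbed, dt);
        if (std::fabs(control.z - perturbed.z) > threshold) {
            std::printf("trajectories diverge by %.0e at t = %.2f\n",
                        threshold, step * dt);
            return 0;
        }
    }
    std::printf("no divergence beyond the threshold up to t = 50\n");
    return 0;
}

Because double precision already resolves a perturbation of 10^-10 comfortably, repeating such a run at higher precision changes the divergence time very little, which is the behaviour reported in the text.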
Table 1. Values of the computed solution at t = 20.000, 30.000 and 40.000 for precisions ranging from double precision up to 2048 digits.

t = 20.000   13.47290571339619
t = 30.000   9.866859213550123
t = 40.000   -8.743318752850055  -8.534596463420804  -8.534596443596334  -8.534596443596334
             -8.534596443596334  -8.534596443596334  -8.534596443596334  -8.534596443596334
5. Discussion and conclusions

A complete atmospheric GCM, Varsha2C-MP, with the capability of performing calculations with any desired number of digits (within memory and CPU time limitations), has been developed at NAL. This code is a useful tool to study the effects that round-off errors can have on the computations and can be used for integrations where high precision is required. Computations with the Varsha2C-MP code clearly show that by using a higher number of digits, the spread of round-off errors can be minimized. It takes 64-digit precision for one month simulations to be done with repeatable results, both across different processor types and numbers of processors.

The implications of this study are as follows. Round-off errors cannot be ignored in long-term GCM simulations (such as seasonal runs), as they can cause significant differences in the computed atmospheric fields, especially rainfall, depending on the processor type and number of processors used in a parallel machine. The deviations in the numerical solution due to round-off errors, while significant, are not like the deviations seen in chaotic systems. For the simple Lorenz model, precision has a minor effect but does not change the nature of the solution. For complex systems one must be careful to distinguish between the intrinsic chaotic behaviour of the system and deviations due to round-off errors. This study has demonstrated that multi-precision computing provides a means of distinguishing between the two.
Acknowledgments

This work was carried out under the Government of India's NMITLI project. The financial support and technical inputs provided by the NMITLI cell, CSIR, are gratefully acknowledged. The authors acknowledge the encouragement of Dr. A. R. Upadhya, Director, NAL. It is a pleasure to ac-
knowledge Prof. R. Narasimha for his support and advice throughout this programme and critical suggestions regarding the manuscript. The authors further wish to thank all the colleagues at the Flosolver Unit, NAL, who have helped in various ways.

References

1. E. Kalnay, J. Sela, K. Campana, B. K. Basu, M. Schwarzkopf, P. Long, M. Kaplan and J. Alpert, National Centers for Environmental Prediction, Washington DC, USA, (1988).
2. U. N. Sinha, V. R. Sarasamma, Rajalakshmy S., K. R. Subramanian, P. V. R. Bharadwaj, C. S. Chandrashekar, T. N. Venkatesh, R. Sunder, B. K. Basu, S. Gadgil, and A. Raju, Current Science, 67, No. 3, 178-184, (1994).
3. R. S. Nanjundiah and U. N. Sinha, Current Science, 76, 1114-1116, (1999).
4. U. N. Sinha, T. N. Venkatesh, V. Y. Mudkavi, A. S. Vasudeva Murthy, R. S. Nanjundiah, V. R. Sarasamma, Rajalakshmy S., K. Bhagyalakshmi, A. K. Verma, C. K. Sreelekha, and K. L. Resmi, NAL PD FS 0514, National Aerospace Laboratories, Bangalore, India (2005).
5. S. Varghese, A. S. Vasudeva Murthy and R. Narasimha, Journal of the Atmospheric Sciences, 60, 2869-2886, (2003).
6. K. G. Rao and R. Narasimha, Journal of Fluid Mechanics, 547, 115-135, (2006).
7. http://crd.lbl.gov/~dhbailey/dhbpapers/arprec.pdf
8. U. N. Sinha, T. N. Venkatesh, V. Y. Mudkavi, A. S. Vasudeva Murthy, R. S. Nanjundiah, V. R. Sarasamma, Rajalakshmy S., K. Bhagyalakshmi, A. K. Verma, C. K. Sreelekha, and K. L. Resmi, NAL PD FS 0608, April (2006).
9. U. N. Sinha, M. D. Deshpande and V. R. Sarasamma, Supercomputer, 6, No. 4, (1989).
10. T. N. Venkatesh, U. N. Sinha and R. S. Nanjundiah, in "Developments in Teracomputing", World Scientific, Proceedings of the Ninth ECMWF Workshop on the Use of High Performance Computing in Meteorology held at Reading, England, November 2000, 62-72, (2001).
11. Y. He and C. H. Q. Ding, Proceedings of the Ninth ECMWF Workshop on the Use of HPC in Meteorology, World Scientific, 296-317, (2001).
12. E. N. Lorenz, Journal of the Atmospheric Sciences, 20, 130-141, (1963).
13. E. Kalnay, M. Pena, S.-C. Yang, and M. Cai, Proc. ECMWF Seminar on Predictability, Reading, U.K., (2002).
PANEL EXPERIENCE ON USING HIGH PERFORMANCE COMPUTING IN METEOROLOGY

GEORGE MOZDZYNSKI
European Centre for Medium-Range Weather Forecasts, Shinfield Park, Reading, Berkshire, RG2 9AX, U.K.
E-mail: [email protected]
As has become customary at these workshops, the final session was dedicated to a panel where brief statements raising fundamental or controversial issues were made to open the discussions.
PetaFlop computers are coming, but are we ready to use them? “We have some major problems, a lot of codes we use today in the scientific community are ten to twenty years old; and we know it takes ten years or so of development before new codes become fully productive; so the codes that are going to run on petaflop machines are these existing codes and not new ones. This is a train wreck coming; we are not (at least initially) going to be able to exploit the full power of petaflop machines; with all the resolution you really want; with all the effects that you know to be important; using accurate solution algorithms to model the full system, the whole planet, a full aircraft, etc. So we need to find ways to make the whole development process more efficient, better tools, better organisation, and access to computers as part of the development. There seems to be very little direction to look at all of the activity as an organic whole, requiring computers, codes, validation and users who can interact very well and learn from the data. It’s all very much piecemeal. Large funding authorities in the USA as an example are making a big effort to install new supercomputers but not much effort is going into developing software. To some extent we are going to waste a raw opportunity.”
“I agree this is a software issue, the hardware is coming, but all the money is spent on the hardware and not much money is left over for software. In Europe, at the start of procurements there is a lot of talk about supporting software efforts, but by the time we get to purchasing the machine the money is so tight that it all goes into hardware and the software effort gets lost.” “For the NWP community, looking at how the NWP models were developed over the past forty years, this community was always trying to use the capability that already existed. Meteorological centres couldn’t really evolve faster than they did; they were strongly bound by developments in science, technology and
industry. There is a path that is pre-described and they have to follow that. Using petaflop computers if they are too early doesn’t help too much.”
“I disagree, there are plenty of areas where we are constrained by the science, but we know very well how to run models with resolutions that are finer than a 25 km grid. This has been done already in limited area models today at 10 km, so I am quite convinced that we could use a petaflop computer to run a 10 km model with the physics and dynamics we use today for our 25 km model, if it was there and having a moderate number of processors. So I am quite convinced that we are NOT constrained by the science when it comes to the resolution.” “How many of these meetings are we going to have before one of us will have access to a petaflop computer? Maybe it is a little early to be discussing such issues.”
“3, 4, maybe 5 meetings.” “I guess the question is whether we need petaflop computers for a single application, and I suspect we don’t, even if we increase our resolution to 10 km.” “No, I was thinking of utilising a petaflop computer effectively across our range of applications, with a high resolution forecast at 10 km, and an ensemble forecast workload at 20 km or 25 km.”
“I think we can use a petaflop computer. What we have looked at is the extent to which we can run a complicated convective situation, such as where a supercell will arise over the USA; it is an ensemble, but you want to run it with a grid of a few hundred metres over large domains, and run a number of these to get the spread. We have also done timings for runs of non-hydrostatic models at high resolution of 1 or 2 km to see how well we could simulate a time history of the Madden-Julian oscillation over a large part of the Pacific. The question that we have studied is, can you get as much out of a parameterized convection or do you need the real convection with things like slope downdrafts and so on? A tentative answer is that you get a better answer with the 100 to 500 m grid over large domains. So we would want to have a petaflop computer and have real applications that we would at least like to run.” “I think that compared to almost all other computational science/engineering communities the weather community is pretty mature. You have somebody who understands what the value is, they’re putting money into it, life isn’t perfect, you don’t have all the money you want, but you have a proud record, a steady performance, you are in an evolutionary mode, you are improving things. In contrast, there are whole communities out there where this is a new paradigm of using big-scale computing to solve their problems. So they have a huge start up cost, getting people to understand the value, sponsors to put money in, building
up a community of people who are competent to develop large codes to solve their problems, and so forth, and they have a more daunting challenge.”
“I think there are more communities than that which are in good shape and have single applications that would scale to petascale levels. I’m thinking of computational fluid dynamics, flow over whole aircraft, aero-acoustics, combustion, materials, computational cosmology. They have the scientific objectives which require petascale computing and I believe the effort that is required is on the development of software.” “I agree that there are probably at least 15 to 20 suites of applications in all the areas you mentioned that have the potential to make use of petascale computing relatively quickly.” “If a tenth of the money that was put into grid computing went into this area then we could make a real effort.” “Will we be able to rework our codes to resolve imbalance as we scale to petascale systems with the large number of processors that will be required?” “This same concern was raised when we were trying to get to a sustained Gflop. Well, we got through that ‘sound barrier’. There will always be barriers that people say you cannot go through.”
Technologies for future supercomputers

The following statements introduced this topic: Can we utilize hardware accelerators, e.g. Cell, FPGAs etc.? Will future processors all have vector capability? How do we deal with energy issues?

“We looked at parallel computing in the 1980’s when we could see that MPP systems were coming along. I remember a lot of people at that time were very sceptical that the ‘Killer Micros’ would take over, especially when you looked at the programming models that were available at that time. Well, if we look at it today, they did take over; not micros, but reasonably powerful systems.” “Accelerators are not new; floating point accelerators were around in the 1970’s, but they have never been widely used, the main problem being the programmability of these systems. Unless something can be done to make this simpler they will not be used in this marketplace. I don’t think that providing a set of libraries that you can call to use these accelerators is the answer. Unless this changes, accelerators will remain a niche market.”
“Scalar processors already have SSE instructions with a vector length 2 (64 bit) or 4 (32 bit); will this naturally evolve to provide a general vector capability to speed up our codes?” “The SSE instructions were provided to speed up graphics, and are unlikely to be extended further to be more generally used. Rather than looking at the CPU side it would probably be more productive to speed up memory access, which is the key for performance on vector systems.” “The point really comes down to where the memory is located, and whether a memory hierarchy is used. Over the years we have got used to a flat memory on vector systems, so programming on a system with a memory hierarchy would be a nightmare.” “The gaming industry has highly skilled developers; they have time and money to invest in technologies that would push the hardware more towards accelerators rather than hardware that we would like. So my question would be which kind of games would we like them to produce in order to get the hardware that we would need?” “We had a guest speaker two years ago at this workshop who said that we should be more proactive in telling the hardware vendors what we want. The alternative is to leave all the decision making to them and then be surprised that we have to program a future cell-like processor because that may be the only hardware that would be cost effective.” “If we installed our supercomputers in Alaska we could make huge savings in the costs of cooling of these systems. Alternatively, couldn’t we direct the heat generated by our computers in winter to local housing for a nominal charge?” “We actually did a study at ECMWF on something like this. The problem is like British Rail not being able to run trains because of the wrong kind of snow on the tracks; we have the wrong kind of heat. It would cost more to convert this heat into a form that could be used for local housing. As for Alaska, yes it’s cold but you also have to consider humidity.” “This is one area where the marketplace is playing a role. Vendors are finding that sites don’t want to cope with the MWs, the power bill, the extra 40% of energy it takes to extract the heat from the computer halls. I suspect it will get worse for a while because nobody has thought about power efficient memory; they are only beginning to think about power management of chips, to shut down rather than tolerate the leakage. There are a number of proposals, one of them being to standardise on the voltage (5V) and to put the power supplies at the bottom of the rack; well, this would give a 40% saving.”
“The first predictions for a large US government supercomputer procurement were 50 MW; the vendor was told to go back and try again.” “It is unlikely that large centres in six years’ time will have 20 MW facilities; their funding authorities will not allow it, and also the energy costs would then be greater than the capital costs of the supercomputer over its lifetime.” “Could there be savings by locating supercomputers next to power stations, to save on transmission costs?” “Commenting on the above proposal, this is already being done by some large organisations, by placing their large computer farms, with very little human involvement, close to hydro-electric power facilities.” “Having computers located hundreds of miles away from our users is quite acceptable; our primary computer is located 150 miles away and our backup 10 miles away.”
Do we need Fortran 2003 today?
“If we are going to use Fortran in the future then there is a lot of interest in Fortran 2003. Areas of interest are allocatable components of derived types, dynamic type allocation, and improved integration with the host operating system, such as for environment variables and access to command line arguments. But what is stopping us from using these features is that we would have to be confident that all major hardware vendors support such features before we start using them.” “To my knowledge there is no fully compliant Fortran 2003 compiler.” “As a compiler vendor we asked users whether they wanted Fortran 2003; most replied that they were happy with Fortran 95 and saw no need for Fortran 2003. When Fortran 90 first appeared, a Fortran 77 do loop ran 5 times faster than the equivalent in Fortran 90 array syntax, a situation we absolutely want to avoid in the future.” “We had a similar problem with MPI-2, where most supercomputer centres requested MPI-2 but didn’t make it a mandatory requirement. The consequence of this was that we as a vendor only implemented the main features that users really wanted, and a full implementation got deferred to a future time.” “But if we made MPI-2 a mandatory requirement we could find ourselves in a similar situation to Fortran 2003, and have to exclude vendors due to a lack of a full implementation.”
“Could we not propose a date to vendors for full Fortran 2003 compliance?”
“Yes, it will be 2013, given past experience.”
Our codes have hundreds of person-years of development invested in them. Can we afford (not) to re-write our applications in new languages?
Proposed languages for HPCS: Chapel, Fortress and X10.
“None of you is going to use a compiler that is only available on one hardware platform, with no assurance that the hardware will exist in five years’ time. The community is starting to listen to this message and is doing what it can to combine these compilers and take the best features of all of them. On the other hand, unless we try to address the parallelisation issue at a higher level of abstraction, the situation is going to get worse as we go forward. So how do we get new languages to deal with real codes, which is the only way we can make progress? This is a real problem. No one wants to be the person who stands up and works hard at something that is not going to benefit them immediately. The only solution we have come up with is to encourage some developers of large codes to use these new languages on new hardware, give them additional funding, and provide some extra people to try this, in a way that will not impede the progress of these codes.”
“We shouldn’t be rewriting our codes, but should put more effort into compiler development, and provide guidance to compilers in the form of directives for the features that are needed by the NWP community.”
“We should consider rewriting our applications in Coarray Fortran.”
“Who knows anybody who has the resources to do this?”
“It was done for the VARSHA GCM.”
“It doesn’t make any sense to move away from standard Fortran.”
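For reference, the Coarray Fortran style mentioned above can be sketched as follows. This is a minimal, illustrative example only, not taken from the VARSHA GCM or any other code discussed: each image owns a slice of a field, and halo values are read from neighbouring images with ordinary array syntax plus a codimension selector, so communication is expressed in the language itself rather than through an MPI library layer.

   program coarray_halo
      implicit none
      integer, parameter :: n = 100
      real    :: u(0:n+1)[*]          ! local slice plus halo points; one copy per image
      integer :: me, left, right

      me    = this_image()
      left  = merge(num_images(), me-1, me == 1)    ! periodic neighbours
      right = merge(1, me+1, me == num_images())

      u(1:n) = real(me)               ! fill the interior with this image's rank
      sync all                        ! make remote data visible before reading it
      u(0)   = u(n)[left]             ! pull halo values from the neighbouring images
      u(n+1) = u(1)[right]
      sync all

      if (me == 1) print *, 'image 1 halo values:', u(0), u(n+1)
   end program coarray_halo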
How do we deal with the massive amounts of data created?
The roundtable discussion and workshop closed before this item could be discussed; an item for the next workshop in Oct/Nov 2008?
LIST OF PARTICIPANTS
James Abeles
IBM 3051 Brownstone Ct, Burtonsville, MD 20866, USA jabeles@us.ibm.com
Mike Ashworth
CCLRC Daresbury Laboratory Warrington, WA4 4AD, United Kingdom
[email protected]
Harri Auvinen
Lappeenranta University of Technology PO Box 20, FIN-53851, Lappeenranta, Finland
[email protected]
Saulo Barros
Universidade de São Paulo (Instituto de Matemática e Estatística) R. do Matão, 1010, 05508-090 São Paulo, Brazil saulo@ime.usp.br
Toine Beckers
DataDirect Networks Inc Callistusstr. 10, NL 6467 CD, Kerkrade, The Netherlands tbeckers@datadirectnet.com
Tom Bettge
National Center for Atmospheric Research 1850 Table Mesa Drive, Boulder, CO 80305, USA bettge@ucar.edu
Joachim Biercamp
DKRZ (Deutsches Klimarechenzentrum) Bundesstraße 55, D-20146 Hamburg, Germany
[email protected]
José Luis Blanco
Instituto Nacional de Meteorologia (INM)
Apartado de Correos 285, 28071 Madrid, Spain,
[email protected]
David Blaskovich
IBM 519 Crocker Avenue, Pacific Grove, CA 93950, USA
[email protected]
Richard Bosworth
EUMETSAT Am Kavalleriesand 31, D-64295 Darmstadt, Germany
[email protected]
Reinhard Budich
Max Planck Institute for Meteorology Bundesstr. 53, D-20146 Hamburg, Germany reinhard.budich@zmaw.de
David Burridge
World Meteorological Organisation c/o Manderley, Chapel Drive, Beech Hill, Reading RG7 2BH, United Kingdom d.m.burridge@btinternet.com
Ilene Carpenter
Silicon Graphics Inc 2750 Blue Water Road, Eagan, MN 55121, USA ilene@sgi.com
Bob Carruthers
Supercomputer Solutions Ltd 91 Middle Deal Road, Deal, Kent, CT14 9RZ, United Kingdom
[email protected]
Zoe Chaplin
University of Manchester Kilburn Building, Oxford Road, Manchester, M13 9PL, United Kingdom zoe.chaplin@manchester.ac.uk
Mark Cheeseman
Swiss National Supercomputing Centre Galleria 2, Via Cantonale, 6928 Manno, Switzerland
[email protected]
Gerardo Cisneros
SGI Av. Vasco de Quiroga 3000, Col. Santa Fe, 01210 Mexico, D.F., Mexico gerardo@sgi.com
Thomas Clune
NASA/Goddard Space Flight Center Code 6103, B28-W135B, Greenbelt, MD 20771, USA thomas.l.clune@nasa.gov
James Coomer
Sun Microsystems City Gate, Phase 1, Cross Street, Manchester, M33 7JR, United Kingdom james.coomer@sun.com
Luca Corbeil
Environment Canada 2121 Trans-Canada Highway, Dorval, Quebec, Canada
[email protected]
Roland Cotariu
National Meteorological Administration Sos. Bucuresti - Ploiesti 97, 013686 Bucharest, Romania
[email protected]
Carlo Dessy
Consorzio SAR Sardegna viale Porto Torres, 119, 07100 Sassari, Italy dessy@sar.sardegna.it
Thomas Diehl
GEST/UMBC, NASA GSFC Mail Code 613.3, Greenbelt, MD 20771, USA
[email protected]
Jean-François Estrade
Météo-France 42, av. G. Coriolis, 31057 Toulouse Cedex, France jean-francois.estrade@meteo.fr
Luca Fadda
Consorzio SAR Sardegna viale Porto Torres, 119, 07100 Sassari, Italy
[email protected]
Juha Fagerholm
CSC, PO Box 405 (Keilaranta 14) FI-02101 Espoo, Finland
[email protected]
Torgny Faxén
National Supercomputer Center Linköping University, S-581 83 Linköping, Sweden
[email protected]
Rudi Fischer
NEC High Performance Computing Europe Prinzenallee 11, D-40549 Düsseldorf, Germany
[email protected]
Rupert Ford
University of Manchester Room IT405, Kilburn Building, Oxford Road, Manchester, M13 9PL, United Kingdom, rupert.ford@manchester.ac.uk
Marie-Alice Foujols
IPSL Case 101, UPMC, 4 place Jussieu, 75272 Paris Cedex 5, France
[email protected]
Toshiyuki Furui
NEC 7-1, Shiba 5-chome, Minato-ku, Tokyo 108-8001, Japan
[email protected]
Philippe Gire
NEC High Performance Computing Europe "Le Saturne", 3, Parc Ariane, 78284 Guyancourt Cedex, France pgire@hpce.nec.com
Mark Govett
NOAA Earth System Research Laboratory 325 Broadway, Boulder, CO 80305, USA mark.w.govett@noaa.gov
Don Grice
IBM 2455 South Road, Poughkeepsie, NY, USA
[email protected]
John Hague
IBM Room 082, ECMWF, Shinfield Park, Reading RG2 9AX, United Kingdom ibj@ecmwf.int
Richard Hodur
Naval Research Laboratory 7 Grace Hopper Avenue, Monterey, California, 93943-5502, USA
[email protected]
Peter Johnsen
Cray Inc. 1340 Northland Drive, Mendota Heights, MN 55120, USA pjj@cray.com
Hans Joraandstad
Sun Microsystems 2, Rue Jargonnant, Geneva, CH-1207, Switzerland
[email protected]
Li Juan
National Meteorological Information Center 46 Baishiqiao Rd, Beijing, 100081, PR of China
Henry Juang
NOAA National Weather Service 5200 Auth Road, Room 207, Camp Springs, MD 20746-4304, USA
[email protected]
Dave Jursik
IBM 15300 SW Koll Parkway, MS: Wi12-01 Mobility, Beaverton, OR 97006, USA
[email protected]
Tarmo Kaldma
Estonian Meteorological and Hydrological Institute Toompuiestee 24, Tallinn, 10149, Estonia
[email protected]
Tuomo Kauranne
Lappeenranta University of Technology PO Box 20, FIN-53851, Lappeenranta, Finland
[email protected]
Al Kellie
National Center for Atmospheric Research 1850 Table Mesa Drive, Boulder, CO 80305, USA kellie@ucar.edu
Hee-Sik Kim
Cray Korea/ESRC (Earth System Research Center) c/o Supercomputer Center, KMA, 460-18 Shindaebang-dong, Dongjak-gu, Seoul 156-720, South Korea hskim@cray.com
Nam-Ouk Kim
Korea Meteorological Administration 460-18 Sindaebang-dong, Dongjak-gu, Seoul 156-720, Republic of Korea wyang@cray.com
Seong-jin Kim
Korea Meteorological Administration 460-18 Sindaebang-dong, Dongjak-gu, Seoul 156-720, Republic of Korea
[email protected]
Luis Kornblueh
Max-Planck-Institute for Meteorology Bundesstr. 53, D-20146 Hamburg, Germany
[email protected]
William Kramer
NERSC/LBNL 1 Cyclotron Road, MS 50B-4230, Berkeley, CA 94720-8139, USA kramer@nersc.gov
Elisabeth Krenzien
Deutscher Wetterdienst (DWD) Kaiserleistrasse 42, 63067 Offenbach am Main, Germany elisabeth.krenzien
[email protected]
Ng Kwok-leung
The Hong Kong Observatory 134A Nathan Road, Kowloon, Hong Kong
[email protected]
Leif Laursen
DMI Lyngbyvej 100, DK-2100 Copenhagen Ø, Denmark
[email protected]
Christopher Lazou
HiPerCom Consultants Ltd 10 Western Road, London, N2 9HX, United Kingdom chris@lazou.demon.co.uk
Vivian Lee
Environment Canada 2121, Route Transcanadienne, Dorval, Quebec, Canada vivian.lee@ec.gc.ca
Richard Loft
National Center for Atmospheric Research 1850 Table Mesa Drive, Boulder, CO 80305, USA
[email protected]
Alexander MacDonald
NOAA Earth System Research Laboratory 325 Broadway, RESRL, Boulder, CO 80305-3328, USA alexander.e.macdonald@noaa.gov
Andy Mason
Cray UK Limited 2 Brewery Court, High Street, Theale, Reading RG7 5AH, United Kingdom
[email protected]
Dirk Merten
Linux Networx Fraunhofer Institute, Kaiserslautern, Germany merten@itwm.fhg.de
Eduardo Monreal
Instituto Nacional de Meteorologia (INM), Apartado de Correos 285, 28071 Madrid, Spain emonreal@inm.es
Dave Norton
The Portland Group 2351 Wagon Train Trail, South Lake Tahoe, CA, USA norton@hpfa.com
Per Nyberg
Cray Inc. 3608 St. Charles Blvd., Suite 23, Kirkland, Quebec H9H 3C3, Canada nyberg@cray.com
Motoi Okuda
Fujitsu Limited 9-3 Nakase 1-chome, Mihama-ku, Chiba City, Chiba 261-8588, Japan
[email protected]
Mike O'Neill
NEC Pinewood, Chineham Business Park, Basingstoke, Hants RG24 8AL, United Kingdom
[email protected]
Stephen Oxley
Met Office Fitzroy Road, Exeter, EX1 3PB, United Kingdom stephen.oxley@metoffice.gov.uk
Jairo Panetta
INPE/CPTEC Rod. Presidente Dutra, Km 39, 12630-000 Cachoeira Paulista, SP, Brazil
[email protected]
Nikita Panov
Intel Lavrenteva st. 6/1, Novosibirsk, Russia
[email protected]
Hyei-Sun Park
Central Research Institute of Electric Power Industry (CRIEPI) 1646 Abiko, Abiko-shi, Chiba-ken 270-1194, Japan
[email protected]
David Parks
NEC America 20 Mahogany Lane, Colorado Springs, CO 80906-7901, USA david.parks@necam.com
Alain Patoine
Meteorological Service of Canada 2121 Trans-Canada Highway, Dorval, Quebec, Canada
[email protected]
Kim Petersen
IBM Nymøllevej 91, DK-2800, Denmark
[email protected]
Eric Pitcher
Linux Networx 14944 Pony Express Road, Salt Lake City, Utah, 84065, USA
[email protected]
Christoph Pospiech
IBM Forststrasse 21, D-01099 Dresden, Germany
[email protected]
Douglass Post
DoD High Performance Computing Modernization Program 1010 North Glebe Road, Suite 510, Arlington, VA 22201, USA
[email protected]
Graham Riley
University of Manchester Room IT405, Kilburn Building, Oxford Road, Manchester, M13 9PL, United Kingdom graham.riley@manchester.ac.uk
Anca Ristici
National Meteorological Administration Sos. Bucuresti - Ploiesti 97, 013686 Bucharest, Romania ristici@meteo.inmh.ro
Mathis Rosenhauer
DKRZ Bundesstr. 55, D-20146 Hamburg, Germany
[email protected]
Ulrich Schattler
Deutscher Wetterdienst (DWD) Kaiserleistrasse 42, 63067, Offenbach am Main, Germany
[email protected]
Thomas Schoenemeyer
NEC High Performance Computing Europe GmbH Prinzenallee 11, D-40549 Düsseldorf, Germany tschoenemeyer@hpce.nec.com
Paul Selwood
Met Office Fitzroy Road, Exeter, EX1 3PB, United Kingdom paul.selwood@metoffice.gov.uk
Roddy Sharp
Met Office Fitzroy Road, Exeter, EX1 3PB, United Kingdom rodsharp@metoffice.gov.uk
Johan Silen
Finnish Meteorological Institute (FMI) Vuorikatu 24, PO Box 503, FIN-00101, Helsinki, Finland
[email protected]
Niko Sokka
Finnish Meteorological Institute (FMI) Vuorikatu 24, PO Box 503, FIN-00101, Helsinki, Finland niko.sokka@fmi.fi
Henry Strauss
HP GmbH Einsteinring 6, 85609 Dornach, Germany henry.strauss@hp.com
Keiko Takahashi
The Earth Simulator Center/JAMSTEC 3173-25 Showa-machi, Kanazawa-ku, Yokohama, Kanagawa 236-0001, Japan takahasi@jamstec.go.jp
Claude Talandier
CNRS LOCEAN-IPSL Tour 45-55, 4ème, 4, place Jussieu, 75252 Paris, France claude.talandier@lodyc.jussieu.fr
John Taylor
Streamline The Innovation Centre, Warwick Technology Park, Gallows Hill, Warwick CV34, United Kingdom johnt@streamline-computing.com
Ulla Thiel
Cray Europe Waldhofer Strasse 102, 69123 Heidelberg, Germany ulla@cray.com
Eckhard Tschirschnitz
Cray Computer Deutschland GmbH Wulfsdorfer Weg 66, D-22359, Hamburg, Germany eckhard@cray.com
Michel Valin
Environment Canada 2121 N. Trans Canada Hwy, Suite 500, Dorval, Quebec, Canada michel.valin@ec.gc.ca
George Vandenberghe
NOAA/National Center for Environmental Prediction 5204 Auth Road, Camp Springs, MD 20746, USA george.vandenberghe@noaa.gov
Tirupathur Venkatesh
National Aerospace Laboratories PB-1779, Kodihalli, Bangalore, 560 017, India tnv@aero.iisc.ernet.in
Ole Vignes
Norwegian Meteorological Institute PO Box 43, Blindern, NO-0313 Oslo, Norway
[email protected]
John Walsh
NEC
Pinewood, Chineham Business Park, Basingstoke, Hants RG24 8AL, United Kingdom jwalsh@hpce.nec.com
Henning Weber
Deutscher Wetterdienst (DWD) Kaiserleistrasse 42, 63067 Offenbach am Main, Germany henning.weber@dwd.de
Phil Webster
NASA/Goddard Space Flight Center Code 600, Goddard Space Flight Center, Greenbelt, Maryland 20771, USA
[email protected]
Shen Wenhai
National Meteorological Information Center 46 Baishiqiao Rd, Beijing, 100081, PR of China.
Eoin Whelan
Met Eireann Glasnevin Hill, Dublin 9, Ireland
[email protected]
Tomas Wilhelmsson
SMHI SE-601 76, Norrköping, Sweden
[email protected]
Brice Womack
NASA Goddard/NGC TASC GSFC Code 610.3, Bldg 28 Rm S238, Greenbelt, MD 20771, USA btwomack@pop900
Zong Xiang
China Meteorological Administration (CMA) 46 Baishiqiao Rd, Beijing, 100081, PR of China zongmy@cma.gov.cn
ECMWF PARTICIPANTS
Erick Andersson - Head, Data Assimilation Section
Sylvia Baylis - Head, Computer Operations Section
Anton Beljaars - Head, Physical Aspects Section
Horst Böttger - Head, Meteorological Division
Philippe Bougeault - Head, Research Department
Paul Burton - Numerical Aspects Section
Jens Daabeck - Head, Graphics Section
Richard Fisher - Head, Servers & Desktops Section
Rémy Giraud - Head, Networking & Security Section
Mats Hamrud - Data Assimilation Section
Alfred Hofstadler - Head, Meteorological Applications Section
Mariano Hortal - Head, Numerical Aspects Section
Lars Isaksen - Data Assimilation Section
Peter Janssen - Head, Ocean Waves Section
Dominique Marbouty - Director
Martin Miller - Head, Model Division
Stuart Mitchell - Servers & Desktops Section
Umberto Modigliani - Head, User Support Section
George Mozdzynski - Systems Software Section
Tim Palmer - Head, Probabilistic Forecasting & Diagnostics Division
Pam Prior - User Support Section
David Richardson - Head, Meteorological Operations Section
Sami Saarinen - Satellite Data Section
Deborah Salmond - Numerical Aspects Section
Adrian Simmons - Head, Data Division
Neil Storer - Head, Systems Software Section
Jean-Noël Thépaut - Head, Satellite Data Section
Yannick Trémolet - Data Assimilation Section
Saki Uppala - ERA Project Leader
Isabella Weger - Head, Computer Division
Walter Zwieflhofer - Head, Operations Department